
[SOLVED] FreeBSD LA at traffic above 900 Mbit/s: LA grows when traffic hits the 1G ceiling

Update 2013-06-09

The problem disappeared completely after the move to 10G (an Intel X520-DA2 card).

 

Update 2013-02-14

 

Sorry for the monologue, but perhaps it will be useful to someone.

 

The problem is gone: the 5-minute LA stays below 2 at traffic above 900 Mbit/s, ~200K pps.

Steps taken:

 

1. Replaced all rules of the form

add 1000 pipe 100 ip from any to <subnet>

with a single rule of the form

add 1000 pipe tablearg ip from any to table(100)

Thanks, boco. (A consolidated sketch of the resulting setup follows after item 3.)

 

2. Changed the pipe configuration from

pipe config bw 2048k queue <bw * 3 / 8>k

to

pipe config bw 2048k

It was this change in particular that brought the significant LA reduction at the moment the tariff speeds were increased.

 

3. General details:

# ipfw nat show | wc -l
     16
# ipfw show | wc -l
     27
# ipfw list
00050 skipto 300 ip from any to any in recv lan
00051 skipto 400 ip from any to any out xmit wan
00052 skipto 500 ip from any to any in recv wan
00053 skipto 600 ip from any to any out xmit lan
00054 allow ip from any to any in recv lo0
00055 allow ip from any to any out xmit lo0
...
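A minimal sketch pulling items 1 and 2 together. The pipe number 936, table 100 and the 2048k tariff are placeholders reused from examples later in the thread, not the actual production values:

# Old style: per-subnet rules and an explicit queue sized as bw * 3 / 8
# (2048k * 3 / 8 = 768 KB):
#   ipfw pipe 936 config bw 2048k queue 768k
#   ipfw add 1000 pipe 936 ip from any to 192.168.10.0/29 out xmit lan
#   ... one such rule per subnet ...

# New style: pipe without an explicit queue, subnets collected in a table
# whose value is the pipe number, and a single tablearg rule:
ipfw pipe 936 config bw 2048k
ipfw table 100 add 192.168.10.0/29 936
ipfw table 100 add 192.168.20.0/30 936
ipfw add 1000 pipe tablearg ip from any to 'table(100)' out xmit lan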

 

 

The original problem:

Hello. We are seeing a strange situation: the moment traffic starts to exceed 900 Mbit/s, LA starts to grow. It is visible on the graph.

What we have:

 

# uname -a
FreeBSD NAT 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Sat Oct  8 16:37:12 MSD 2011     root@nat:/usr/obj/usr/src/sys/NGW20111006  amd64
# netstat -w 1 -h
           input        (Total)           output
  packets  errs idrops      bytes    packets  errs      bytes colls
     182K     0     0       139M       182K     0       147M     0
     175K     0     0       132M       175K     0       138M     0
     181K     0     0       139M       181K     0       155M     0
     183K     0     0       139M       184K     0       160M     0
     183K     0     0       139M       183K     0       155M     0
     184K     0     0       141M       185K     0       159M     0
# top -SIP
last pid: 78552;  load averages: 1.25,  0.79,  0.74
up 0+21:32:25  17:34:37
343 processes: 26 running, 224 sleeping, 1 zombie, 92 waiting
CPU 0:   0.4% user,  0.0% nice,  0.4% system, 26.7% interrupt, 72.6% idle
CPU 1:   0.4% user,  0.0% nice,  0.0% system, 23.3% interrupt, 76.3% idle
CPU 2:   0.0% user,  0.0% nice,  0.8% system, 30.8% interrupt, 68.4% idle
CPU 3:   0.4% user,  0.0% nice,  0.0% system, 26.7% interrupt, 72.9% idle
CPU 4:   0.0% user,  0.0% nice,  0.4% system, 32.7% interrupt, 66.9% idle
CPU 5:   0.0% user,  0.0% nice,  0.0% system, 25.2% interrupt, 74.8% idle
CPU 6:   0.0% user,  0.0% nice,  0.0% system, 31.2% interrupt, 68.8% idle
CPU 7:   0.0% user,  0.0% nice,  0.0% system, 42.5% interrupt, 57.5% idle
CPU 8:   0.4% user,  0.0% nice, 10.2% system,  0.0% interrupt, 89.5% idle
CPU 9:   0.4% user,  0.0% nice, 10.5% system,  0.0% interrupt, 89.1% idle
CPU 10:  0.4% user,  0.0% nice,  7.5% system,  0.0% interrupt, 92.1% idle
CPU 11:  0.0% user,  0.0% nice, 11.3% system,  0.0% interrupt, 88.7% idle
CPU 12:  7.5% user,  0.0% nice, 13.9% system,  0.0% interrupt, 78.6% idle
CPU 13:  4.9% user,  0.0% nice,  7.1% system,  0.0% interrupt, 88.0% idle
CPU 14:  0.4% user,  0.0% nice, 10.9% system,  0.0% interrupt, 88.8% idle
CPU 15:  5.3% user,  0.0% nice,  9.4% system,  0.0% interrupt, 85.3% idle
CPU 16:  4.1% user,  0.0% nice,  6.8% system,  0.0% interrupt, 89.1% idle
CPU 17: 10.5% user,  0.0% nice,  9.4% system,  0.0% interrupt, 80.1% idle
CPU 18:  1.9% user,  0.0% nice, 14.7% system,  0.0% interrupt, 83.5% idle
CPU 19:  3.0% user,  0.0% nice,  8.6% system,  0.0% interrupt, 88.3% idle
CPU 20:  0.0% user,  0.0% nice,  6.8% system,  0.0% interrupt, 93.2% idle
CPU 21:  5.6% user,  0.0% nice,  0.4% system,  0.0% interrupt, 94.0% idle
CPU 22:  0.8% user,  0.0% nice,  5.6% system,  0.0% interrupt, 93.6% idle
CPU 23:  0.4% user,  0.0% nice,  6.4% system,  0.0% interrupt, 93.2% idle
Mem: 453M Active, 144M Inact, 938M Wired, 3020K Cache, 822M Buf, 6364M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  11 root        24 171 ki31     0K   384K CPU0    0 475.8H 2032.03% idle
  12 root        93 -68    -     0K  1488K WAIT    7  25.7H 245.17% intr
   0 root        56 -68    0     0K   880K -      20 407:14 160.60% kernel
1572 nobody       1  60    0   203M   200M select 12 189:16 24.46% softflowd
# vmstat -z
...
64 Bucket:                536,        0,      789,        2,      789,      184
128 Bucket:              1048,        0,     2013,        0,     2013,     3309
...

 

Services:

 


  • NAT
  • Dummynet
  • OSPF
  • Softflow

The actual question: we are planning a move to 10G, and I need to understand what will happen once traffic goes past 1G.

[attachment: graph showing the LA growth]



Disable HT, enable isr.

 

Found a topic with a configuration very similar to mine; there, netisr was running on 4 cores out of 8.

There is also Sysoev's note on net.isr in 8.x: http://goo.gl/lpdBJ.

 

I understand that, having:

 

# sysctl -a | grep isr
net.isr.numthreads: 1
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 0
net.isr.maxthreads: 1
net.isr.direct: 1
net.isr.direct_force: 1
net.route.netisr_maxqlen: 256

I need:

 

# sysctl -a | grep isr
...
net.isr.maxthreads: 16
net.isr.direct: 0
net.isr.direct_force: 0
net.route.netisr_maxqlen: 1024
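A sketch of where these would typically be set on 8.x, assuming that maxthreads must be a boot-time tunable (the netisr.c comment quoted further down confirms this) while the dispatch-policy knobs can be changed at runtime; netisr_maxqlen is left out because I am not certain it is runtime-writable:

# /boot/loader.conf -- must be set at boot:
net.isr.maxthreads=16

# /etc/sysctl.conf -- can be changed at runtime:
net.isr.direct=0
net.isr.direct_force=0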

 

Digging further ...



> /boot/loader.conf:
> net.isr.bindthreads=4
> net.isr.maxthreads=1
> net.isr.defaultqlimit=4096
>
> /etc/sysctl.conf:
> net.isr.direct=1
> net.isr.direct_force=0

Is that it?




 

Well, probably. For now it is still set like this:

 

# sysctl -a | grep isr
net.isr.numthreads: 1
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 0
net.isr.maxthreads: 1
net.isr.direct: 1
net.isr.direct_force: 1
net.route.netisr_maxqlen: 256


Right, I looked here http://goo.gl/1hjDN and read /usr/src/sys/net/netisr.c:

 

/*-
 * Three direct dispatch policies are supported:
 *
 * - Always defer: all work is scheduled for a netisr, regardless of context.
 *   (!direct)
 *
 * - Hybrid: if the executing context allows direct dispatch, and we're
 *   running on the CPU the work would be done on, then direct dispatch if it
 *   wouldn't violate ordering constraints on the workstream.
 *   (direct && !direct_force)
 *
 * - Always direct: if the executing context allows direct dispatch, always
 *   direct dispatch.  (direct && direct_force)
 *
 * Notice that changing the global policy could lead to short periods of
 * misordered processing, but this is considered acceptable as compared to
 * the complexity of enforcing ordering during policy changes.
 */

 

So we set:

net.isr.direct: 0
net.isr.direct_force: 0

 

/*
 * Allow the administrator to limit the number of threads (CPUs) to use for
 * netisr.  We don't check netisr_maxthreads before creating the thread for
 * CPU 0, so in practice we ignore values <= 1.  This must be set at boot.
 * We will create at most one thread per CPU.
 */

 

And set:

net.isr.maxthreads: 16

 

Then reboot; or don't set maxthreads at all, in which case all cores are used (judging by the text).
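After the reboot, the result can be checked against the read-only counters shown earlier (a small check of my own, not from the original post):

# Verify how many netisr threads actually started and which dispatch policy is active:
sysctl net.isr.numthreads net.isr.maxthreads net.isr.direct net.isr.direct_force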

 

I will report back with results.


Бёрд is one of the people who actually develop FreeBSD. His advice should not be ignored.


With

 

net.isr.direct: 0
net.isr.direct_force: 0

 

or

 

net.isr.direct: 1
net.isr.direct_force: 0

 

already at this rate:

 

            input        (Total)           output
  packets  errs idrops      bytes    packets  errs      bytes colls
     224K     0     0        88M       224K     0        88M     0
     225K     0     0        89M       225K     0        89M     0
     220K     0     0        86M       221K     0        86M     0
     222K     0     0        92M       223K     0        92M     0
     221K     0     0        87M       222K     0        86M     0
     223K     0     0        87M       224K     0        87M     0

 

pings were exceeding 100 ms and getting lost. With the previous settings:

 

net.isr.direct: 1
net.isr.direct_force: 1

           input        (Total)           output
  packets  errs idrops      bytes    packets  errs      bytes colls
     375K     0     0        88M       377K     0        88M     0
     416K     0     0        89M       418K     0        89M     0
     397K     0     0        84M       399K     0        83M     0
     404K     0     0        81M       406K     0        78M     0
     349K     0     0        82M       351K     0        80M     0

Everything runs fine, although the CPUs get loaded up to 90%:

 

last pid: 98706;  load averages:  0.85,  0.52,  0.34             up 3+21:13:13  17:15:25
357 processes: 34 running, 237 sleeping, 1 zombie, 85 waiting
CPU 0:   0.0% user,  0.0% nice,  0.0% system, 88.7% interrupt, 11.3% idle
CPU 1:   0.0% user,  0.0% nice,  0.0% system, 89.1% interrupt, 10.9% idle
CPU 2:   0.0% user,  0.0% nice,  0.0% system, 86.1% interrupt, 13.9% idle
CPU 3:   0.0% user,  0.0% nice,  0.0% system, 81.2% interrupt, 18.8% idle
CPU 4:   0.0% user,  0.0% nice,  0.0% system, 87.2% interrupt, 12.8% idle
CPU 5:   0.0% user,  0.0% nice,  0.0% system, 85.0% interrupt, 15.0% idle
CPU 6:   0.0% user,  0.0% nice,  0.0% system, 89.8% interrupt, 10.2% idle
CPU 7:   0.0% user,  0.0% nice,  0.0% system, 90.2% interrupt,  9.8% idle
CPU 8:   0.0% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.6% idle
CPU 9:   0.0% user,  0.0% nice,  6.8% system,  0.0% interrupt, 93.2% idle
CPU 10:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 11:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 12:  4.5% user,  0.0% nice,  7.1% system,  0.0% interrupt, 88.3% idle
CPU 13:  7.1% user,  0.0% nice,  3.8% system,  0.0% interrupt, 89.1% idle
CPU 14: 19.5% user,  0.0% nice,  7.1% system,  0.0% interrupt, 73.4% idle
CPU 15:  2.3% user,  0.0% nice,  3.8% system,  0.0% interrupt, 94.0% idle
CPU 16: 12.0% user,  0.0% nice,  8.3% system,  0.0% interrupt, 79.7% idle
CPU 17: 16.2% user,  0.0% nice,  3.8% system,  0.0% interrupt, 80.1% idle
CPU 18:  2.6% user,  0.0% nice,  6.4% system,  0.0% interrupt, 91.0% idle
CPU 19:  0.0% user,  0.0% nice,  1.1% system,  0.0% interrupt, 98.9% idle
CPU 20:  0.0% user,  0.0% nice,  4.1% system,  0.0% interrupt, 95.9% idle
CPU 21:  0.0% user,  0.0% nice,  1.1% system,  0.0% interrupt, 98.9% idle
CPU 22:  0.0% user,  0.0% nice,  2.6% system,  0.0% interrupt, 97.4% idle
CPU 23:  0.0% user,  0.0% nice,  1.5% system,  0.0% interrupt, 98.5% idle
Mem: 297M Active, 290M Inact, 1282M Wired, 2940K Cache, 827M Buf, 6029M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  11 root        24 171 ki31     0K   384K RUN     0 2062.0 1635.94% idle
  12 root        93 -68    -     0K  1488K RUN     7 111.3H 696.09% intr
1727 nobody       1 111    0   191M   187M CPU13  13 855:02 69.97% softflowd
   0 root        56 -68    0     0K   880K -      12  29.2H 58.15% kernel

 

I tested with hping3 --flood.
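For reference, such a flood test would look roughly like this (a sketch; the target address and port are placeholders, not taken from the thread):

# Run from another host; 10.0.0.1 and port 80 are hypothetical.
# --flood sends packets as fast as possible, -S sends TCP SYN packets,
# --rand-source randomizes the source address so the load spreads across NIC queues.
hping3 --flood --rand-source -S -p 80 10.0.0.1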


Try this:

net.isr.bindthreads=24
net.isr.maxthreads=1
net.isr.direct=0
net.isr.direct_force=0

softflowd

Stop torturing the box.

 

Well, that's odd: we shut down softflowd and see:

 

last pid:  8251;  load averages:  4.93,  4.90,  3.34                     up 6+21:08:44  17:10:56
357 processes: 27 running, 238 sleeping, 1 zombie, 91 waiting
CPU:  0.0% user,  0.0% nice, 25.1% system,  7.5% interrupt, 67.5% idle
Mem: 181M Active, 540M Inact, 1012M Wired, 2932K Cache, 827M Buf, 6165M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  11 root        24 171 ki31     0K   384K CPU0    0 3658.8 1698.29% idle
   0 root        56 -68    0     0K   880K -      14  46.7H 622.22% kernel
  12 root        93 -68    -     0K  1488K WAIT    7 199.4H 179.44% intr

 

            input        (Total)           output
  packets  errs idrops      bytes    packets  errs      bytes colls
     192K     0     0       143M       193K     0       167M     0
     192K     0     0       143M       193K     0       163M     0
     193K     0     0       144M       193K     0       162M     0
     192K     0     0       143M       193K     0       161M     0
     191K     0     0       144M       192K     0       160M     0
     192K     0     0       144M       192K     0       163M     0
     194K     0     0       144M       195K     0       165M     0
     197K     0     0       146M       197K     0       161M     0
     192K     0     0       143M       192K     0       167M     0
     194K     0     0       144M       195K     0       170M     0

 

# top -SH -n 1000 | grep -v '0.00%\|idle'
last pid:  8331;  load averages:  4.88,  4.87,  3.50  up 6+21:10:30    17:12:42
354 processes: 35 running, 228 sleeping, 1 zombie, 90 waiting

Mem: 181M Active, 541M Inact, 1012M Wired, 2932K Cache, 827M Buf, 6164M Free
Swap: 4096M Total, 4096M Free


 PID USERNAME  PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   0 root      -68    0     0K   880K CPU14  14 240:31 88.09% {igb5 que}
   0 root      -68    0     0K   880K CPU11  11 236:31 81.59% {igb5 que}
   0 root      -68    0     0K   880K CPU18  18 227:08 79.79% {igb5 que}
   0 root      -68    0     0K   880K CPU23  23 236:49 76.66% {igb5 que}
   0 root      -68    0     0K   880K CPU21  21 225:27 74.17% {igb5 que}
   0 root      -68    0     0K   880K CPU19  19 231:12 72.17% {igb5 que}
   0 root      -68    0     0K   880K CPU16  16 227:30 69.09% {igb5 que}
   0 root      -68    0     0K   880K -       8 223:04 62.70% {igb5 que}
  12 root      -68    -     0K  1488K CPU6    6  17.3H 25.59% {irq298: igb4:que}
  12 root      -68    -     0K  1488K WAIT    5  17.3H 21.88% {irq297: igb4:que}
  12 root      -68    -     0K  1488K CPU0    0  17.6H 21.00% {irq292: igb4:que}
  12 root      -68    -     0K  1488K WAIT    3  17.7H 20.75% {irq295: igb4:que}
  12 root      -68    -     0K  1488K CPU4    4  16.9H 20.36% {irq296: igb4:que}
  12 root      -68    -     0K  1488K WAIT    2  17.2H 19.78% {irq294: igb4:que}
  12 root      -68    -     0K  1488K WAIT    1  16.9H 19.19% {irq293: igb4:que}
  12 root      -68    -     0K  1488K WAIT    7  17.6H 18.99% {irq299: igb4:que}
   0 root      -68    0     0K   880K -      12 931:45  7.96% {dummynet}
  12 root      -68    -     0K  1488K WAIT    7 445:42  3.47% {irq308: igb5:que}
  12 root      -68    -     0K  1488K WAIT    6 454:03  3.17% {irq307: igb5:que}
  12 root      -68    -     0K  1488K WAIT    1 463:39  2.20% {irq302: igb5:que}
  12 root      -68    -     0K  1488K WAIT    0 461:19  1.66% {irq301: igb5:que}
  12 root      -68    -     0K  1488K WAIT    3 457:43  1.46% {irq304: igb5:que}
  12 root      -68    -     0K  1488K WAIT    4 448:57  1.17% {irq305: igb5:que}
  12 root      -68    -     0K  1488K WAIT    5 442:49  1.17% {irq306: igb5:que}
  12 root      -68    -     0K  1488K WAIT    2 472:30  0.59% {irq303: igb5:que}
   0 root      -68    0     0K   880K -      15  33:07  0.10% {igb4 que}
   0 root      -68    0     0K   880K -       9  32:32  0.10% {igb4 que}

 

So it doesn't look like the problem is entirely tied to softflow.


I have identified the bottleneck: the number of ipfw rules.

The scheme (please don't scold or laugh): more than 1k pipes, one per user, because customers want to hang remote sites with different subnets on a single tariff, so pipe tablearg does not work for me.

 

We remove the pipes:

 

last pid: 36780;  load averages:  0.92,  2.47,  1.83                                            up 16+21:05:45  17:07:57
350 processes: 27 running, 231 sleeping, 1 zombie, 91 waiting
CPU 0:   0.0% user,  0.0% nice,  0.0% system, 24.4% interrupt, 75.6% idle
CPU 1:   0.0% user,  0.0% nice,  0.0% system, 17.3% interrupt, 82.7% idle
CPU 2:   0.0% user,  0.0% nice,  0.0% system, 19.5% interrupt, 80.5% idle
CPU 3:   0.0% user,  0.0% nice,  0.0% system, 18.4% interrupt, 81.6% idle
CPU 4:   0.0% user,  0.0% nice,  0.0% system, 19.5% interrupt, 80.5% idle
CPU 5:   0.0% user,  0.0% nice,  0.0% system, 24.1% interrupt, 75.9% idle
CPU 6:   0.0% user,  0.0% nice,  0.0% system, 21.8% interrupt, 78.2% idle
CPU 7:   0.0% user,  0.0% nice,  0.0% system, 23.3% interrupt, 76.7% idle
CPU 8:   0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 9:   0.0% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.6% idle
CPU 10:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 11:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 12:  0.0% user,  0.0% nice,  1.9% system,  0.0% interrupt, 98.1% idle
CPU 13:  0.0% user,  0.0% nice,  1.5% system,  0.0% interrupt, 98.5% idle
CPU 14:  0.0% user,  0.0% nice,  1.9% system,  0.0% interrupt, 98.1% idle
CPU 15:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 16:  0.0% user,  0.0% nice,  1.9% system,  0.0% interrupt, 98.1% idle
CPU 17:  0.0% user,  0.0% nice,  1.5% system,  0.0% interrupt, 98.5% idle
CPU 18:  6.8% user,  0.0% nice,  0.4% system,  0.0% interrupt, 92.9% idle
CPU 19:  9.8% user,  0.0% nice,  1.1% system,  0.0% interrupt, 89.1% idle
CPU 20:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 21:  0.0% user,  0.0% nice,  0.8% system,  0.0% interrupt, 99.2% idle
CPU 22:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 23:  0.0% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.6% idle
Mem: 244M Active, 1047M Inact, 1025M Wired, 2928K Cache, 827M Buf, 5582M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  11 root        24 171 ki31     0K   384K RUN     0 8911.2 2251.51% idle
  12 root        93 -68    -     0K  1488K WAIT    7 532.7H 167.97% intr
36746 nobody       1  56    0 76684K 71344K select 19   0:15 20.26% softflowd
   0 root        56 -68    0     0K   880K -      12  98.7H  3.42% kernel


> because customers want to hang remote sites with different subnets on a single tariff, pipe tablearg does not work for me.

Why doesn't it work?



 

Well, for example:

 

pipe 936 config bw 2048k queue 768k
add 936 pipe 936 ip from any to 192.168.10.0/29,192.168.20.0/30,1.1.1.0/29 out xmit lan

I don't see how to split this up with tablearg so that subnets with different masks end up in a single pipe.



ipfw table X add 192.168.10.0/29 936
ipfw table X add 192.168.20.0/30 936
ipfw table X add 1.1.1.0/29 936
...
pipe tablearg all from any to 'table(X)'




 

The way I understand it, each host from the table will get its own dynamic pipe at the configured speed, whereas what I need is for all three subnets in this example to share no more than the configured bandwidth.

Or am I understanding it wrong?


You're understanding it wrong: it depends on how the pipe is configured. If the pipe has no dst-ip mask, all three will fall into the same pipe; if you specify a mask, each gets its own dynamic pipe.
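In other words, a small sketch (pipe 936 and the 2048k bandwidth are placeholders reused from the example above):

# Shared pipe: no mask, so every table entry pointing at pipe 936
# competes for the same 2048k.
ipfw pipe 936 config bw 2048k

# Dynamic pipes: with a dst-ip mask, each destination address gets
# its own 2048k queue carved out of pipe 936.
ipfw pipe 936 config bw 2048k mask dst-ip 0xffffffff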



 

So in essence the number of pipes will not change, but all the rules like

 

add 936 pipe 936 ip from any to 192.168.10.0/29,192.168.20.0/30,1.1.1.0/29 out xmit lan

 

will be replaced by a single one:

 

add 1000 pipe tablearg ip from any to table(128)

 

Need to give it a try ...

