
Hello!

I need help with the following problem. I did a clean install of FreeBSD 10.1-RELEASE on a server that acts as the gateway for several subscriber VLANs, and after the usual fine-tuning I installed isc-dhcp41-server-4.1.e_9,2 from ports.

Dhcpd starts, picks up its config without problems, and subscribers get their IPs, so everything looks fine. But the console started filling up with messages like these:

Dec 26 11:39:32 border dhcpd: send_packet: No buffer space available
Dec 26 11:39:32 border dhcpd: dhcp.c:3222: Failed to send 300 byte long packet over fallback interface.

I've crawled half the Internet looking into this and found no solution anywhere; where there were promising hints, I looked closely and followed the recommendations, but nothing helped.

The thing is, two other servers with an identical config were already working fine before this one, except they run FreeBSD 9.2-RELEASE-p3 and isc-dhcp41-server-4.1.e_7,2, which I don't think matters much. They worked flawlessly, with no such messages. The sysctls are tuned identically.

The only failures in vmstat -z are these:

ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP

8 Bucket:                64,      0,      40,    3618,  186523,  11,   0
32 Bucket:              256,      0,     724,     701,    5381,  51,   0
64 Bucket:              512,      0,     324,     636,   40530, 821,   0
256 Bucket:            2048,      0,     474,     196, 2419455,   5,   0
vmem btag:               56,      0,   23298,    2901,   26493, 187,   0

, but it's like that on every box here; incidentally, it would also be nice to know how to get rid of those...

 

netstat -m
91549/9146/100695 mbufs in use (current/cache/total)
91539/4877/96416/524288 mbuf clusters in use (current/cache/total/max)
91539/4854 mbuf+clusters out of packet secondary zone in use (current/cache)
0/18/18/253759 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/75188 9k jumbo clusters in use (current/cache/total/max)
0/0/0/42293 16k jumbo clusters in use (current/cache/total/max)
206309K/12112K/218421K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile

 

Could you please tell me why these messages appear? Clients aren't complaining about getting IP addresses. If you need more information, just say so.


# sysctl kern.ipc.maxmbufmem
kern.ipc.maxmbufmem: 4157601792
# sysctl kern.ipc.nmbufs
kern.ipc.nmbufs: 3248130
# sysctl kern.ipc.nmbclusters 
kern.ipc.nmbclusters: 524288
# sysctl kern.ipc.maxsockbuf
kern.ipc.maxsockbuf: 83886080


The problem is in userspace: dhcpd doesn't have enough buffer memory.

Analyze your tuning and raise the mbuf limits so that mbufs total >> mbufs current:

91549/9146/100695 mbufs in use (current/cache/total)

 

My guess is you need to bump kern.ipc.nmbclusters up to 500000 or 600000 in /boot/loader.conf.
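For illustration, a minimal sketch of that change as a loader tunable (the exact value should come out of your own analysis, not this example):

# /boot/loader.conf
kern.ipc.nmbclusters="600000"

Then verify after reboot with sysctl kern.ipc.nmbclusters.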

Do the tuning on a VM first, so you don't turn your production box into a pumpkin.


Thanks for the reply.

At the moment I have this:

 
# sysctl kern.ipc.nmbclusters
kern.ipc.nmbclusters: 524288

91414/6782/98196/524288 mbuf clusters in use (current/cache/total/max)

It seems we're still far from the maximum. Do you think it needs to be raised even higher?

 

An excerpt of vmstat -z:

ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP

UMA Kegs:               384,      0,     116,       4,     116,   0,   0
UMA Zones:             2176,      0,     116,       0,     116,   0,   0
UMA Slabs:               80,      0,   12051,      49,   12374,   0,   0
UMA RCntSlabs:           88,      0,   49175,      10,   49175,   0,   0
UMA Hash:               256,      0,       2,      13,      10,   0,   0
4 Bucket:                32,      0,     208,    4167,   63446,   0,   0
6 Bucket:                48,      0,      44,    2529,    1829,   0,   0
8 Bucket:                64,      0,      11,    3709,  261591,  11,   0
12 Bucket:               96,      0,     241,    1932,  250783,   0,   0
16 Bucket:              128,      0,     241,    2363,  274572,   0,   0
32 Bucket:              256,      0,    1843,     752,  255130,  51,   0
64 Bucket:              512,      0,     383,    9721,   82388,4047,   0
128 Bucket:            1024,      0,     299,     545, 1230308,   0,   0
256 Bucket:            2048,      0,    1086,    2744,23868605,   5,   0
vmem btag:               56,      0,   42753,   10994,   56698, 381,   0
VM OBJECT:              256,      0,  168303,    1467, 6126047,   0,   0
RADIX NODE:             144,      0,   67535,    1963,10218765,   0,   0
MAP:                    240,      0,       3,      61,       3,   0,   0
KMAP ENTRY:             128,      0,       8,     519,       8,   0,   0
MAP ENTRY:              128,      0,   50400,    5493,16437464,   0,   0
VMSPACE:                448,      0,      50,     508,  361830,   0,   0
fakepg:                 104,      0,       0,       0,       0,   0,   0
mt_zone:               4112,      0,     363,       0,     363,   0,   0
16:                      16,      0,    5000,    4036, 1003978,   0,   0
32:                      32,      0,  572708,    6167,29031079,   0,   0
64:                      64,      0,   69807,    4345,33536399,   0,   0
128:                    128,      0,  232932,  157513,82567602,   0,   0
256:                    256,      0,   61136,    5599,5746332513,   0,   0
512:                    512,      0,    3499,   13613,  397580,   0,   0
1024:                  1024,      0,      80,     388,  513829,   0,   0
2048:                  2048,      0,     176,     496, 1330302,   0,   0
4096:                  4096,      0,     432,      96,  363627,   0,   0
8192:                  8192,      0,      50,      12,     280,   0,   0
16384:                16384,      0,      66,      10,    1173,   0,   0
32768:                32768,      0,      10,      20,    1925,   0,   0
65536:                65536,      0,      56,      17,     864,   0,   0
SLEEPQUEUE:              80,      0,     523,    1120,     523,   0,   0
64 pcpu:                  8,      0,  524463,    1617,  524463,   0,   0
Files:                   80,      0,     256,    2144, 4689135,   0,   0
rl_entry:                40,      0,     229,    2871,     229,   0,   0
TURNSTILE:              136,      0,     523,     397,     523,   0,   0
umtx pi:                 96,      0,       0,       0,       0,   0,   0
MAC labels:              40,      0,       0,       0,       0,   0,   0
PROC:                  1216,      0,      68,     166,  361855,   0,   0
THREAD:                1168,      0,     415,     107,    1355,   0,   0
cpuset:                  72,      0,     188,    1462,     283,   0,   0
audit_record:          1248,      0,       0,       0,       0,   0,   0
mbuf_packet:            256, 3248130,   91327,    6837,3300584791,   0,   0
mbuf:                   256, 3248130,     188,    5643,3271854678,   0,   0
mbuf_cluster:          2048, 524288,   98164,      32,   98164,   0,   0
mbuf_jumbo_page:       4096, 253759,       0,      77,   22856,   0,   0
mbuf_jumbo_9k:         9216,  75188,       0,       0,       0,   0,   0
mbuf_jumbo_16k:       16384,  42293,       0,       0,       0,   0,   0
mbuf_ext_refcnt:          4,      0,       0,       0,       0,   0,   0

 

And this isn't at the maximum either:

91616/12379/103995 mbufs in use (current/cache/total)

Of course, the total limit can be raised. Which variable controls that particular value (103995)?
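A pointer, based only on output already shown in this thread: in the vmstat -z excerpt above, the mbuf and mbuf_packet zones have a LIMIT of 3248130, which matches kern.ipc.nmbufs from the earlier post, and as far as I know that limit is derived from the kern.ipc.maxmbufmem tunable:

# sysctl kern.ipc.nmbufs kern.ipc.maxmbufmem
kern.ipc.nmbufs: 3248130
kern.ipc.maxmbufmem: 4157601792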

 

Oh, and I forgot to mention that the server passes up to 850 Mbit/s of traffic at peak; of course it's not like that all the time and sometimes drops to 150 Mbit/s, but that has no effect at all on the number or frequency of those dhcpd messages.

 

Also, just in case, here is my sysctl.conf, the parts that have been tuned:

dev.ix.0.enable_aim=0
dev.ix.1.enable_aim=0
dev.ix.0.fc=0
dev.ix.1.fc=0
dev.igb.0.enable_aim=0
dev.igb.1.enable_aim=0
dev.igb.0.fc=0
dev.igb.1.fc=0
dev.em.0.fc=0
dev.em.1.fc=0

net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1

net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

net.inet.ip.ttl=128

kern.ipc.nmbclusters=524288
kern.ipc.maxsockbuf=83886080

net.inet.ip.fw.one_pass=0
net.inet.ip.fw.dyn_max=65535
net.inet.ip.fw.dyn_buckets=2048
net.inet.ip.fw.dyn_syn_lifetime=10
net.inet.ip.fw.dyn_ack_lifetime=120
net.inet.ip.fw.verbose=1

net.inet.ip.dummynet.io_fast=1
net.inet.ip.dummynet.hash_size=65536
net.inet.ip.dummynet.pipe_slot_limit=4096
net.inet.ip.dummynet.pipe_byte_limit=1048576

 

loader.conf:

geom_mirror_load="YES"

net.graph.maxdata=65536
net.graph.maxalloc=65536

# Em
hw.em.rxd=4096
hw.em.txd=4096
hw.em.max_interrupt_rate=32000

# Igb
hw.igb.num_queues=5
hw.igb.lro=0
hw.igb.rxd=4096
hw.igb.txd=4096
hw.igb.max_interrupt_rate=32000

# ix
hw.ix.num_queues=5
hw.ix.lro=0
hw.ix.rxd=4096
hw.ix.txd=4096
hw.ix.rx_process_limit=4096

hw.intr_storm_threshold=19000

net.link.ifqmaxlen=10240
net.inet.tcp.syncache.hashsize=1024
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=4096


My apologies!

FreeBSD border 10.1-RELEASE FreeBSD 10.1-RELEASE #0: Thu Dec  4 12:41:14 EET 2014     root@border:/usr/obj/usr/src/sys/BORDER  amd64

real memory  = 8589934592 (8192 MB)
avail memory = 8261046272 (7878 MB)

# sysctl net.inet.raw.maxdgram
net.inet.raw.maxdgram: 9216
# sysctl net.inet.raw.recvspace
net.inet.raw.recvspace: 9216

I'll try changing these values to 16384.
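A sketch of the runtime change (the same lines would go into /etc/sysctl.conf to survive a reboot):

# sysctl net.inet.raw.maxdgram=16384
# sysctl net.inet.raw.recvspace=16384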


No, one still slipped through:

Dec 27 22:36:19 border dhcpd: send_packet: No buffer space available
Dec 27 22:36:19 border dhcpd: dhcp.c:1305: Failed to send 300 byte long packet over fallback interface.

:(

Though it seems the location is now dhcp.c:1305 rather than dhcp.c:3222.

 

Oddly enough, these messages seem to come less often now, much less often in fact; it's been 10 minutes since the last one and no new ones have appeared.


net.inet.udp.maxdgram=36864

Let's continue with this one; I don't know specifically which buffer is failing to fit your packets. It could even be the network adapter's buffer, although NIC drivers usually spam dmesg about that on their own. So you have to hunt down which of the buffers is running out :) You can raise the previous sysctls too.
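For example, applied on the fly (add the same line to /etc/sysctl.conf to persist it):

# sysctl net.inet.udp.maxdgram=36864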


Here's something else I noticed in messages that wasn't there before:

Dec 28 20:45:43 border kernel: sonewconn: pcb 0xfffff80042493188: Listen queue overflow: 16 already in queue awaiting acceptance (452 occurrences)

Changing it to 36864:

# sysctl net.inet.udp.maxdgram
net.inet.udp.maxdgram: 9216

The old messages have also been spotted in messages.

 

And the classic question: do you specifically need 4.1, or is it possible to upgrade to the current 4.3 branch?

No, it's not essential; I'll try upgrading on Monday.
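A possible upgrade path, assuming the 4.3 branch lives in the ports tree as net/isc-dhcp43-server (worth double-checking; the old package would need to be removed first):

# pkg delete isc-dhcp41-server
# cd /usr/ports/net/isc-dhcp43-server && make install clean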

 

Thanks for the help!

 

turn off the firewall =)

:)


One more thing: if a lot of traffic passes through the server while dhcpd is running on it, it's worth trying the sysctl net.bpf.optimize_writers. It's unlikely to improve dhcpd itself, but it will improve traffic flow by removing some of the locking associated with bpf.

And as the very last resort there is also the net.bpf.bufsize knob (4096 by default); that strikes me as a last-ditch option, but it's worth a try too.
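Both knobs together, as they might look in /etc/sysctl.conf (the bufsize value below is only an illustration of raising the 4096 default):

net.bpf.optimize_writers=1
net.bpf.bufsize=65536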

In principle dhcpd sends its messages more than once, so clients are unlikely to notice anything at all. I think the root of the problem is that dhcpd uses the BPF device to send its messages. BPF is not simple and is rarely used, so there is little practical experience with this part. A shaper, by the way, can quite easily affect BPF if you have one. But I assume you don't shape traffic from the server itself to the users?


netstat -s

netstat -Q

netstat -m

 

what's so hard about taking a look?)

Here you go:

# netstat -Q
Configuration:
Setting                        Current        Limit
Thread count                         1            1
Default queue limit                256        10240
Dispatch policy                 direct          n/a
Threads bound to CPUs         disabled          n/a

Protocols:
Name   Proto QLimit Policy Dispatch Flags
ip         1    256   flow  default   ---
igmp       2    256 source  default   ---
rtsock     3    256 source  default   ---
arp        7    256 source  default   ---
ether      9    256 source   direct   ---
ip6       10    256   flow  default   ---

Workstreams:
WSID CPU   Name     Len WMark   Disp'd  HDisp'd   QDrops   Queued  Handled
  0   0   ip         0    10 2359699717        0        0   345210 2360044877
  0   0   igmp       0     0        0        0        0        0        0
  0   0   rtsock     0    39        0        0        0 46642628 46642628
  0   0   arp        0     0     1450        0        0        0     1450
  0   0   ether      0     0  3060555        0        0        0  3060555
  0   0   ip6        0     0        0        0        0        0        0

# netstat -m
90824/13921/104745 mbufs in use (current/cache/total)
90675/7521/98196/524288 mbuf clusters in use (current/cache/total/max)
90675/7489 mbuf+clusters out of packet secondary zone in use (current/cache)
0/81/81/253759 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/75188 9k jumbo clusters in use (current/cache/total/max)
0/0/0/42293 16k jumbo clusters in use (current/cache/total/max)
204205K/18846K/223051K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile

# netstat -s
tcp:
       9220210 packets sent
               5238959 data packets (3862069377 bytes)
               2485 data packets (1745367 bytes) retransmitted
               460 data packets unnecessarily retransmitted
               1 resend initiated by MTU discovery
               3598386 ack-only packets (1760262 delayed)
               0 URG only packets
               0 window probe packets
               2050 window update packets
               378403 control packets
       10854788 packets received
               5688323 acks (for 3860716228 bytes)
               195722 duplicate acks
               24 acks for unsent data
               6192845 packets (4227419474 bytes) received in-sequence
               1236 completely duplicate packets (242366 bytes)
               9 old duplicate packets
               61 packets with some dup. data (27671 bytes duped)
               2162 out-of-order packets (2707633 bytes)
               409 packets (451503 bytes) of data after window
               0 window probes
               1663 window update packets
               1672 packets received after close
               69 discarded for bad checksums
               0 discarded for bad header offset fields
               0 discarded because packet too short
               0 discarded due to memory problems
       129263 connection requests
       127502 connection accepts
       6 bad connection attempts
       258364 listen queue overflows
       23388 ignored RSTs in the windows
       255545 connections established (including accepts)
       258482 connections closed (including 18886 drops)
               16237 connections updated cached RTT on close
               16428 connections updated cached RTT variance on close
               2236 connections updated cached ssthresh on close
       15 embryonic connections dropped
       5589360 segments updated rtt (of 5511614 attempts)
       9862 retransmit timeouts
               462 connections dropped by rexmit timeout
       0 persist timeouts
               0 connections dropped by persist timeout
       0 Connections (fin_wait_2) dropped because of timeout
       764 keepalive timeouts
               683 keepalive probes sent
               81 connections dropped by keepalive
       3317678 correct ACK header predictions
       4360246 correct data packet header predictions
       160848 syncache entries added
               1489 retransmitted
               4913 dupsyn
               0 dropped
               127502 completed
               0 bucket overflow
               0 cache overflow
               705 reset
               291 stale
               258364 aborted
               0 badack
               0 unreach
               0 zone failures
       160848 cookies sent
       226039 cookies received
       236 hostcache entries added
               0 bucket overflow
       56 SACK recovery episodes
       108 segment rexmits in SACK recovery episodes
       143708 byte rexmits in SACK recovery episodes
       2485 SACK options (SACK blocks) received
       809 SACK options (SACK blocks) sent
       0 SACK scoreboard overflow
       0 packets with ECN CE bit set
       0 packets with ECN ECT(0) bit set
       0 packets with ECN ECT(1) bit set
       0 successful ECN handshakes
       0 times ECN reduced the congestion window
       0 packets with valid tcp-md5 signature received
       0 packets with invalid tcp-md5 signature received
       0 packets with tcp-md5 signature mismatch
       0 packets with unexpected tcp-md5 signature received
       0 packets without expected tcp-md5 signature received
udp:
       965355 datagrams received
       0 with incomplete header
       0 with bad data length field
       36 with bad checksum
       550 with no checksum
       85795 dropped due to no socket
       201655 broadcast/multicast datagrams undelivered
       109 dropped due to full socket buffers
       0 not for hashed pcb
       677760 delivered
       537180 datagrams output
       0 times multicast source filter matched
sctp:
       2 input packets
               2 datagrams
               0 packets that had data
               0 input SACK chunks
               0 input DATA chunks
               0 duplicate DATA chunks
               0 input HB chunks
               0 HB-ACK chunks
               0 input ECNE chunks
               0 input AUTH chunks
               0 chunks missing AUTH
               0 invalid HMAC ids received
               0 invalid secret ids received
               0 auth failed
               0 fast path receives all one chunk
               0 fast path multi-part data
       2 output packets
               0 output SACKs
               0 output DATA chunks
               0 retransmitted DATA chunks
               0 fast retransmitted DATA chunks
               0 FR's that happened more than once to same chunk
               0 output HB chunks
               0 output ECNE chunks
               0 output AUTH chunks
               0 ip_output error counter
       Packet drop statistics:
               0 from middle box
               0 from end host
               0 with data
               0 non-data, non-endhost
               0 non-endhost, bandwidth rep only
               0 not enough for chunk header
               0 not enough data to confirm
               0 where process_chunk_drop said break
               0 failed to find TSN
               0 attempt reverse TSN lookup
               0 e-host confirms zero-rwnd
               0 midbox confirms no space
               0 data did not match TSN
               0 TSN's marked for Fast Retran
       Timeouts:
               0 iterator timers fired
               0 T3 data time outs
               0 window probe (T3) timers fired
               0 INIT timers fired
               0 sack timers fired
               0 shutdown timers fired
               0 heartbeat timers fired
               0 a cookie timeout fired
               0 an endpoint changed its cookiesecret
               0 PMTU timers fired
               0 shutdown ack timers fired
               0 shutdown guard timers fired
               0 stream reset timers fired
               0 early FR timers fired
               0 an asconf timer fired
               0 auto close timer fired
               0 asoc free timers expired
               0 inp free timers expired
       0 packet shorter than header
       0 checksum error
       2 no endpoint for port
       0 bad v-tag
       0 bad SID
       0 no memory
       0 number of multiple FR in a RTT window
       0 RFC813 allowed sending
       0 RFC813 does not allow sending
       0 times max burst prohibited sending
       0 look ahead tells us no memory in interface
       0 numbers of window probes sent
       0 times an output error to clamp down on next user send
       0 times sctp_senderrors were caused from a user
       0 number of in data drops due to chunk limit reached
       0 number of in data drops due to rwnd limit reached
       0 times a ECN reduced the cwnd
       0 used express lookup via vtag
       0 collision in express lookup
       0 times the sender ran dry of user data on primary
       0 same for above
       0 sacks the slow way
       0 window update only sacks sent
       0 sends with sinfo_flags !=0
       0 unordered sends
       0 sends with EOF flag set
       0 sends with ABORT flag set
       0 times protocol drain called
       0 times we did a protocol drain
       0 times recv was called with peek
       0 cached chunks used
       0 cached stream oq's used
       0 unread messages abandonded by close
       0 send burst avoidance, already max burst inflight to net
       0 send cwnd full avoidance, already max burst inflight to net
       0 number of map array over-runs via fwd-tsn's
ip:
       21741936762 total packets received
       1090 bad header checksums
       0 with size smaller than minimum
       38 with data size < data length
       0 with ip length > max ip packet size
       0 with header length < data size
       0 with data length < header length
       0 with bad options
       0 with incorrect version number
       451 fragments received
       0 fragments dropped (dup or out of space)
       451 fragments dropped after timeout
       0 packets reassembled ok
       12149095 packets for this host
       22641 packets for unknown/unsupported protocol
       10937287602 packets forwarded (8579422172 packets fast forwarded)
       10453281 packets not forwardable
       0 packets received for unknown multicast group
       0 redirects sent
       23952740 packets sent from this host
       3 packets sent with fabricated ip header
       8416199 output packets dropped due to no bufs, etc.
       10412786 output packets discarded due to no route
       0 output datagrams fragmented
       0 fragments created
       0 datagrams that can't be fragmented
       0 tunneling packets that can't find gif
       0 datagrams with bad address in header
icmp:
       13662825 calls to icmp_error
       1771 errors not generated in response to an icmp message
       Output histogram:
               echo reply: 230477
               destination unreachable: 13511547
               time exceeded: 148701
       0 messages with bad code fields
       0 messages less than the minimum length
       17 messages with bad checksum
       1 message with bad length
       189 multicast echo requests ignored
       0 multicast timestamp requests ignored
       Input histogram:
               echo reply: 989
               destination unreachable: 21427
               routing redirect: 569
               echo: 305655
               router solicitation: 72
               time exceeded: 265
       230477 message responses generated
       4 invalid return addresses
       11208 no return routes
       ICMP address mask responses are disabled
igmp:
       22347 messages received
       0 messages received with too few bytes
       0 messages received with wrong TTL
       0 messages received with bad checksum
       22347 V1/V2 membership queries received
       0 V3 membership queries received
       0 membership queries received with invalid field(s)
       2668 general queries received
       19679 group queries received
       0 group-source queries received
       0 group-source queries dropped
       0 membership reports received
       0 membership reports received with invalid field(s)
       0 membership reports received for groups to which we belong
       0 V3 reports received without Router Alert
       0 membership reports sent
arp:
       5218366 ARP requests sent
       3082655 ARP replies sent
       3913034 ARP requests received
       98998 ARP replies received
       4012032 ARP packets received
       10398285 total packets dropped due to no ARP entry
       1108376 ARP entrys timed out
       71 Duplicate IPs seen

 

it's worth trying the sysctl net.bpf.optimize_writers

Set it to 1.


In dmesg.boot I only found warnings like these:

module_register_init: MOD_LOAD (vesa, 0xffffffff80b135c0, 0) error 19

acpi0: reservation of 0, a0000 (3) failed
acpi0: reservation of 100000, bff00000 (3) failed

pci0: <base peripheral, interrupt controller> at device 16.0 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 16.1 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 17.0 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 17.1 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.0 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.1 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.2 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.3 (no driver attached)

Dhcpd listens on VLANs whose parent interface is ix0:

ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.15> port 0xe880-0xe89f mem 0xf8e80000-0xf8efffff,0xf8e7c000-0xf8e7ffff irq 24 at device 0.0 on pci12
ix0: Using MSIX interrupts with 6 vectors
ix0: Ethernet address: 90:e2:ba:7c:1c:40
ix0: PCI Express Bus: Speed 5.0GT/s Width x8

Before moving to this server, the same configuration (dhcpd on ix0 VLANs) ran on another server with FreeBSD 9.2 and isc-dhcp41-server-4.1.e_7,2, and there were no such problems.


A shaper, by the way, can quite easily affect BPF if you have one. But I assume you don't shape traffic from the server itself to the users?

With that in mind, I added an ipfw rule at the top of the ruleset:

allow udp from any to any dst-port 67,68 via vlan*

Let's see what happens.
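For illustration, the same rule added from the shell with a low rule number so it lands ahead of any shaper/NAT rules (100 is arbitrary, and the wildcard needs quoting so the shell doesn't expand it):

# ipfw add 100 allow udp from any to any dst-port 67,68 via 'vlan*'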

 

UPD.

Half an hour later, the messages

Dec 26 11:39:32 border dhcpd: send_packet: No buffer space available
Dec 26 11:39:32 border dhcpd: dhcp.c:3222: Failed to send 300 byte long packet over fallback interface.

are no longer appearing.

Only these remain:

kernel: sonewconn: pcb 0xfffff80042493188: Listen queue overflow: 16 already in queue awaiting acceptance (486 occurrences)

netstat -naA | grep fffff80042
Active Internet connections (including servers)
Tcpcb            Proto Recv-Q Send-Q Local Address      Foreign Address    (state)
fffff80042558c00 tcp6       0      0 *.179              *.*                LISTEN

Quagga is listening on port 179 over tcp6; we don't use tcp6, so why do these messages appear?
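A couple of ways to see what is actually hitting that listener, using base-system tools (the patterns are just examples):

# sockstat -6l -p 179
# netstat -an -f inet6 | grep '\.179'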


109 dropped due to full socket buffers

 

In loader.conf:

kern.ipc.maxpipekva="33554432" # Pipe KVA limit

 

In sysctl.conf:

kern.ipc.soacceptqueue=4096 # (somaxconn) Maximum listen socket pending connection accept queue size

kern.ipc.maxsockets=262144 # Maximum number of sockets available

kern.ipc.maxsockbuf=33554432 # Do not use larger sockbufs on 8.0+

kern.ipc.nmbjumbop=262144 # Maximum number of mbuf page size jumbo clusters allowed. pagesize(4k/8k)

kern.ipc.nmbclusters=262144 # Maximum number of mbuf clusters allowed // netstat -m

kern.ipc.nmbjumbo9=262144 # Maximum number of mbuf 9k jumbo clusters allowed

kern.ipc.nmbjumbo16=262144 # Maximum number of mbuf 16k jumbo clusters allowed

 

net.inet.udp.recvspace=4194304 # Maximum space for incoming UDP datagrams

net.inet.raw.maxdgram=4194304 # Maximum outgoing raw IP datagram size

net.inet.raw.recvspace=4194304 # Maximum space for incoming raw IP datagrams

 

If the dhcp daemon itself has socket tuning knobs, set 2 MB for sending and the same for receiving.


Half an hour later, the messages

 

Dec 26 11:39:32 border dhcpd: send_packet: No buffer space available

Dec 26 11:39:32 border dhcpd: dhcp.c:3222: Failed to send 300 byte long packet over fallback interface.

 

are no longer appearing.

Like I said: turn off the firewall. When "No buffer space available" pops up, the firewall is the first thing to check.
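A quick way to test that theory without dismantling the ruleset, assuming ipfw (set it back to 1 to re-enable):

# sysctl net.inet.ip.fw.enable=0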


In loader.conf:

kern.ipc.maxpipekva="33554432" # Pipe KVA limit

 

In sysctl.conf:

kern.ipc.soacceptqueue=4096 # (somaxconn) Maximum listen socket pending connection accept queue size

kern.ipc.maxsockets=262144 # Maximum number of sockets available

kern.ipc.maxsockbuf=33554432 # Do not use larger sockbufs on 8.0+

kern.ipc.nmbjumbop=262144 # Maximum number of mbuf page size jumbo clusters allowed. pagesize(4k/8k)

kern.ipc.nmbclusters=262144 # Maximum number of mbuf clusters allowed // netstat -m

kern.ipc.nmbjumbo9=262144 # Maximum number of mbuf 9k jumbo clusters allowed

kern.ipc.nmbjumbo16=262144 # Maximum number of mbuf 16k jumbo clusters allowed

 

net.inet.udp.recvspace=4194304 # Maximum space for incoming UDP datagrams

net.inet.raw.maxdgram=4194304 # Maximum outgoing raw IP datagram size

net.inet.raw.recvspace=4194304 # Maximum space for incoming raw IP datagrams

Some of my values are already larger than the ones you listed; please tell me, should I still change them to match your recommendations?

# sysctl kern.ipc.maxpipekva
kern.ipc.maxpipekva: 133779456

# sysctl kern.ipc.soacceptqueue
kern.ipc.soacceptqueue: 65535

root@border:/etc # sysctl kern.ipc.maxsockets
kern.ipc.maxsockets: 261290

root@border:/etc # sysctl kern.ipc.maxsockets=262144
kern.ipc.maxsockets: 261290
sysctl: kern.ipc.maxsockets=262144: Invalid argument

root@border:/etc # sysctl kern.ipc.maxsockbuf
kern.ipc.maxsockbuf: 83886080

root@border:/etc # sysctl kern.ipc.nmbjumbop
kern.ipc.nmbjumbop: 253773
root@border:/etc # sysctl kern.ipc.nmbjumbop=262144
kern.ipc.nmbjumbop: 253773 -> 262144

root@border:/etc # sysctl kern.ipc.nmbclusters
kern.ipc.nmbclusters: 524288

root@border:/etc # sysctl kern.ipc.nmbjumbo9
kern.ipc.nmbjumbo9: 225576
root@border:/etc # sysctl kern.ipc.nmbjumbo9=262144
kern.ipc.nmbjumbo9: 225576 -> 786432

root@border:/etc # sysctl kern.ipc.nmbjumbo16
kern.ipc.nmbjumbo16: 169180
root@border:/etc # sysctl kern.ipc.nmbjumbo16=262144
kern.ipc.nmbjumbo16: 169180 -> 1048576

root@border:/etc # sysctl net.inet.udp.recvspace
net.inet.udp.recvspace: 42080
root@border:/etc # sysctl net.inet.udp.recvspace=4194304
net.inet.udp.recvspace: 42080 -> 4194304

root@border:/etc # sysctl net.inet.raw.maxdgram
net.inet.raw.maxdgram: 16384
root@border:/etc # sysctl net.inet.raw.maxdgram=4194304
net.inet.raw.maxdgram: 16384 -> 4194304

root@border:/etc # sysctl net.inet.raw.recvspace
net.inet.raw.recvspace: 16384
root@border:/etc # sysctl net.inet.raw.recvspace=4194304
net.inet.raw.recvspace: 16384 -> 4194304

 

I also noticed the comment

# Do not use larger sockbufs on 8.0+

while I have it set to 83886080; should that also be changed to something smaller (I took the value from dadv's article)?

Now there are no drops in udp due to full socket buffers:

udp:
       80382 datagrams received
       0 with incomplete header
       0 with bad data length field
       0 with bad checksum
       19 with no checksum
       6430 dropped due to no socket
       10368 broadcast/multicast datagrams undelivered
       0 dropped due to full socket buffers
       0 not for hashed pcb
       63584 delivered
       14207 datagrams output
       0 times multicast source filter matched

, but there are some in the ip section:

ip:
       908213429 total packets received
       70 bad header checksums
       0 with size smaller than minimum
       0 with data size < data length
       0 with ip length > max ip packet size
       0 with header length < data size
       0 with data length < header length
       0 with bad options
       0 with incorrect version number
       0 fragments received
       0 fragments dropped (dup or out of space)
       0 fragments dropped after timeout
       0 packets reassembled ok
       251902 packets for this host
       1634 packets for unknown/unsupported protocol
       440094959 packets forwarded (347895558 packets fast forwarded)
       1000280 packets not forwardable
       0 packets received for unknown multicast group
       0 redirects sent
       1377334 packets sent from this host
       0 packets sent with fabricated ip header
  ---> 418558 output packets dropped due to no bufs, etc.
       997780 output packets discarded due to no route
       0 output datagrams fragmented
       0 fragments created
       0 datagrams that can't be fragmented
       0 tunneling packets that can't find gif
       0 datagrams with bad address in header

Subjectively, the messages

dhcpd: send_packet: No buffer space available
dhcpd: dhcp.c:1305: Failed to send 300 byte long packet over fallback interface.

are appearing much less often.

# ngctl list | wc -l
      2


The network controller is an Intel E10G42BTDA Ethernet Converged Network Adapter X520-DA2, PCI-E, 10Gb dual port.

 

From dmesg.boot:

ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.15> port 0xe880-0xe89f mem 0xf8e80000-0xf8efffff,0xf8e7c000-0xf8e7ffff irq 24 at device 0.0 on pci12
ix0: Using MSIX interrupts with 6 vectors
ix0: Ethernet address: 90:e2:ba:7c:1c:40
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.15> port 0xec00-0xec1f mem 0xf8f80000-0xf8ffffff,0xf8f7c000-0xf8f7ffff irq 34 at device 0.1 on pci12
ix1: Using MSIX interrupts with 6 vectors
ix1: Ethernet address: 90:e2:ba:7c:1c:41
ix1: PCI Express Bus: Speed 5.0GT/s Width x8

The driver is the stock FreeBSD 2.5.15; I haven't installed the latest 2.5.21 from Intel.

