kinder Posted November 28, 2012 (edited)

Hi all. I'm asking for help with a problem. There is a network with three servers in it: two IBM 3250 M3 and one HP DL160, running FreeBSD 7.4 with mpd5 and quagga. Both IBMs have Intel PRO dual-port NICs; the DL160 uses its onboard NICs. Each server is connected with two legs to a Catalyst 3750E: one NIC faces the world, the other carries the local VLANs.

Recently the interfaces on the IBMs started dying. More precisely, the links stay up, but packets stop passing. There is nothing at all in the server logs. On the Catalyst:

Nov 28 20:48:21: %SW_MATM-4-MACFLAP_NOTIF: Host 70ca.9b0b.a810 in vlan 328 is flapping between port Gi1/0/5 and port Te1/0/1
Nov 28 20:48:21: %SW_MATM-4-MACFLAP_NOTIF: Host 70ca.9b0b.a810 in vlan 324 is flapping between port Gi1/0/5 and port Te1/0/1
Nov 28 20:48:29: %SW_MATM-4-MACFLAP_NOTIF: Host 70ca.9b0b.a810 in vlan 328 is flapping between port Gi1/0/5 and port Te1/0/1
Nov 28 20:48:38: %SW_MATM-4-MACFLAP_NOTIF: Host 0021.5e68.2105 in vlan 324 is flapping between port Gi1/0/5 and port Te1/0/1
Nov 28 20:48:39: %SW_MATM-4-MACFLAP_NOTIF: Host 0021.5e68.2105 in vlan 319 is flapping between port Gi1/0/5 and port Te1/0/1

Te1/0/1 faces the network; Gi1/0/5 goes to the BRAS interface with the local VLANs. The problem goes away after shutting the interface down and bringing it back up.

I'd appreciate help with these interface failures on the IBMs. The DL has no problems at all, and I found nothing useful on Google. I'd be glad for any link or advice.

Edited November 28, 2012 by kinder
kinder Posted November 28, 2012 Author

Some more information.

vmstat -z
ITEM                    SIZE     LIMIT     USED     FREE  REQUESTS  FAILURES
UMA Kegs:                216,        0,      88,      14,        89,        0
UMA Zones:               280,        0,      88,       3,        89,        0
UMA Slabs:               128,        0,   32279,     433,    296538,        0
UMA RCntSlabs:           128,        0,    3571,      25,      3571,        0
UMA Hash:                256,        0,       4,      11,         8,        0
16 Bucket:               152,        0,     139,      11,       139,        0
32 Bucket:               280,        0,     148,       6,       148,        0
64 Bucket:               536,        0,     169,       6,       193,      216
128 Bucket:             1048,        0,    3318,       0,     84776,      805
VM OBJECT:               216,        0,   59783,    3343, 231608506,        0
MAP:                     256,        0,       7,      23,         7,        0
KMAP ENTRY:              112,   276441,      74,     982,  33724233,        0
MAP ENTRY:               112,        0,    2518,    2201, 447727472,        0
DP fakepg:               120,        0,       0,       0,         0,        0
SG fakepg:               120,        0,       0,       0,         0,        0
mt_zone:                1032,        0,     261,      48,       261,        0
16:                       16,        0,    7152,    1584, 557393965,        0
32:                       32,        0,   18007,    3607, 46496369412,      0
64:                       64,        0,   22737,    2855, 446719773,        0
128:                     128,        0,  150515,   23804, 159320764,        0
256:                     256,        0,   29652,    5163,  47496934,        0
512:                     512,        0,   13454,    2478,  27140155,        0
1024:                   1024,        0,    5079,    1541, 117211830,        0
2048:                   2048,        0,    2533,     801,    319183,        0
4096:                   4096,        0,    9877,    2031,  35448653,        0
Files:                   128,        0,     240,    1645, 413660359,        0
TURNSTILE:               152,        0,    1945,      95,      1945,        0
umtx pi:                  96,        0,       0,       0,         0,        0
PROC:                   1160,        0,      87,     189,  14269872,        0
THREAD:                  920,        0,     291,    1653,  20691095,        0
UPCALL:                   88,        0,       0,       0,         0,        0
SLEEPQUEUE:               64,        0,    1945,     183,      1945,        0
VMSPACE:                 432,        0,      30,     177,  14269827,        0
cpuset:                   72,        0,       2,      98,         2,        0
audit_record:            992,        0,       0,       0,         0,        0
mbuf_packet:             256,        0,      61,    1603, 244829707,        0
mbuf:                    256,        0,    3049,    4377, 212213265295,     0
mbuf_cluster:           2048,    25600,    4698,    2236, 83659790875,      0
mbuf_jumbo_pagesize:    4096,    12800,       0,     104,     73555,        0
mbuf_jumbo_9k:          9216,     6400,       0,       0,         0,        0
mbuf_jumbo_16k:        16384,     3200,       0,       0,         0,        0
mbuf_ext_refcnt:           4,        0,       0,       0,         0,        0
ACL UMA zone:            388,        0,       0,       0,         0,        0
g_bio:                   216,        0,       0,   14580,   2137383,        0
ata_request:             312,        0,       0,      24,        31,        0
ata_composite:           352,        0,       0,       0,         0,        0
VNODE:                   504,        0,  108733,    3659,   4031313,        0
VNODEPOLL:               128,        0,       1,      57,         1,        0
S VFS Cache:             104,        0,  124750,    3338,   4060428,        0
L VFS Cache:             327,        0,    1163,    4609,    120921,        0
NAMEI:                  1024,        0,       0,     112,  91552282,        0
DIRHASH:                1024,        0,    1716,      60,      1742,        0
NFSMOUNT:                656,        0,       0,       0,         0,        0
NFSNODE:                 704,        0,       0,       0,         0,        0
pipe:                    744,        0,       7,      98,  11573033,        0
ksiginfo:                112,        0,     228,     828,       228,        0
itimer:                  360,        0,       0,      20,         1,        0
KNOTE:                   120,        0,       0,     372,    147570,        0
socket:                  720,    25600,     164,    1636, 339378919,        0
ipq:                      56,      819,       0,     189,        23,        0
udp_inpcb:               288,    25610,      24,    1627, 335764411,        0
udpcb:                    16,    25704,      24,    2160, 335764411,        0
inpcb:                   288,    25610,      13,     169,     14605,        0
tcpcb:                   728,    25600,      13,     112,     14605,        0
tcptw:                    88,     5124,       0,     252,       179,        0
syncache:                128,    15370,       0,     174,     14589,        0
hostcache:               136,    15372,       5,     135,       130,        0
tcpreass:                 40,     1680,       0,     420,      7706,        0
sackhole:                 32,        0,       0,     202,        16,        0
sctp_ep:                1232,    25602,       0,       0,         0,        0
sctp_asoc:              2208,    40000,       0,       0,         0,        0
sctp_laddr:               48,    80064,       0,     144,         1,        0
sctp_raddr:              584,    80003,       0,       0,         0,        0
sctp_chunk:              144,   400010,       0,       0,         0,        0
sctp_readq:              104,   400032,       0,       0,         0,        0
sctp_stream_msg_out:      96,   400026,       0,       0,         0,        0
sctp_asconf:              40,   400008,       0,       0,         0,        0
sctp_asconf_ack:          48,   400032,       0,       0,         0,        0
ripcb:                   288,    25610,       2,      63,       364,        0
unpcb:                   248,    25605,      19,     206,   3599320,        0
rtentry:                 248,        0,    6819,     891,    867260,        0
SWAPMETA:                288,   116519,       0,       0,         0,        0
Mountpoints:             896,        0,       5,       7,         5,        0
FFS inode:               184,        0,  108691,    3659,   4031150,        0
FFS1 dinode:             128,        0,       0,       0,         0,        0
FFS2 dinode:             256,        0,  108691,    3779,   4031150,        0
IPFW dynamic rule:       120,        0,       0,       0,         0,        0
NetGraph items:           72,     4140,      77,    1453, 9488941436,       0
NetGraph data items:      72,      540,       1,     539, 96411301260,  80079
NetFlow cache:            80,   262170,   43865,   18685, 842937354,        0

ifconfig | grep vlan
vlan342: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        vlan: 342 parent interface: em1
vlan343: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        vlan: 343 parent interface: em1
vlan344: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        vlan: 344 parent interface: em1
vlan345: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        vlan: 345 parent interface: em1
vlan346:
flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        vlan: 346 parent interface: em1
vlan347: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        vlan: 347 parent interface: em1

Traffic at peak hour:

bras-02# netstat -w1 -h
            input        (Total)           output
   packets  errs      bytes    packets  errs      bytes colls
      396K     0       322M       372K     0       249M     0
      473K     0       334M       491K     0       408M     0
      472K     0       324M       490K     0       400M     0
      453K     0       314M       472K     0       388M     0
      457K     0       315M       476K     0       391M     0
      464K     0       327M       482K     0       399M     0
      459K     0       320M       478K     0       396M     0
      477K     0       334M       494K     0       407M     0
      477K     0       335M       495K     0       410M     0
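The `-h` figures above are rounded; run without `-h` and the raw per-second byte counters can be turned into Mbit/s with a one-liner. A sketch, assuming the seven-column `netstat -w1` layout shown (input packets/errs/bytes, output packets/errs/bytes, colls):

```shell
# Convert the per-second "bytes" columns of `netstat -w1` (run WITHOUT -h,
# so the counters are plain integers) into Mbit/s. $3 is input bytes,
# $6 is output bytes; the first two lines are headers and are skipped.
netstat -w1 | awk 'NR > 2 {
    printf "in: %.0f Mbit/s  out: %.0f Mbit/s\n", $3 * 8 / 1e6, $6 * 8 / 1e6
}'
```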
Ivan_83 Posted November 28, 2012

NetGraph data items:      72,      540,       1,     539, 96411301260,  80079

net.graph.maxalloc="65535"  # Maximum number of non-data queue items to allocate / limit the damage of a leak
net.graph.maxdata="65535"   # Maximum number of data queue items to allocate / limit the damage of a DoS
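The FAILURES column Ivan_83 is pointing at can be watched with a small filter instead of eyeballing the whole `vmstat -z` dump. A sketch, assuming the FreeBSD 7.x column layout shown above (ITEM: SIZE, LIMIT, USED, FREE, REQUESTS, FAILURES):

```shell
# Print the FAILURES counter for the NetGraph zones from `vmstat -z`.
# With comma as the field separator, $6 is the FAILURES column and the
# zone name is everything before the first colon.
vmstat -z | awk -F, '/NetGraph/ {
    gsub(/ /, "", $6)           # strip the padding around FAILURES
    split($0, name, ":")        # zone name before the colon
    print name[1] " failures: " $6
}'
```

A nonzero, growing value here means the netgraph data-item zone is hitting its limit and dropping items, which matches the symptom of a BRAS that stops passing packets while the link stays up.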
kinder Posted December 3, 2012 Author

Two days and all is well so far! Still keeping an eye on it. I also did additional tuning following these articles:
http://dadv.livejournal.com/138951.html
http://dadv.livejournal.com/139170.html
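For reference, Ivan_83's netgraph limits are boot-time tunables and belong in /boot/loader.conf. The em(4) ring sizes and mbuf limits below are the kind of knobs the dadv articles discuss, but the values here are illustrative assumptions, not the exact ones from those posts:

```shell
# /boot/loader.conf -- illustrative sketch, tune for your own load
net.graph.maxalloc="65535"    # netgraph non-data queue items (per Ivan_83's post)
net.graph.maxdata="65535"     # netgraph data queue items
hw.em.rxd="4096"              # em(4) receive descriptor ring (assumed value)
hw.em.txd="4096"              # em(4) transmit descriptor ring (assumed value)
kern.ipc.nmbclusters="131072" # raise the 25600 mbuf_cluster limit seen in vmstat -z (assumed value)
```

After changing these, a reboot is needed; `sysctl net.graph.maxdata` afterwards confirms the new limits took effect.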
roysbike Posted December 3, 2012

> Two days and all is well so far! Still keeping an eye on it. I also did additional tuning following these articles:
> http://dadv.livejournal.com/138951.html
> http://dadv.livejournal.com/139170.html

What NICs do you have, Intel 82576? Can you show `top -SHPI` at peak?
kinder Posted December 3, 2012 Author Posted December 3, 2012 39Y6126 Intel PRO/1000 PT Dual Port Server Adapter(82571) Чуть позже статистику скину Вставить ник Quote
kinder Posted December 3, 2012 Author Posted December 3, 2012 last pid: 81596; load averages: 0.58, 0.54, 0.51 up 4+00:14:07 20:46:41 102 processes: 9 running, 77 sleeping, 16 waiting CPU 0: 16.2% user, 0.0% nice, 5.3% system, 5.3% interrupt, 73.3% idle CPU 1: 0.4% user, 0.0% nice, 1.1% system, 84.2% interrupt, 14.3% idle CPU 2: 4.9% user, 0.0% nice, 0.4% system, 65.4% interrupt, 29.3% idle CPU 3: 2.3% user, 0.0% nice, 0.0% system, 58.6% interrupt, 39.1% idle Mem: 142M Active, 1198M Inact, 1184M Wired, 144K Cache, 827M Buf, 5343M Free Swap: 8196M Total, 8196M Free PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND 44 root -64 - 0K 16K CPU1 1 73.9H 82.67% irq18: atapci1 14 root 171 ki31 0K 16K RUN 0 70.4H 71.58% idle: cpu0 38 root -68 - 0K 16K CPU2 2 37.3H 65.97% irq259: em1 33 root -68 - 0K 16K CPU3 3 35.5H 61.96% irq256: em0 11 root 171 ki31 0K 16K RUN 3 53.4H 37.99% idle: cpu3 12 root 171 ki31 0K 16K RUN 2 54.0H 31.69% idle: cpu2 4906 root 70 0 24872K 11976K select 3 901:11 22.36% snmpd 13 root 171 ki31 0K 16K RUN 1 20.4H 15.38% idle: cpu1 4843 root 48 0 332M 139M select 0 417:40 7.18% mpd5 4843 root 48 0 332M 139M select 0 0:00 6.69% mpd5 3402 root 48 0 13212K 7136K select 0 31:40 3.86% zebra 40 root -68 - 0K 16K WAIT 1 151:46 3.47% irq260: em1 4843 root 48 0 332M 139M select 3 0:00 2.59% mpd5 35 root -68 - 0K 16K WAIT 0 81:54 2.10% irq257: em0 16 root -32 - 0K 16K WAIT 0 337:29 1.56% swi4: clock sio Вставить ник Quote
roysbike Posted December 3, 2012 Posted December 3, 2012 last pid: 81596; load averages: 0.58, 0.54, 0.51 up 4+00:14:07 20:46:41 102 processes: 9 running, 77 sleeping, 16 waiting CPU 0: 16.2% user, 0.0% nice, 5.3% system, 5.3% interrupt, 73.3% idle CPU 1: 0.4% user, 0.0% nice, 1.1% system, 84.2% interrupt, 14.3% idle CPU 2: 4.9% user, 0.0% nice, 0.4% system, 65.4% interrupt, 29.3% idle CPU 3: 2.3% user, 0.0% nice, 0.0% system, 58.6% interrupt, 39.1% idle Mem: 142M Active, 1198M Inact, 1184M Wired, 144K Cache, 827M Buf, 5343M Free Swap: 8196M Total, 8196M Free PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND 44 root -64 - 0K 16K CPU1 1 73.9H 82.67% irq18: atapci1 14 root 171 ki31 0K 16K RUN 0 70.4H 71.58% idle: cpu0 38 root -68 - 0K 16K CPU2 2 37.3H 65.97% irq259: em1 33 root -68 - 0K 16K CPU3 3 35.5H 61.96% irq256: em0 11 root 171 ki31 0K 16K RUN 3 53.4H 37.99% idle: cpu3 12 root 171 ki31 0K 16K RUN 2 54.0H 31.69% idle: cpu2 4906 root 70 0 24872K 11976K select 3 901:11 22.36% snmpd 13 root 171 ki31 0K 16K RUN 1 20.4H 15.38% idle: cpu1 4843 root 48 0 332M 139M select 0 417:40 7.18% mpd5 4843 root 48 0 332M 139M select 0 0:00 6.69% mpd5 3402 root 48 0 13212K 7136K select 0 31:40 3.86% zebra 40 root -68 - 0K 16K WAIT 1 151:46 3.47% irq260: em1 4843 root 48 0 332M 139M select 3 0:00 2.59% mpd5 35 root -68 - 0K 16K WAIT 0 81:54 2.10% irq257: em0 16 root -32 - 0K 16K WAIT 0 337:29 1.56% swi4: clock sio не плохо. Видимо проц хороший. У вас em0 uplink?а на em1 вланы и pppoe слушает? Вставить ник Quote
kinder Posted December 4, 2012 Author Posted December 4, 2012 > У вас em0 uplink?а на em1 вланы и pppoe слушает? Так и есть. Железо бралось заведомо для возможности использовать 10G. Но как то не сложилось! Freebsd не захотела дружить с сетевой 10G. Скоро начну танцы. Вставить ник Quote
roysbike Posted December 4, 2012 Posted December 4, 2012 (edited) > У вас em0 uplink?а на em1 вланы и pppoe слушает? Так и есть. Железо бралось заведомо для возможности использовать 10G. Но как то не сложилось! Freebsd не захотела дружить с сетевой 10G. Скоро начну танцы. я заказал карту 10g fiber. на чипе 82599. Но для бордера , за бордером 5 PPPoE cерверов Edited December 4, 2012 by roysbike Вставить ник Quote