yaadoo
Posts: 57
All posts by yaadoo


  1. BDCOM S3740F

    Yes, thanks. I'll give it a test.
  2. Hi all. Could anyone share a firmware image (without bugs) and a bootloader for this box? It currently runs:

      BDCOM(tm) S3740F Software, Version 2.1.1B Build 31067
      Copyright by Shanghai Baud Data Communication CO. LTD.
      Compiled: 2015-10-21 13:17:9 by SYS_31067, Image text-base: 0x80008000
      ROM: System Bootstrap, Version 0.4.1, Serial num:20014051523

    This build is full of bugs.
  3. Good afternoon, colleagues! What is the latest firmware version that can go on the X650? And could someone share it?
  4. Does anyone have home-grown scripts to automate ONU registration and binding? I don't want to add them by hand, let alone train a clueless helpdesk how to do it. The setup is a VLAN per user plus one shared multicast VLAN; the ONUs act as dumb bridges.
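For what it's worth, this kind of registration is usually scripted straight over the management CLI. A minimal sketch using `expect` — note that the prompt strings, the credentials, and the `epon bind-onu` command name here are all assumptions for illustration; they must be adjusted to the OLT's actual firmware syntax:

```
#!/bin/sh
# Bind a newly discovered ONU by MAC on a given EPON port.
# Usage: ./bind-onu.sh <olt-ip> <epon-port> <onu-mac>
# NOTE: prompts, login, and command names below are illustrative assumptions.
OLT=$1; PORT=$2; MAC=$3
expect <<EOF
spawn telnet $OLT
expect "Username:" { send "admin\r" }
expect "Password:" { send "secret\r" }
expect ">"  { send "enable\r" }
expect "#"  { send "config\r" }
expect "#"  { send "interface epon $PORT\r" }
expect "#"  { send "epon bind-onu mac $MAC\r" }
expect "#"  { send "exit\r" }
EOF
```

With something like this in place, the helpdesk only needs to paste a MAC address instead of walking the CLI by hand.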
  5. Good afternoon! Has anyone run into this?

      Slot-1 under.s.ki.x450-stack.2 # sh log
      02/28/2016 11:08:23.79 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:08:09.11 <Erro:Kern.IPv4Mc.Error> Slot-2: Unable to Add IPmc sender entry s,G,v=c0a86301,e0000002,37 IPMC 54 flags 8012 unit 0, Entry exists
      02/28/2016 11:07:51.78 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:07:40.77 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:07:23.79 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:06:54.81 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:06:42.81 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:06:41.12 <Erro:Kern.IPv4Mc.Error> Slot-2: Unable to Add IPmc sender entry s,G,v=c0a86301,e0000002,37 IPMC 54 flags 8012 unit 0, Entry exists
      02/28/2016 11:06:23.78 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:05:57.79 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:05:44.78 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:05:23.80 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:05:13.64 <Erro:Kern.IPv4Mc.Error> Slot-2: Unable to Add IPmc sender entry s,G,v=c0a86301,e0000002,37 IPMC 54 flags 8012 unit 0, Entry exists
      02/28/2016 11:05:00.77 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:04:46.77 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:04:23.79 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:04:03.81 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:03:48.79 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:03:46.80 <Erro:Kern.IPv4Mc.Error> Slot-2: Unable to Add IPmc sender entry s,G,v=c0a86301,e0000002,37 IPMC 54 flags 8012 unit 0, Entry exists
      02/28/2016 11:03:23.77 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:03:06.78 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:02:50.80 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:10 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)
      02/28/2016 11:02:23.81 <Erro:EDP.ProcPDUFail> Slot-1: pdu received on port 1:4 could not be processed, invalid SNAP (llcSnapType = 300, edp_snap_id = bb)

    Three Summit X450s in a stack:

      Slot-1 under.s.ki.x450-stack.2 # sh version
      Slot-1 : 800188-00-02 0650G-00138 Rev 2.0 BootROM: 1.0.2.0 IMG: 15.3.2.11
      Slot-2 : 800188-00-06 0818G-80544 Rev 6.0 BootROM: 1.0.3.1 IMG: 15.3.2.11
      Slot-3 : 800188-00-02 0704G-00261 Rev 2.0 BootROM: 1.0.3.1 IMG: 15.3.2.11
      Slot-4 :
      Slot-5 :
      Slot-6 :
      Slot-7 :
      Slot-8 :
      XGM2-2xf-1 : 800151-00-06 0829G-00206 Rev 6.0
      XGM2-2xf-2 : 800151-00-06 0840G-00236 Rev 6.0
      Image : ExtremeXOS version 15.3.2.11 v1532b11 by release-manager on Tue Jun 18 15:59:06 EDT 2013
      BootROM : 1.0.2.0
      Diagnostics : 5.10
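The EDP.ProcPDUFail spam, at least, can usually be stopped by disabling EDP on the ports where the malformed SNAP PDUs arrive, assuming EDP is not needed on those (presumably customer-facing) ports. A hedged ExtremeXOS fragment using the port numbers from the log:

```
# Check where EDP is still enabled, then turn it off on the offending ports
show edp
disable edp ports 1:4, 1:10
```

This silences the symptom only; the IPv4Mc "Entry exists" messages are a separate issue worth checking against the IGMP snooping configuration.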
  6. Errors on both NICs; traffic is about 750 Mbit/s in and out each way, and the error counters are currently climbing:

      root@hgw2:/usr/home/hunt # netstat -w1 -h
                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
            360K   142     0       351M       358K     0       351M     0
            339K   227     0       326M       336K     0       326M     0
            340K   268     0       335M       337K     0       334M     0
            333K   145     0       324M       330K     0       323M     0
            343K   227     0       333M       341K     0       333M     0
            343K   212     0       334M       340K     0       334M     0
            354K   129     0       345M       351K     0       344M     0
            354K   451     0       344M       352K     0       343M     0
            369K   267     0       354M       366K     0       353M     0
            355K    62     0       347M       353K     0       347M     0
            382K   214     0       364M       379K     0       363M     0
            393K   710     0       376M       391K     0       375M     0
            360K   116     0       344M       358K     0       344M     0
            352K    98     0       347M       349K     0       346M     0
            313K    37     0       312M       311K     0       311M     0
            334K    55     0       331M       332K     0       330M     0
            322K   113     0       316M       319K     0       316M     0
            364K    48     0       349M       362K     0       348M     0
            367K   684     0       354M       364K     0       353M     0
            330K   205     0       321M       328K     0       320M     0
            364K   146     0       351M       362K     0       350M     0
  7. Not sure about the modules. In my production setup one of these aggregates a district at about 3 Gbit/s of traffic, and on failover all the modules came up without problems. The modules are a mixed bag of Chinese no-names. :)
  8. So we put it all together and everything looked fine. The new server has a LACP bundle over two igb interfaces, and errors keep coming:

         packets  errs idrops      bytes    packets  errs      bytes colls
            305K   138     0       297M       302K     0       297M     0
            269K    73     0       260M       266K     0       259M     0
            247K    30     0       240M       244K     0       240M     0
            271K    30     0       264M       268K     0       263M     0
            289K   105     0       285M       287K     0       284M     0
            290K   215     0       282M       287K     0       281M     0
            278K    40     0       270M       276K     0       270M     0
            281K     0     0       274M       279K     0       273M     0
            279K     9     0       272M       276K     0       272M     0
            281K    27     0       275M       279K     0       275M     0
            298K   155     0       282M       296K     0       282M     0
            307K    70     0       297M       305K     0       297M     0
            299K    81     0       291M       296K     0       290M     0
            300K    61     0       290M       298K     0       289M     0
            280K     5     0       272M       277K     0       271M     0
            292K    25     0       284M       289K     0       284M     0
            288K    73     0       283M       285K     0       282M     0
            279K     4     0       270M       276K     0       269M     0
            280K    21     0       268M       277K     0       267M     0
            310K    30     0       306M       308K     0       306M     0
            322K   211     0       320M       320K     0       319M     0

    We tuned it a bit and there are fewer errors now, but they still come. How do we fix this properly?
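Input errors on igb at these rates are often descriptor-ring exhaustion or flow-control pauses. A hedged starting point — the `hw.igb.*` tunables also appear later in this thread's loader.conf; the exact values here are assumptions to benchmark, not a definitive recipe:

```
# /boot/loader.conf -- larger RX/TX rings, one queue per core (0 = auto)
hw.igb.rxd=4096
hw.igb.txd=4096
hw.igb.num_queues=0

# /etc/sysctl.conf -- disable flow control, raise the per-poll packet budget
dev.igb.0.fc=0
dev.igb.1.fc=0
dev.igb.0.rx_processing_limit=4096
dev.igb.1.rx_processing_limit=4096
```

After a reboot, watch whether the `errs` column in `netstat -w1 -h` still grows under the same load before tuning further.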
  9. We decided to build that server and bring up PBR on the Cisco. I'll report back with the results.
  10. And in general there are plans to move to CG-NAT in the future. Any recommendations for a not-too-expensive vendor, or, failing that, software that does CG-NAT?
  11. There is a similar server with two X5680 CPUs that currently sits idle; I think it will NAT all of this without problems, with some tuning of course.
  12. All of this is terminated by this fellow :) Model number: WS-C3560G-24TS-S. Maybe it simply can't push 4-4.5 Gbit/s of traffic through itself:

      Port-channel4 is up, line protocol is up (connected)
        Hardware is EtherChannel, address is 0022.9010.9515 (bia 0022.9010.9515)
        Description: LACP_1_GlTk
        MTU 1508 bytes, BW 4000000 Kbit/sec, DLY 10 usec,
           reliability 255/255, txload 12/255, rxload 146/255
        Encapsulation ARPA, loopback not set
        Keepalive set (10 sec)
        Full-duplex, 1000Mb/s, link type is auto, media type is unknown
        input flow-control is off, output flow-control is unsupported
        Members in this channel: Gi0/19 Gi0/20 Gi0/21 Gi0/22
        ARP type: ARPA, ARP Timeout 04:00:00
        Last input 00:00:00, output 00:00:00, output hang never
        Last clearing of "show interface" counters never
        Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 17925
        Queueing strategy: fifo
        Output queue: 0/40 (size/max)
        5 minute input rate 2299149000 bits/sec, 204605 packets/sec
        5 minute output rate 192762000 bits/sec, 100887 packets/sec
           643222245181 packets input, 874359542186017 bytes, 0 no buffer
           Received 2033737650 broadcasts (2031102220 multicasts)
           0 runts, 8 giants, 0 throttles
           44 input errors, 25 CRC, 0 frame, 0 overrun, 0 ignored
           0 watchdog, 2031102220 multicast, 0 pause input
           0 input packets with dribble condition detected
           320059416252 packets output, 110685041899648 bytes, 0 underruns
           0 output errors, 0 collisions, 1 interface resets
           0 unknown protocol drops
           0 babbles, 0 late collision, 0 deferred
           0 lost carrier, 0 no carrier, 0 pause output
           0 output buffer failures, 0 output buffers swapped out

    And the network has a bit over 2k subscribers.
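Output drops on a 3560G usually come from its small per-ASIC egress buffers rather than raw throughput limits. A hedged set of IOS commands to confirm where the drops land and to try the commonly cited workaround (whether it helps depends on whether QoS is actually needed on this box):

```
! See where the drops are actually counted, per member port
show platform port-asic stats drop gigabitEthernet 0/19

! If QoS is enabled, buffers are carved into small per-queue slices;
! disabling it returns them to a shared pool (a frequently used 3560 workaround)
configure terminal
 no mls qos
end
```

If `mls qos` must stay on, tuning the queue-set buffer/threshold allocation is the alternative, but that is a larger exercise.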
  13. Here it is more or less live; watching it, I see:

                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
            595K     0     0       572M       587K     0       564M     0
            494K     0     0       501M       482K     0       491M     0
            400K     0     0       399M       389K     0       392M     0
            345K     0   10K       332M       344K     0       324M     0
            256K     0   506       245M       246K     0       239M     0
            161K     0   11K       164M       158K     0       155M     0
            139K     0   11K       107M       135K     0       100M     0
            108K     0   13K        92M       107K     0        85M     0
             89K     0   11K        70M        85K     0        63M     0
             71K     0  5.3K        49M        63K     0        44M     0
             46K     0  8.6K        30M        43K     0        26M     0
             42K     0   13K        26M        41K     0        20M     0
             46K     0  8.9K        26M        41K     0        21M     0
            494K     0   11K       268M       494K     0       259M     0
            727K     0     0       677M       720K     0       666M     0
            803K     0     0       750M       797K     0       738M     0
            802K     0     0       775M       807K     0       763M     0
            799K     0     0       757M       783K     0       747M     0
            746K     0     0       719M       739K     0       709M     0
            721K     0     0       701M       713K     0       690M     0
            694K     0     0       673M       688K     0       663M     0
                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
            705K     0     0       685M       700K     0       675M     0
            649K     0     0       647M       642K     0       638M     0
            634K     0     0       619M       627K     0       610M     0
            589K     0     0       591M       582K     0       583M     0
            595K     0     0       575M       588K     0       567M     0
            581K     0     0       566M       574K     0       558M     0
            606K     0     0       578M       599K     0       568M     0
            593K     0     0       591M       587K     0       583M     0
            592K     0     0       584M       585K     0       576M     0
            533K     0     0       527M       523K     0       519M     0
            437K     0     0       436M       425K     0       427M     0
            380K     0     0       365M       370K     0       359M     0
            282K     0  7.7K       273M       278K     0       266M     0
            203K     0   10K       190M       198K     0       182M     0
            155K     0  4.9K       132M       148K     0       127M     0
            114K     0   12K        92M       113K     0        87M     0
             86K     0   11K        70M        83K     0        63M     0
             83K     0   13K        58M        78K     0        50M     0
             61K     0  8.3K        42M        55K     0        36M     0
             49K     0   11K        33M        45K     0        27M     0
             50K     0   12K        30M        48K     0        24M     0
                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
             48K     0  8.8K        26M        44K     0        22M     0
            628K     0  4.8K       393M       625K     0       383M     0
            765K     0     0       717M       767K     0       705M     0
            831K     0     0       763M       815K     0       749M     0
            807K     0     0       762M       801K     0       752M     0
            830K     0     0       767M       825K     0       755M     0
            766K     0     0       747M       761K     0       737M     0
            726K     0     0       701M       719K     0       692M     0
            657K     0     0       643M       651K     0       634M     0
            653K     0     0       627M       646K     0       617M     0
            631K     0     0       606M       624K     0       597M     0
            612K     0     0       595M       605K     0       586M     0
            586K     0     0       572M       580K     0       564M     0
            573K     0     0       556M       566K     0       547M     0
            608K     0     0       573M       602K     0       564M     0
            602K     0     0       587M       595K     0       578M     0
            575K     0     0       568M       567K     0       560M     0
            537K     0     0       526M       528K     0       518M     0
            471K     0     0       471M       462K     0       464M     0
            407K     0     0       401M       396K     0       393M     0
            359K     0     0       330M       351K     0       325M     0
                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
            212K     0   18K       235M       213K     0       225M     0
            116K     0   14K       101M       111K     0        91M     0
            121K     0   10K       100M       117K     0        92M     0
            104K     0  9.8K        78M        99K     0        71M     0
             92K     0  6.9K        64M        86K     0        58M     0
             65K     0  8.5K        47M        62K     0        43M     0
             54K     0  9.4K        37M        50K     0        32M     0
             51K     0  9.6K        34M        47K     0        28M     0
             55K     0  8.8K        31M        51K     0        26M     0
            544K     0  7.5K       343M       541K     0       333M     0
            752K     0  1.2K       690M       753K     0       678M     0
            778K     0     0       730M       763K     0       719M     0
            753K     0     0       733M       748K     0       721M     0
            762K     0     0       715M       756K     0       704M     0
            749K     0     0       716M       743K     0       706M     0
            685K     0     0       667M       679K     0       658M     0
            670K     0     0       638M       663K     0       628M     0
            639K     0     0       625M       632K     0       616M     0
            585K     0     0       576M       578K     0       568M     0
            588K     0     0       578M       581K     0       570M     0
            584K     0     0       566M       578K     0       558M     0

    I fixed netgraph during the day; no more failures seen there since.
  14. Hyper-threading is disabled there, and there are two CPUs.
  15. Here is the pf config itself:

      root@hgw:/usr/home/hunt # cat /etc/pf.conf
      set optimization aggressive
      set limit { states 1800000, frags 300000, src-nodes 60000, table-entries 400000}
      set timeout { adaptive.start 0, adaptive.end 0, tcp.established 60, tcp.first 20 }
      #Clients
      nat pass on vlan701 from {192.168.0.0/16} to any -> x.x.x.x/28 source-hash
      #Services
      nat pass on vlan701 from 172.16.0.0/29 to any -> x.x.x.x/28 source-hash

    Plus there are a couple of rdr rules for our own needs. I have already bumped net.graph.maxalloc and net.graph.maxdata and rebooted the whole thing.
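With hard limits like these it is worth watching how close pf actually gets to them under load, rather than only looking when things break. A hedged monitoring fragment using standard `pfctl` queries:

```
# Current state count and counters vs. the configured hard limits
pfctl -si | grep -E 'current entries|state-limit|memory'

# Per-pool memory limits as pf sees them
pfctl -sm

# State churn: inserts/removals per second hint at timeout pressure
pfctl -si | grep -E 'inserts|removals'
```

Running this from cron every few minutes and logging the output gives a baseline to compare against when the glitches start.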
  16. Here you go:

      INFO:
      Status: Enabled for 48 days 06:39:24             Debug: Urgent

      State Table                          Total             Rate
        current entries                    78208
        searches                   1312848594345       314743.9/s
        inserts                       3539612904          848.6/s
        removals                      3539534619          848.6/s
      Counters
        match                       749032978311       179574.1/s
        bad-offset                             0            0.0/s
        fragment                            4205            0.0/s
        short                             370768            0.1/s
        normalize                              0            0.0/s
        memory                                 0            0.0/s
        bad-timestamp                          0            0.0/s
        congestion                             0            0.0/s
        ip-option                             36            0.0/s
        proto-cksum                            0            0.0/s
        state-mismatch                   9449725            2.3/s
        state-insert                      585617            0.1/s
        state-limit                            0            0.0/s
        src-limit                              0            0.0/s
        synproxy                               0            0.0/s

      TIMEOUTS:
        tcp.first              20s
        tcp.opening             5s
        tcp.established        60s
        tcp.closing            60s
        tcp.finwait            30s
        tcp.closed             30s
        tcp.tsdiff             10s
        udp.first              60s
        udp.single             30s
        udp.multiple           60s
        icmp.first             20s
        icmp.error             10s
        other.first            60s
        other.single           30s
        other.multiple         60s
        frag                   30s
        interval               10s
        adaptive.start          0 states
        adaptive.end            0 states
        src.track               0s

      LIMITS:
        states         hard limit  1800000
        src-nodes      hard limit    60000
        frags          hard limit   300000
        table-entries  hard limit   400000

      OS FINGERPRINTS:
        710 fingerprints loaded

    At the time of the glitches, current entries was over 94,000. And here is vmstat:

      root@hgw:/usr/home/hunt # vmstat -z
      ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
      UMA Kegs:               384,      0,     117,       3,     117,   0,   0
      UMA Zones:             1664,      0,     117,       1,     117,   0,   0
      UMA Slabs:               80,      0,   10609,      41,   11528,   0,   0
      UMA RCntSlabs:           88,      0,   29188,      17,   29188,   0,   0
      UMA Hash:               256,      0,       3,      12,      11,   0,   0
      4 Bucket:                32,      0,      86,    2789, 3212137,   0,   0
      6 Bucket:                48,      0,      30,    2792, 3534073,   0,   0
      8 Bucket:                64,      0,     124,    4464, 3192975,  11,   0
      12 Bucket:               96,      0,     914,    1382, 1968647,   0,   0
      16 Bucket:              128,      0,     408,    1390,  492069,   0,   0
      32 Bucket:              256,      0,     628,    1757, 2371871,  51,   0
      64 Bucket:              512,      0,     375,    1945, 7642449,  53,   0
      128 Bucket:            1024,      0,    1503,     385, 7284883,   0,   0
      256 Bucket:            2048,      0,    1384,     528,1535864569,  12,   0
      vmem btag:               56,      0,   39266,    2553,  542824, 295,   0
      VM OBJECT:              256,      0,   79380,    1080,146713516,   0,   0
      RADIX NODE:             144,      0,   49765,    2156,200663492,   0,   0
      MAP:                    240,      0,       3,      61,       3,   0,   0
      KMAP ENTRY:             128,      0,      10,     393,      10,   0,   0
      MAP ENTRY:              128,      0,    1761,    2269,430931399,   0,   0
      VMSPACE:                448,      0,      39,     312, 3477938,   0,   0
      fakepg:                 104,      0,       0,     760,      51,   0,   0
      mt_zone:               4112,      0,     379,       0,     379,   0,   0
      16:                      16,      0,    2905,    3119,170391020,   0,   0
      32:                      32,      0,    4917,    2708,12065904,   0,   0
      64:                      64,      0,   13337,   25909,357896786,   0,   0
      128:                    128,      0,    6158,    9993,47859601,   0,   0
      256:                    256,      0,   64946,   10579,64027573521,   0,   0
      512:                    512,      0,    6758,    6874,11327357,   0,   0
      1024:                  1024,      0,     121,     155, 3879376,   0,   0
      2048:                  2048,      0,     109,      91, 9667091,   0,   0
      4096:                  4096,      0,    1466,      61,34806985,   0,   0
      8192:                  8192,      0,      34,      10,    1956,   0,   0
      16384:                16384,      0,      13,      17,    3359,   0,   0
      32768:                32768,      0,       8,      15,   76851,   0,   0
      65536:                65536,      0,      64,      16,  282101,   0,   0
      64 pcpu:                  8,      0,    2295,    1545,    2745,   0,   0
      SLEEPQUEUE:              80,      0,     316,     831,     316,   0,   0
      Files:                   80,      0,     111,    1539,167937209,   0,   0
      TURNSTILE:              136,      0,     316,     304,     316,   0,   0
      rl_entry:                40,      0,     145,    1955,     145,   0,   0
      umtx pi:                 96,      0,       0,       0,       0,   0,   0
      MAC labels:              40,      0,       0,       0,       0,   0,   0
      PROC:                  1216,      0,      56,     106, 3477939,   0,   0
      THREAD:                1168,      0,     297,      18,     297,   0,   0
      cpuset:                  72,      0,     154,     231,     181,   0,   0
      audit_record:          1248,      0,       0,       0,       0,   0,   0
      mbuf_packet:            256,3254145,   40963,   16721,459698855809,   0,   0
      mbuf:                   256,3254145,      15,   19731,603870129599,   0,   0
      mbuf_cluster:          2048, 508466,   57684,      24,   57684,   0,   0
      mbuf_jumbo_page:       4096, 254229,       0,     334,13712993,   0,   0
      mbuf_jumbo_9k:         9216,  75327,       0,       0,       0,   0,   0
      mbuf_jumbo_16k:       16384,  42371,       0,       0,       0,   0,   0
      mbuf_ext_refcnt:          4,      0,       0,       0,       0,   0,   0
      NetGraph items:          72,   4123,       0,    1395,33093257,   0,   0
      NetGraph data items:     72,  16399,       0,   16399,63164498848,2432419062,   0
      g_bio:                  248,      0,       0,    3904,14406579,   0,   0
      ttyinq:                 160,      0,     300,     625,    2430,   0,   0
      ttyoutq:                256,      0,     157,     638,    1261,   0,   0
      DMAR_MAP_ENTRY:         120,      0,       0,       0,       0,   0,   0
      ata_request:            336,      0,       0,       0,       0,   0,   0
      vtnet_tx_hdr:            24,      0,       0,       0,       0,   0,   0
      FPU_save_area:          576,      0,       0,       0,       0,   0,   0
      VNODE:                  472,      0,  131686,     298,24893238,   0,   0
      VNODEPOLL:              112,      0,       0,       0,       0,   0,   0
      BUF TRIE:               144,      0,     558,   52389, 3646793,   0,   0
      S VFS Cache:            108,      0,  110458,   28422,22443394,   0,   0
      STS VFS Cache:          148,      0,       0,       0,       0,   0,   0
      L VFS Cache:            328,      0,   29969,     571, 3122171,   0,   0
      LTS VFS Cache:          368,      0,       0,       0,       0,   0,   0
      NAMEI:                 1024,      0,       0,     156,383658710,   0,   0
      DIRHASH:               1024,      0,    2273,      67,    2293,   0,   0
      NCLNODE:                528,      0,       0,       0,       0,   0,   0
      Mountpoints:            816,      0,       3,      27,       3,   0,   0
      pipe:                   744,      0,       6,     154, 4629217,   0,   0
      procdesc:               128,      0,       0,       0,       0,   0,   0
      ksiginfo:               112,      0,     153,    1247,  223716,   0,   0
      itimer:                 352,      0,       0,     297,   35074,   0,   0
      pf mtags:                40,      0,       0,    2300,   87431,   0,   0
      pf states:              296,1800006,   23209,  113122,3559373116,   0,   0
      pf state keys:           88,      0,  631912,  202478,7129082576,   0,   0
      pf source nodes:        136,  60001,       0,       0,       0,   0,   0
      pf table entries:       160, 400000,       0,       0,       0,   0,   0
      pf table counters:       64,      0,       0,       0,       0,   0,   0
      pf frags:                80,      0,       0,       0,       0,   0,   0
      pf frag entries:         32, 300000,       0,       0,       0,   0,   0
      pf state scrubs:         40,      0,       0,       0,       0,   0,   0
      KNOTE:                  128,      0,       2,    1517,25123987,   0,   0
      socket:                 696, 261330,      28,    2782,31210912,   0,   0
      ipq:                     56,  15904,       0,    2059,  162115,   0,   0
      udp_inpcb:              392, 261330,       3,     317,14004247,   0,   0
      udpcb:                   16, 261542,       3,    2758,14004247,   0,   0
      tcp_inpcb:              392, 261330,      11,    3449, 4737194,   0,   0
      tcpcb:                 1024, 261332,      11,    1461, 4737194,   0,   0
      tcptw:                   88,  27810,       0,    3150,  808361,   0,   0
      syncache:               160,  15375,       0,    5325, 6983566,   0,   0
      hostcache:              136,  15370,       3,     693,    1995,   0,   0
      tcpreass:                40,  31800,       0,    2300,   48117,   0,   0
      sackhole:                32,      0,       0,    2125,     179,   0,   0
      sctp_ep:               1408, 261330,       0,       0,       0,   0,   0
      sctp_asoc:             2416,  40000,       0,       0,       0,   0,   0
      sctp_laddr:              48,  80012,       0,    1577,      20,   0,   0
      sctp_raddr:             728,  80000,       0,       0,       0,   0,   0
      sctp_chunk:             136, 400026,       0,       0,       0,   0,   0
      sctp_readq:             104, 400026,       0,       0,       0,   0,   0
      sctp_stream_msg_out:    104, 400026,       0,       0,       0,   0,   0
      sctp_asconf:             40, 400000,       0,       0,       0,   0,   0
      sctp_asconf_ack:         48, 400060,       0,       0,       0,   0,   0
      udplite_inpcb:          392, 261330,       0,       0,       0,   0,   0
      ripcb:                  392, 261330,       0,     290,   28962,   0,   0
      unpcb:                  240, 261344,      14,     658,11822294,   0,   0
      rtentry:                200,      0,      54,     446,      54,   0,   0
      IPFW dynamic rule:      120,   8217,       0,    1386,   35961,   0,   0
      selfd:                   56,      0,     189,    2864,121317091,   0,   0
      SWAPMETA:               288,1016925,       0,       0,       0,   0,   0
      FFS inode:              168,      0,  131656,     594,24892819,   0,   0
      FFS1 dinode:            128,      0,       0,       0,       0,   0,   0
      FFS2 dinode:            256,      0,  131656,     599,24892819,   0,   0
      IpAcct:                2032,      0,      51,    8299,30490711,   0,   0
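The one zone that is actually failing in that vmstat output is `NetGraph data items` (FAIL ≈ 2.4 billion against a LIMIT of 16399), which matches the netgraph glitches described in this thread. A hedged check-and-raise fragment; the specific values are illustrative, not a proven sizing:

```
# See the current netgraph queue-item limits
sysctl net.graph.maxdata net.graph.maxalloc

# /boot/loader.conf -- raise them well above the observed limit (assumed values),
# then reboot; these are boot-time tunables
net.graph.maxdata=65536
net.graph.maxalloc=65536
```

After a reboot, re-check `vmstat -z | grep NetGraph` under load: the FAIL column should stay at 0 if the new limit is adequate.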
  17. Traffic held at 3.1 Gbit/s for an extended time, then suddenly dropped to 2.6, and users are catching all kinds of glitches. The readings are the same as above, except for the pps, which went down.
  18. Here is the pps itself:

      root@hgw:/usr/home/hunt # netstat -w1 -h
                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
            741K     0     0       717M       733K     0       706M     0
            751K     0     0       739M       747K     0       729M     0
            713K     0     0       691M       705K     0       681M     0
            750K     0     0       712M       744K     0       701M     0
            759K     0     0       729M       751K     0       718M     0
            755K     0     0       734M       748K     0       723M     0
            731K     0     0       698M       725K     0       689M     0
            761K     0     0       742M       752K     0       731M     0
            773K     0     0       751M       766K     0       739M     0
            743K     0     0       727M       738K     0       718M     0
            731K     0     0       705M       724K     0       695M     0
            743K     0     0       721M       737K     0       710M     0
            736K     0     0       720M       729K     0       709M     0
            693K     0     0       683M       686K     0       674M     0
            741K     0     0       703M       734K     0       692M     0
            748K     0     0       727M       741K     0       716M     0
            728K     0     0       714M       721K     0       704M     0
            736K     0     0       714M       729K     0       704M     0
            743K     0     0       722M       736K     0       711M     0
            749K     0     0       732M       744K     0       721M     0
            764K     0     0       749M       755K     0       738M     0
                  input        (Total)           output
         packets  errs idrops      bytes    packets  errs      bytes colls
            738K     0     0       714M       731K     0       703M     0
            711K     0     0       703M       706K     0       694M     0
            727K     0     0       711M       721K     0       701M     0
            708K     0     0       696M       701K     0       686M     0

    The CPU:

      last pid: 46177;  load averages: 6.89, 6.84, 6.64    up 48+04:24:21  20:32:33
      197 processes: 17 running, 124 sleeping, 56 waiting
      CPU 0:  0.0% user,  0.0% nice, 15.2% system, 60.2% interrupt, 24.6% idle
      CPU 1:  0.0% user,  0.0% nice, 13.3% system, 64.1% interrupt, 22.7% idle
      CPU 2:  0.0% user,  0.0% nice, 10.5% system, 61.3% interrupt, 28.1% idle
      CPU 3:  0.0% user,  0.0% nice, 12.1% system, 63.3% interrupt, 24.6% idle
      CPU 4:  0.0% user,  0.0% nice, 10.9% system, 72.3% interrupt, 16.8% idle
      CPU 5:  0.0% user,  0.0% nice, 13.7% system, 60.2% interrupt, 26.2% idle
      CPU 6:  0.0% user,  0.0% nice, 17.2% system, 62.1% interrupt, 20.7% idle
      CPU 7:  0.0% user,  0.0% nice, 13.3% system, 64.8% interrupt, 21.9% idle
      Mem: 13M Active, 2310M Inact, 813M Wired, 1664K Cache, 837M Buf, 4807M Free
      Swap: 3852M Total, 3852M Free

        PID USERNAME PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
         12 root     -92    -     0K   992K CPU5   5 307.4H 64.16% intr{irq261: ix0:que }
         12 root     -92    -     0K   992K RUN    0 281.5H 63.18% intr{irq256: ix0:que }
         12 root     -92    -     0K   992K WAIT   4 309.6H 62.26% intr{irq260: ix0:que }
         12 root     -92    -     0K   992K CPU1   1 308.0H 61.67% intr{irq257: ix0:que }
         12 root     -92    -     0K   992K WAIT   7 306.5H 61.08% intr{irq263: ix0:que }
         12 root     -92    -     0K   992K CPU2   2 309.0H 60.25% intr{irq258: ix0:que }
         12 root     -92    -     0K   992K CPU6   6 309.0H 59.47% intr{irq262: ix0:que }
         12 root     -92    -     0K   992K CPU3   3 304.3H 58.89% intr{irq259: ix0:que }
         11 root     155 ki31    0K   128K RUN    2 832.8H 29.20% idle{idle: cpu2}
         11 root     155 ki31    0K   128K RUN    7 831.1H 28.47% idle{idle: cpu7}
         11 root     155 ki31    0K   128K RUN    3 837.1H 28.08% idle{idle: cpu3}
         11 root     155 ki31    0K   128K RUN    1 833.5H 27.49% idle{idle: cpu1}
         11 root     155 ki31    0K   128K CPU6   6 828.9H 27.10% idle{idle: cpu6}
         11 root     155 ki31    0K   128K RUN    0 790.2H 26.17% idle{idle: cpu0}
         11 root     155 ki31    0K   128K RUN    5 830.3H 25.00% idle{idle: cpu5}
         11 root     155 ki31    0K   128K RUN    4 828.1H 24.76% idle{idle: cpu4}
          0 root     -92    0     0K   528K -      7 439:41 21.39% kernel{ix0 que}
          0 root     -92    0     0K   528K CPU0   0 466:34 18.55% kernel{ix0 que}
          0 root     -92    0     0K   528K -      5 472:23 17.58% kernel{ix0 que}
          0 root     -92    0     0K   528K -      0 451:43 15.87% kernel{ix0 que}
          0 root     -92    0     0K   528K -      0 461:58  9.96% kernel{ix0 que}
          0 root     -92    0     0K   528K -      5 778:18  9.28% kernel{ix0 que}
          0 root     -92    0     0K   528K -      7 402:04  7.96% kernel{ix0 que}
          0 root     -92    0     0K   528K -      6 420:59  7.86% kernel{ix0 que}
        936 root      20  -15 70708K 32936K nanslp 4  24.4H  1.46% perl5.18.4
      75201 root      20    0 70612K 19680K select 6 349:45  0.29% snmpd
         15 root     -16    -     0K    16K -      3 651:50  0.10% rand_harvestq

    netstat:

      root@hgw:/usr/home/hunt # netstat -m
      41221/36215/77436 mbufs in use (current/cache/total)
      41134/16574/57708/508466 mbuf clusters in use (current/cache/total/max)
      41140/16550 mbuf+clusters out of packet secondary zone in use (current/cache)
      0/334/334/254229 4k (page size) jumbo clusters in use (current/cache/total/max)
      0/0/0/75327 9k jumbo clusters in use (current/cache/total/max)
      0/0/0/42371 16k jumbo clusters in use (current/cache/total/max)
      92573K/43537K/136111K bytes allocated to network (current/cache/total)
      0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
      0/0/0 requests for jumbo clusters denied (4k/9k/16k)
      0 requests for sfbufs denied
      0 requests for sfbufs delayed
      0 requests for I/O initiated by sendfile

    vmstat:

      root@hgw:/usr/home/hunt # vmstat -i
      interrupt                          total       rate
      irq1: atkbd0                          24          0
      irq23: uhci1 ehci1                   368          0
      cpu0:timer                    8225406809       1975
      irq256: ix0:que 0            27119765524       6514
      irq257: ix0:que 1            29242254657       7024
      irq258: ix0:que 2            29424106916       7067
      irq259: ix0:que 3            29597046939       7109
      irq260: ix0:que 4            29114673628       6993
      irq261: ix0:que 5            28987362717       6962
      irq262: ix0:que 6            29034368437       6974
      irq263: ix0:que 7            29229344295       7020
      irq264: ix0:link                  252713          0
      irq265: ix1:que 0                     13          0
      irq273: ix1:link                       4          0
      irq274: em0                      4162518          0
      irq275: em1                      4162521          0
      irq276: ahci0:ch0                4747120          1
      irq278: ahci0:ch2                    175          0
      cpu7:timer                    8564319087       2057
      cpu1:timer                    8231023432       1977
      cpu3:timer                    8671367719       2082
      cpu2:timer                    8679660440       2084
      cpu5:timer                    8361807288       2008
      cpu6:timer                    8566605093       2057
      cpu4:timer                    8373512043       2011
      Total                       299435950480      71925

    At the moment there is 3.3 Gbit of traffic.
  19. There are still leftovers from em and igb in loader.conf:

      root@hgw:/usr/home/hunt # cat /boot/loader.conf
      #geom_mirror_load=YES
      pf_load="YES"
      ng_ipacct="YES"
      ng_ipfw_load="YES"
      dummynet_load="YES"
      if_lagg_load="YES"
      if_vlan_load="YES"
      net.graph.maxdata=16384
      hw.em.rxd=4096
      hw.em.txd=4096
      hw.em.max_interrupt_rate=32000
      hw.igb.rxd=4096
      hw.igb.txd=4096
      hw.igb.lro=0
      hw.igb.fc_setting=0
      net.isr.defaultqlimit=4096
      net.link.ifqmaxlen=10240
      net.isr.maxthreads=8
      net.isr.numthreads=4
      net.isr.bindthreads=1
      allow_unsupported_sfp=1

    sysctl:

      root@hgw:/usr/home/hunt # cat /etc/sysctl.conf
      # $FreeBSD: src/etc/sysctl.conf,v 1.8.34.1.8.1 2012/03/03 06:15:13 kensmith Exp $
      #
      # This file is read when going to multi-user and its contents piped thru
      # ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
      #
      # Uncomment this to prevent users from seeing information about processes that
      # are being run under another UID.
      #security.bsd.see_other_uids=0
      ###network card###
      #dev.em.0.rx_kthreads=4
      dev.em.0.rx_int_delay=600
      dev.em.0.tx_int_delay=600
      dev.em.0.rx_abs_int_delay=4000
      dev.em.0.tx_abs_int_delay=4000
      #dev.em.0.rx_processing_limit=4096
      #dev.em.1.rx_kthreads=4
      dev.em.1.rx_int_delay=600
      dev.em.1.tx_int_delay=600
      dev.em.1.rx_abs_int_delay=4000
      dev.em.1.tx_abs_int_delay=4000
      #dev.em.1.rx_processing_limit=4096
      dev.igb.0.rx_processing_limit=1024
      dev.igb.1.rx_processing_limit=1024
      dev.igb.0.fc=0
      dev.igb.1.fc=0
      dev.em.0.fc=0
      dev.em.1.fc=0
      ####DUMMYNET####
      net.inet.ip.fw.dyn_buckets=16384
      net.inet.ip.dummynet.hash_size=4096
      net.inet.ip.fw.dyn_max=8192
      net.inet.ip.dummynet.io_fast=1
      net.inet.ip.dummynet.expire=0
      net.inet.ip.dummynet.pipe_slot_limit=900
      ###OTHER####
      net.inet.icmp.drop_redirect=1
      net.inet.ip.redirect=0
      net.inet.ip.fastforwarding=1
      net.route.netisr_maxqlen=4096
      net.isr.defaultqlimit=4096
      kern.ipc.nmbclusters=508466
      kern.ipc.maxsockbuf=83886080
      net.inet.ip.intr_queue_maxlen=10240
      net.inet.udp.blackhole=1
      net.inet.tcp.blackhole=2
      net.inet.icmp.icmplim=1000
      net.link.lagg.0.use_flowid=0
      net.link.lagg.default_use_flowid=0
      net.link.lagg.1.use_flowid=0

    I'll post everything else a bit later, when it starts glitching again.
  20. Good afternoon, colleagues! The FreeBSD version is:

      hgw 10.1-RELEASE FreeBSD 10.1-RELEASE

    The CPU:

      CPU: Intel(R) Xeon(R) CPU L5420 @ 2.50GHz (2500.05-MHz K8-class CPU)
      cpu0: <ACPI CPU> on acpi0
      cpu1: <ACPI CPU> on acpi0
      cpu2: <ACPI CPU> on acpi0
      cpu3: <ACPI CPU> on acpi0
      cpu4: <ACPI CPU> on acpi0
      cpu5: <ACPI CPU> on acpi0
      cpu6: <ACPI CPU> on acpi0
      cpu7: <ACPI CPU> on acpi0
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc0: <CPU Frequency Thermal Control> on cpu0
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc1: <CPU Frequency Thermal Control> on cpu1
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc2: <CPU Frequency Thermal Control> on cpu2
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc3: <CPU Frequency Thermal Control> on cpu3
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc4: <CPU Frequency Thermal Control> on cpu4
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc5: <CPU Frequency Thermal Control> on cpu5
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc6: <CPU Frequency Thermal Control> on cpu6
      est: CPU supports Enhanced Speedstep, but is not recognized.
      p4tcc7: <CPU Frequency Thermal Control> on cpu7
      SMP: AP CPU #7 Launched!
      SMP: AP CPU #1 Launched!
      SMP: AP CPU #3 Launched!
      SMP: AP CPU #2 Launched!
      SMP: AP CPU #5 Launched!
      SMP: AP CPU #6 Launched!
      SMP: AP CPU #4 Launched!

    And this much memory:

      root@hgw:/usr/home/hunt # dmesg | grep -i memory
      real memory  = 9126805504 (8704 MB)
      avail memory = 8276307968 (7892 MB)

    Help me tune this whole thing from scratch. It NATs around 2.5 Gbit of traffic at peak, then glitches appear, speeds drop, and so on. We searched the forums and tuned things, but without success. The NIC was recently replaced with a 10-Gbit Intel. At the moment the server has essentially no tuning at all. Thanks in advance!
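As a starting point for an ixgbe-based NAT box like this, the usual knobs are descriptor-ring sizes, netisr thread binding, mbuf pools, and fastforwarding. A hedged sketch — these tunable names exist in FreeBSD 10's ixgbe driver and network stack, but the values are assumptions to benchmark on this specific load, not a definitive recipe:

```
# /boot/loader.conf (assumed starting values)
hw.ix.rxd=4096                 # larger RX descriptor rings
hw.ix.txd=4096                 # larger TX descriptor rings
net.isr.maxthreads=8           # one netisr thread per core
net.isr.bindthreads=1          # pin them to CPUs

# /etc/sysctl.conf
net.inet.ip.fastforwarding=1   # bypass the slow path for forwarded packets
kern.ipc.nmbclusters=524288    # generous mbuf cluster pool for 10G
dev.ix.0.fc=0                  # disable flow control (name per the ixgbe sysctl tree)
```

Change one group at a time and compare `netstat -w1 -h` error/drop columns and `top -SHP` interrupt load before and after; on this 8-core L5420 the interrupt ceiling, not memory, is the likely bottleneck.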
  21. MSTP is in production here: the root is an Extreme x670, plus Eltex MES3124F and D-Link 3420. Convergence time for the whole thing is 1-2 seconds. No complaints so far.
  22. HP 5130

    A decent switch. QinQ / selective QinQ works great. LACP is also fine, no bugs. RSTP hasn't been tested; no need for it. It takes SFP modules of every kind. As an L2 box it will do. No complaints.
  23. What is the general price range for RDP hardware?
  24. Hi all. Tell me please, is 15.3 the maximum firmware for the x650, or can you load anything? I tried loading 15.7:

      x650-Under.3 # download image 10.10.10.10 summitX-15.7.1.4-patch1-1.xos vr "VR-Default" primary
      Do you want to install image after downloading? (y - yes, n - no, <cr> - cancel) Yes
      Downloading to Switch...............................................................................................
      Error: Failed to download image - tftp: write error
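A "tftp: write error" during an EXOS image download often means the target flash partition is full rather than a TFTP problem. A hedged set of checks (the commands exist on ExtremeXOS; whether space is actually the cause here is an assumption — 15.7 may simply not be supported on X650 hardware at all, which the release notes would confirm):

```
# Which partition is active, and what is on flash
show switch
ls

# Free space by deleting old core files or unused images, then retry,
# downloading to the partition that is NOT currently booted
rm <old-file>
download image 10.10.10.10 <image-file>.xos vr "VR-Default" secondary
```

Checking the image against the hardware compatibility list for the Summit X650 before retrying would save a download cycle.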