FreeBSD 9.1 + 82598EB 10-Gigabit

On a FreeBSD 9.1 system with an 82598EB 10-Gigabit card, a single port of the card carries roughly 1-1.5 Gbit/s of symmetric traffic.

 

 netstat -I ix1 -hdw1
           input          (ix1)           output
  packets  errs idrops      bytes    packets  errs      bytes colls drops
     134k     0     0       131M       198k     0       145M     0     0
     143k     3     0       140M       209k     0       151M     0     0
     142k     1     0       130M       209k     0       146M     0     0
     142k     2     0       141M       208k     0       150M     0     0
     134k     0     0       133M       197k     0       141M     0     0
     141k     3     0       145M       199k     0       144M     0     0
     143k     0     0       141M       206k     0       148M     0     0
     144k     2     0       139M       215k     0       156M     0     0
     141k     1     0       143M       207k     0       153M     0     0
     148k     2     0       146M       209k     0       147M     0     0
     146k     3     0       149M       206k     0       155M     0     0
     150k     3     0       154M       219k     0       163M     0     0

last pid: 46958;  load averages:  2.52,  2.75,  2.74                                                                        up 22+16:17:04  21:58:15
139 processes: 8 running, 94 sleeping, 37 waiting
CPU 0:  0.0% user,  0.0% nice, 25.2% system, 29.5% interrupt, 45.3% idle
CPU 1:  0.0% user,  0.0% nice,  7.5% system, 29.5% interrupt, 63.0% idle
CPU 2:  0.0% user,  0.0% nice,  3.1% system, 36.6% interrupt, 60.2% idle
CPU 3:  0.0% user,  0.0% nice,  3.1% system, 32.7% interrupt, 64.2% idle
Mem: 576M Active, 600M Inact, 660M Wired, 31M Cache, 208M Buf, 60M Free
Swap: 4639M Total, 160K Used, 4639M Free

 PID USERNAME PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  11 root     155 ki31     0K    64K RUN     1 395.3H 68.80% idle{idle: cpu1}
  11 root     155 ki31     0K    64K RUN     3 395.9H 67.19% idle{idle: cpu3}
  11 root     155 ki31     0K    64K CPU2    2 304.4H 66.36% idle{idle: cpu2}
  11 root     155 ki31     0K    64K CPU0    0 409.1H 43.80% idle{idle: cpu0}
   0 root     -92    0     0K   576K -       0 173.0H 20.90% kernel{em0 que}
  12 root     -92    -     0K   640K WAIT    3  41.3H 18.80% intr{irq285: ix1:que }
  12 root     -92    -     0K   640K CPU2    2  42.4H 18.07% intr{irq284: ix1:que }
  12 root     -92    -     0K   640K CPU0    0  40.9H 17.68% intr{irq282: ix1:que }
  12 root     -92    -     0K   640K WAIT    1  41.4H 17.38% intr{irq283: ix1:que }

Which options should I be tuning for the 10G NIC?
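For reference, a minimal starting-point sketch of the knobs most often cited for ix(4) forwarding boxes on FreeBSD 9 (the values here are illustrative assumptions, not measurements from this machine):

# /boot/loader.conf
kern.ipc.nmbclusters="262144"     # enough mbuf clusters for all RX/TX rings
hw.intr_storm_threshold="9000"    # keep busy ix IRQs from being throttled as a "storm"

# /etc/sysctl.conf
net.inet.ip.fastforwarding=1      # fast forwarding path (present in 9.x)
dev.ix.0.fc=0                     # 802.3x flow control, discussed further down
dev.ix.1.fc=0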

Bumping this thread: traffic has grown substantially and is now around 3.5 Gbit/s.

At peak hours the picture on the server looks like this:

last pid: 89068;  load averages:  9.09,  9.22, 10.06                                    up 29+12:18:01  23:52:01
146 processes: 16 running, 100 sleeping, 30 waiting
CPU:  1.1% user,  0.0% nice, 18.3% system, 73.7% interrupt,  6.9% idle
Mem: 960M Active, 94M Inact, 671M Wired, 72M Cache, 208M Buf, 130M Free
Swap: 4639M Total, 57M Used, 4582M Free, 1% Inuse

  PID USERNAME PRI NICE   SIZE    RES STATE   C   TIME    CPU COMMAND
   12 root     -92    -     0K   640K WAIT    3 206.1H 50.78% intr{irq280: ix0:que }
   12 root     -92    -     0K   640K RUN     0 208.4H 45.46% intr{irq277: ix0:que }
   12 root     -92    -     0K   640K RUN     2 205.9H 43.46% intr{irq279: ix0:que }
   12 root     -92    -     0K   640K CPU1    1 206.8H 43.26% intr{irq278: ix0:que }
   12 root     -92    -     0K   640K CPU2    2 115.1H 31.30% intr{irq284: ix1:que }
   12 root     -92    -     0K   640K CPU0    0 113.4H 29.79% intr{irq282: ix1:que }
   12 root     -92    -     0K   640K WAIT    3 114.6H 29.39% intr{irq285: ix1:que }
   12 root     -92    -     0K   640K RUN     1 117.1H 28.86% intr{irq283: ix1:que }
   11 root     155 ki31     0K    64K RUN     1 315.5H 12.79% idle{idle: cpu1}
   11 root     155 ki31     0K    64K RUN     2 318.1H 11.77% idle{idle: cpu2}
   11 root     155 ki31     0K    64K RUN     3 318.1H 11.77% idle{idle: cpu3}
   11 root     155 ki31     0K    64K RUN     0 325.8H 11.18% idle{idle: cpu0}
    0 root     -92    0     0K   576K -       0 731:53  5.47% kernel{ix1 que}
    0 root     -92    0     0K   576K -       3 755:41  5.08% kernel{ix1 que}
    0 root     -92    0     0K   576K -       3 745:56  5.08% kernel{ix1 que}
    0 root     -92    0     0K   576K -       0 741:56  4.88% kernel{ix1 que}
    0 root     -92    0     0K   576K -       0 892:42  4.39% kernel{ix0 que}
    0 root     -92    0     0K   576K -       2 858:15  4.39% kernel{ix0 que}
    0 root     -92    0     0K   576K -       3 890:13  4.20% kernel{ix0 que}
    0 root     -92    0     0K   576K -       1 101.3H  3.76% kernel{dummynet}
    0 root     -92    0     0K   576K -       3 891:06  3.56% kernel{ix0 que}

There is practically no free CPU left; it is all eaten by the 10G NIC's interrupts (the card is an 82598EB).

Is there anything more to squeeze out, or is it time to move to a platform with more CPU?

19 hours ago, Ivan_83 said:

4 cores, 2 GB of RAM and FreeBSD 9: bury it.

Adding memory is possible, but not a CPU upgrade; I'd have to replace the whole machine, and that can't be done quickly in the current situation.

So we're CPU-bound?!

20 hours ago, Ivan_83 said:

Cut down the ipfw rules; use sets.

ipfw eats about 12% at peak, on the shaper.
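A sketch of what "use sets" means in practice: keep customer prefixes in an ipfw table with the pipe number as the table value, so a single tablearg rule replaces a long per-customer ruleset (the addresses and pipe numbers below are hypothetical):

# populate the table: prefix -> pipe number
ipfw table 1 add 192.0.2.10/32 110
ipfw table 1 add 192.0.2.11/32 111
# one rule shapes everyone; tablearg takes the pipe number from the lookup
ipfw add 500 pipe tablearg ip from any to 'table(1)' out
ipfw add 510 pipe tablearg ip from 'table(1)' to any in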

Today I moved the NIC into a different x8 slot; let's see what the picture looks like.

 

UPD:

Managed to raise performance; the main thing was the NIC in the x8 slot, plus a bit of system tuning.

Drops still slip through periodically, and the CPU is maxed out:

            input        (Total)           output
   packets  errs idrops      bytes    packets  errs      bytes colls drops
      1.0M     0     0       824M       1.1M     0         1G     0     0
      1.0M     0     0       821M       1.1M     0         1G     0     0
      1.0M     0     0       839M       1.1M     0       998M     0     0
      1.0M     0     0       839M       1.1M     0         1G     0     0
      1.1M     0     0       843M       1.1M     0         1G     0     0
      1.0M     0     0       814M       1.1M     0       966M     0     0
      1.0M     0     0       831M       1.1M     0         1G     0     0
      1.0M     0     0       801M       1.1M     0       975M     0     0
      1.0M     0     0       818M       1.1M     0         1G     0     0
      1.0M     0     0       832M       1.1M     0         1G     0     0
      1.0M     0     0       831M       1.1M     0         1G     0     0
      1.0M     0     0       822M       1.1M     0       994M     0     0
last pid: 32093;  load averages: 11.16, 10.37,  9.75                                    up 0+16:01:12  22:33:26
151 processes: 17 running, 104 sleeping, 30 waiting
CPU 0:  1.2% user,  0.0% nice, 37.2% system, 57.0% interrupt,  4.7% idle
CPU 1:  3.5% user,  0.0% nice, 20.9% system, 74.4% interrupt,  1.2% idle
CPU 2:  0.0% user,  0.0% nice, 10.5% system, 88.4% interrupt,  1.2% idle
CPU 3:  2.3% user,  0.0% nice, 25.6% system, 70.9% interrupt,  1.2% idle
Mem: 973M Active, 268M Inact, 610M Wired, 208M Buf, 77M Free
Swap: 4639M Total, 4639M Free

  PID USERNAME PRI NICE   SIZE    RES STATE   C   TIME    CPU COMMAND
   12 root     -92    -     0K   640K WAIT    2 319:27 50.59% [intr{irq278: ix0:que }]
   12 root     -92    -     0K   640K RUN     1 311:12 49.46% [intr{irq277: ix0:que }]
   12 root     -92    -     0K   640K CPU0    0 329:37 49.17% [intr{irq276: ix0:que }]
   12 root     -92    -     0K   640K RUN     3 305:53 48.49% [intr{irq279: ix0:que }]
   12 root     -92    -     0K   640K CPU3    3 208:27 28.08% [intr{irq284: ix1:que }]
   12 root     -92    -     0K   640K CPU1    1 193:27 28.08% [intr{irq282: ix1:que }]
   12 root     -92    -     0K   640K RUN     0 194:33 27.69% [intr{irq281: ix1:que }]
   12 root     -92    -     0K   640K WAIT    2 199:33 26.66% [intr{irq283: ix1:que }]
    0 root     -92    0     0K   576K -       3 160:57 15.77% [kernel{dummynet}]
    0 root     -92    0     0K   576K -       2  25:12 11.38% [kernel{ix1 que}]
    0 root     -92    0     0K   576K -       2  24:32 11.28% [kernel{ix1 que}]
    0 root     -92    0     0K   576K -       2  23:38 10.50% [kernel{ix1 que}]
    0 root     -92    0     0K   576K -       3  23:34 10.06% [kernel{ix1 que}]
    0 root     -92    0     0K   576K -       2  14:55  6.59% [kernel{ix0 que}]
    0 root     -92    0     0K   576K -       3  13:11  6.05% [kernel{ix0 que}]
    0 root     -92    0     0K   576K RUN     2  13:54  5.76% [kernel{ix0 que}]
    0 root     -92    0     0K   576K -       0  13:47  5.66% [kernel{ix0 que}]
   11 root     155 ki31     0K    64K RUN     2 341:24  1.56% [idle{idle: cpu2}]
   11 root     155 ki31     0K    64K RUN     0 349:56  1.46% [idle{idle: cpu0}]
   11 root     155 ki31     0K    64K RUN     3 342:50  1.46% [idle{idle: cpu3}]
   11 root     155 ki31     0K    64K RUN     1 351:51  1.37% [idle{idle: cpu1}]
10567 bind      20    0   210M   178M uwait   2  12:06  1.27% /usr/sbin/named -4 -t /var/named -u bind{named}
10567 bind      20    0   210M   178M uwait   2  12:06  1.17% /usr/sbin/named -4 -t /var/named -u bind{named}
10567 bind      20    0   210M   178M uwait   2  12:07  1.07% /usr/sbin/named -4 -t /var/named -u bind{named}
10567 bind      20    0   210M   178M uwait   2  12:06  1.07% /usr/sbin/named -4 -t /var/named -u bind{named}
10567 bind      20    0   210M   178M RUN     2   6:22  0.10% /usr/sbin/named -4 -t /var/named -u bind{named}

How do I hard-pin the NIC interrupts to specific CPU cores?
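FreeBSD can do this with cpuset(1), which takes an IRQ number directly; a sketch using the ix0 queue IRQs visible in the top output above:

# list the NIC interrupts and their IRQ numbers
vmstat -i | grep ix
# pin one queue IRQ per core (276-279 are ix0's queues on this box)
cpuset -l 0 -x 276
cpuset -l 1 -x 277
cpuset -l 2 -x 278
cpuset -l 3 -x 279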

Today at peak hour I took the shaper out of the path by adding an ipfw allow all rule:

           input        (Total)           output
   packets  errs idrops      bytes    packets  errs      bytes colls drops
      1.1M     0     0       915M       1.2M     0       1.1G     0     0
      1.2M     0     0       968M       1.2M     0       1.1G     0     0
      1.2M     0     0       969M       1.2M     0       1.1G     0     0
      1.1M     0     0       954M       1.2M     0       1.1G     0     0
gwinet# top -aSCHIP
last pid: 27816;  load averages:  4.04,  5.66,  7.58                    up 1+14:35:06  21:07:20
152 processes: 7 running, 107 sleeping, 38 waiting
CPU 0:  1.2% user,  0.0% nice,  1.9% system, 55.6% interrupt, 41.4% idle
CPU 1:  1.2% user,  0.0% nice,  1.9% system, 47.5% interrupt, 49.4% idle
CPU 2:  0.6% user,  0.0% nice,  0.6% system, 58.6% interrupt, 40.1% idle
CPU 3:  1.9% user,  0.0% nice,  2.5% system, 54.3% interrupt, 41.4% idle
Mem: 1076M Active, 136M Inact, 627M Wired, 38M Cache, 208M Buf, 51M Free
Swap: 4639M Total, 540K Used, 4638M Free

  PID USERNAME PRI NICE   SIZE    RES STATE   C   TIME    CPU COMMAND
   11 root     155 ki31     0K    64K RUN     0 800:25 43.26% [idle{idle: cpu0}]
   11 root     155 ki31     0K    64K RUN     3 916:30 42.58% [idle{idle: cpu3}]
   11 root     155 ki31     0K    64K CPU1    1 925:49 42.29% [idle{idle: cpu1}]
   11 root     155 ki31     0K    64K RUN     2 923:00 40.67% [idle{idle: cpu2}]
   12 root     -92    -     0K   640K WAIT    3 737:25 36.96% [intr{irq279: ix0:que }]
   12 root     -92    -     0K   640K CPU2    2 747:36 36.47% [intr{irq278: ix0:que }]
   12 root     -92    -     0K   640K WAIT    0 761:18 36.08% [intr{irq276: ix0:que }]
   12 root     -92    -     0K   640K CPU1    1 746:54 33.98% [intr{irq277: ix0:que }]
   12 root     -92    -     0K   640K WAIT    0 471:36 19.97% [intr{irq281: ix1:que }]
   12 root     -92    -     0K   640K WAIT    2 465:23 19.58% [intr{irq283: ix1:que }]
   12 root     -92    -     0K   640K WAIT    1 460:42 19.48% [intr{irq282: ix1:que }]
   12 root     -92    -     0K   640K WAIT    3 479:39 18.99% [intr{irq284: ix1:que }]

Looks like I need to optimize my dummynet rules.
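Before rewriting the rules, a few dummynet/ipfw sysctls are usually worth checking; a sketch (the values are assumptions, not thread-confirmed settings):

sysctl net.inet.ip.dummynet.io_fast=1       # pass packets through idle pipes without queueing
sysctl net.inet.ip.fw.one_pass=1            # stop rule processing after the pipe instead of re-injecting
sysctl net.inet.ip.dummynet.hash_size=1024  # bigger flow hash for many concurrent flows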

root@gwinet:~ # netstat -i
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
ix0    1500 <Link#5>      00:1b:21:2e:04:f3 211037381017     0 14351723 129992517344     0     0
ix1    1500 <Link#6>      00:1b:21:2e:04:f2 128521366331     0 694125 209005384702     0     0

I'm seeing Idrop on the 10G ports; nothing seems to have been touched on the hardware side, and the switches show no errors on their end.
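To narrow down where the Idrops come from, the driver's per-device counters and the mbuf statistics are the first stop (the exact stat names under dev.ix vary by driver version, so the grep patterns are only a guess):

sysctl dev.ix.0 | grep -i drop
sysctl dev.ix.0 | grep -i missed
netstat -m    # mbuf cluster exhaustion also shows up as input drops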

 

Another question: according to https://wiki.freebsd.org/10gFreeBSD/Intel10G

dev.ix.0.fc: 3

I don't understand the value 3; I only know 0 = off, 1 = on.

 

And what are these responsible for?

harvest_interrupt="YES" # Entropy device harvests interrupt randomness
harvest_ethernet="YES"  # Entropy device harvests ethernet randomness
harvest_p_to_p="YES"    # Entropy device harvests point-to-point randomness

 

	/* Check for a software override of the flow control settings, and
	 * setup the device accordingly.  If auto-negotiation is enabled, then
	 * software will have to set the "PAUSE" bits to the correct value in
	 * the Transmit Config Word Register (TXCW) and re-start auto-
	 * negotiation.  However, if auto-negotiation is disabled, then
	 * software will have to manually configure the two flow control enable
	 * bits in the CTRL register.
	 *
	 * The possible values of the "fc" parameter are:
	 *      0:  Flow control is completely disabled
	 *      1:  Rx flow control is enabled (we can receive pause frames,
	 *          but not send pause frames).
	 *      2:  Tx flow control is enabled (we can send pause frames but we
	 *          do not support receiving pause frames).
	 *      3:  Both Rx and Tx flow control (symmetric) are enabled.
	 */
	switch (hw->fc.current_mode) {
	case e1000_fc_none:
		/* Flow control completely disabled by a software over-ride. */
		txcw = (E1000_TXCW_ANE | E1000_TXCW_FD);
		break;
	case e1000_fc_rx_pause:
		/* Rx Flow control is enabled and Tx Flow control is disabled
		 * by a software over-ride. Since there really isn't a way to
		 * advertise that we are capable of Rx Pause ONLY, we will
		 * advertise that we support both symmetric and asymmetric Rx
		 * PAUSE.  Later, we will disable the adapter's ability to send
		 * PAUSE frames.
		 */
		txcw = (E1000_TXCW_ANE | E1000_TXCW_FD | E1000_TXCW_PAUSE_MASK);
		break;
	case e1000_fc_tx_pause:
		/* Tx Flow control is enabled, and Rx Flow control is disabled,
		 * by a software over-ride.
		 */
		txcw = (E1000_TXCW_ANE | E1000_TXCW_FD | E1000_TXCW_ASM_DIR);
		break;
	case e1000_fc_full:
		/* Flow control (both Rx and Tx) is enabled by a software
		 * over-ride.
		 */
		txcw = (E1000_TXCW_ANE | E1000_TXCW_FD | E1000_TXCW_PAUSE_MASK);
		break;
	default:
		DEBUGOUT("Flow control param set incorrectly\n");
		return -E1000_ERR_CONFIG;
		break;
	}
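So in this enumeration 3 simply means symmetric flow control: both Rx and Tx PAUSE are enabled, which matches the value this box reports. If you decide a router is better off without it (a judgment call, not advice from the thread):

sysctl dev.ix.0.fc=0    # disable 802.3x flow control on ix0 entirely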

 

For a router, yes, IMHO it's better to disable it, at least on the receive side.

Harvesting entropy from the NICs on a router is overkill: you get far more than you need, and a router doesn't need it anyway.

I set it directly via sysctl anyway.
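On FreeBSD 9 the rc.conf knobs quoted above map to sysctls, so disabling harvesting directly looks like:

sysctl kern.random.sys.harvest.ethernet=0
sysctl kern.random.sys.harvest.point_to_point=0
sysctl kern.random.sys.harvest.interrupt=0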
