
7606 Input queue [IQD]

Can anyone tell me why the queues are overflowing and packets are being dropped?

On one end is a 3750E (its counters show no drops); on the other, a 7606 (see below).

 

 

TenGigabitEthernet1/1 is up, line protocol is up (connected)

Hardware is C6k 10000Mb 802.3, address is 0018.7383.57b0 (bia 0018.7383.57b0)

Description: Link-10G

MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,

reliability 255/255, txload 71/255, rxload 80/255

Encapsulation ARPA, loopback not set

Keepalive set (10 sec)

Full-duplex, 10Gb/s

input flow-control is off, output flow-control is off

ARP type: ARPA, ARP Timeout 04:00:00

Last input 00:00:30, output 00:00:11, output hang never

Last clearing of "show interface" counters 00:31:21

Input queue: 0/2000/34692376/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue: 0/40 (size/max)

5 minute input rate 3168815000 bits/sec, 685151 packets/sec

5 minute output rate 2809675000 bits/sec, 509848 packets/sec

1290488983 packets input, 750730664300 bytes, 0 no buffer

Received 6288878 broadcasts (4631800 multicasts)

 

 

7606#sh int summary

 

*: interface is up

IHQ: pkts in input hold queue     IQD: pkts dropped from input queue
OHQ: pkts in output hold queue    OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec)          RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec)          TXPS: tx rate (pkts/sec)
TRTL: throttle count

 

  Interface              IHQ       IQD  OHQ  OQD        RXBS    RXPS        TXBS    TXPS  TRTL
-------------------------------------------------------------------------
* TenGigabitEthernet1/1    0  35075363    0    0  3168346000  685623  2808938000  509391     0
* TenGigabitEthernet1/2    0         0    0    0   162663000   40477   457168000   80242     0
* TenGigabitEthernet1/3    0         0    0    0   374956000  126190   978998000   15463

 

7606# sh processes cpu sorted | ex 0.00

CPU utilization for five seconds: 3%/0%; one minute: 3%; five minutes: 3%

PID  Runtime(ms)    Invoked  uSecs   5Sec   1Min   5Min  TTY  Process
 10     18282132  232233547     78  1.11%  0.87%  0.89%    0  ARP Input
188     31714340  508301128     62  0.95%  0.82%  0.90%    0  IP Input
241       625164    2857125    218  0.07%  0.03%  0.02%    0  IPC LC Message H
 63          996      19053     52  0.07%  0.29%  0.08%    2  Virtual Exec

 

 

7606#sh int te1/1 switching

TenGigabitEthernet1/1 Link-10G

Throttle count 0

Drops RP 37161885 SP 0

SPD Flushes Fast 0 SSE 0

SPD Aggress Fast 0

SPD Priority Inputs 0 Drops 0

 

Protocol  Path          Pkts In   Chars In  Pkts Out  Chars Out
CDP       Process         62557   26023712     62590   29292120
          Cache misses        0
          Fast                0          0         0          0
          Auton/SSE           0          0         0          0

 

Strangely, the counters in sh int te1/1 switching do not seem to be updating.
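The input-queue drops (IQD) counted here are packets punted to the route processor's CPU that overflowed the interface input hold queue, so the next step is usually to see what is actually being punted. A hedged sketch of commands that can help on a 7600/Sup720 (the netdr capture syntax varies by IOS release, so treat it as an assumption):

```
! Inband (punt-path) channel statistics on the RP
7606# show ibc

! Capture a sample of punted packets; clean up with undebug all afterwards
7606# debug netdr capture rx
7606# show netdr captured-packets
7606# undebug all
```

The capture shows the source, destination, and reason traffic is reaching the CPU, which is exactly what an IQD hunt needs.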

 

 

 

 

sh ip int TenGigabitEthernet1/1 ?


Judging by a post on the Cisco forum, this is a WS-X6704-10GE. Problems with the small buffers on these cards have come up here before.
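If the card type needs confirming from the CLI, the standard inventory commands will show it (slot 1 below is an assumption based on the Te1/1 interface numbering):

```
7606# show module
7606# show idprom module 1
```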


Cisco IOS Interface and Hardware Component Command Reference

 

Entering the default fabric buffer-reserve command is the same as entering the fabric buffer-reserve queue command.
You can enter the fabric buffer-reserve command to improve system throughput by reserving ASIC buffers.
This command is supported on the following modules:

•WS-X6704-10GE
•WS-X6748-SFP
•WS-X6748-GE-TX
•WS-X6724-SFP

Examples
This example shows how to reserve the high (0x5050) ASIC buffer spaces:

Router(config)# fabric buffer-reserve high
Router(config)#


This example shows how to reserve the low (0x3030) ASIC buffer spaces:
Router(config)# fabric buffer-reserve low
Router(config)#


http://www.cisco.com/en/US/products/hw/rou...0800a7b80.shtml

Caution: Use this command only under the direction of Cisco TAC.

These are common circumstances where this command is useful:

 

* Line protocol goes down for multiple interfaces

* Overruns are seen on multiple interfaces

* Ports frequently leave and join EtherChannel

* TestMacNotification test repeatedly fails for line cards with DFC

Buffer Leaks

 

Below is an example of the output of the show buffers command:

 

Big buffers, 1524 bytes (total 1556, permanent 50):

52 in free list (5 min, 150 max allowed)

43670437 hits, 5134 misses, 0 trims, 1506 created

756 failures (0 no memory)

 

This output indicates a buffer leak in the big buffers pool. There is a total of 1556 big buffers in the router and only 52 are in the free list. Something is using all the buffers, and not freeing them. For more information on buffer leaks, see Troubleshooting Buffer Leaks.
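For the thread's case, a quick way to check whether the pools are actually starved is to snapshot the failure counters and compare over time; `show buffers leak` is only present in newer IOS releases, so treat its availability as an assumption:

```
7606# show buffers | include failures
! If supported by the running release:
7606# show buffers leak
```

Failure counts that rise between snapshots point at pool exhaustion; static zeros largely rule it out.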

You probably need to look at show buffers and see what it reports under Big buffers.

In theory your in free list should be 0 or 64,

and there should be drops.

 

Edited by alks

7600#sh fabric utilization

 slot  channel  speed  Ingress %  Egress %
    1        0    20G          1         5
    1        1    20G         19        13
    3        0    20G          7         8
    4        0    20G          0        20
    4        1    20G         14         8
    5        0    20G         22        29
    6        0     8G          7         0

 

7600#show buffers | in fail

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

0 failures (0 no memory)

 

7600#show buffers

Buffer elements:

1060 in free list (500 max allowed)

247412142 hits, 0 misses, 1119 created

 

Public buffer pools:

Small buffers, 104 bytes (total 1024, permanent 1024):

1013 in free list (128 min, 2048 max allowed)

173898231 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Medium buffers, 256 bytes (total 3000, permanent 3000):

3000 in free list (64 min, 3000 max allowed)

7848641 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Middle buffers, 600 bytes (total 512, permanent 512):

510 in free list (64 min, 1024 max allowed)

9107110 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Big buffers, 1536 bytes (total 1000, permanent 1000):

998 in free list (64 min, 1000 max allowed)

7214593 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 10, permanent 10):

10 in free list (0 min, 300 max allowed)

491 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Large buffers, 9240 bytes (total 8, permanent 8):

8 in free list (0 min, 10 max allowed)

261 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Huge buffers, 18024 bytes (total 2, permanent 2, peak 4 @ 5d02h):

1 in free list (0 min, 4 max allowed)

29159 hits, 15 misses, 30 trims, 30 created

0 failures (0 no memory)

 

Interface buffer pools:

Syslog ED Pool buffers, 600 bytes (total 150, permanent 150):

118 in free list (150 min, 150 max allowed)

2305 hits, 0 misses

LI Middle buffers, 600 bytes (total 512, permanent 256, peak 512 @ 5d02h):

256 in free list (256 min, 768 max allowed)

171 hits, 85 fallbacks, 0 trims, 256 created

0 failures (0 no memory)

256 max cache size, 256 in cache

0 hits in cache, 0 misses in cache

DMA-2 buffers, 1536 bytes (total 512, permanent 512):

256 in free list (0 min, 512 max allowed)

256 hits, 0 misses

256 max cache size, 256 in cache

0 hits in cache, 0 misses in cache

BFD Private Pool buffers, 1536 bytes (total 2400, permanent 2400):

2400 in free list (100 min, 3000 max allowed)

0 hits, 0 fallbacks, 0 trims, 0 created

0 failures (0 no memory)

LI Big buffers, 1536 bytes (total 512, permanent 256, peak 512 @ 5d02h):

256 in free list (256 min, 768 max allowed)

171 hits, 85 fallbacks, 0 trims, 256 created

0 failures (0 no memory)

256 max cache size, 256 in cache

0 hits in cache, 0 misses in cache

EOBC0/0 buffers, 1686 bytes (total 1536, permanent 1536):

654 in free list (0 min, 3072 max allowed)

882 hits, 0 fallbacks

768 max cache size, 364 in cache

28602163 hits in cache, 114 misses in cache

FIFO0/2 buffers, 1686 bytes (total 1536, permanent 1536):

258 in free list (0 min, 3072 max allowed)

1278 hits, 0 fallbacks

768 max cache size, 753 in cache

14359994 hits in cache, 510 misses in cache

IPC buffers, 4096 bytes (total 672, permanent 672):

590 in free list (224 min, 2240 max allowed)

856603 hits, 0 fallbacks, 0 trims, 0 created

0 failures (0 no memory)

LI Very Big buffers, 4520 bytes (total 257, permanent 128, peak 257 @ 5d

129 in free list (128 min, 384 max allowed)

85 hits, 43 fallbacks, 91 trims, 220 created

0 failures (0 no memory)

128 max cache size, 128 in cache

0 hits in cache, 0 misses in cache

Private Huge IPC buffers, 18024 bytes (total 2, permanent 2):

2 in free list (1 min, 4 max allowed)

0 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Private Huge buffers, 65280 bytes (total 2, permanent 2, peak 3 @ 1d03h)

2 in free list (1 min, 4 max allowed)

44388 hits, 0 misses, 1 trims, 1 created

0 failures (0 no memory)

 

Header pools:

 

Particle Clones:

1024 clones, 0 hits, 0 misses

 

Public particle pools:

Normal buffers, 1532 bytes (total 1024, permanent 1024):

1024 in free list (512 min, 2048 max allowed)

0 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

F/S buffers, 1590 bytes (total 512, permanent 512):

0 in free list (0 min, 512 max allowed)

512 hits, 0 misses

512 max cache size, 512 in cache

0 hits in cache, 0 misses in cache

 

Private particle pools:

IBC0/0 buffers, 640 bytes (total 512, permanent 512):

0 in free list (0 min, 512 max allowed)

512 hits, 0 fallbacks

512 max cache size, 256 in cache

220490525 hits in cache, 0 misses in cache

It looks like RSPAN was the cause: part of the traffic terminating on an SVI was being copied into the RSPAN session. I removed the monitoring session, and the queue drops are gone.

 

As for fabric buffer-reserve high, there seem to be rather a lot of warnings from TAC about it.
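For reference, checking and removing an RSPAN session is a one-liner from global configuration; the session number 1 below is an assumption:

```
7606# show monitor session all
7606(config)# no monitor session 1
```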

 

