Asset ID: 1-72-2029693.1
Update Date: 2018-03-07
Keywords:
Solution Type: Problem Resolution Sure Solution
2029693.1:
Brocade FC Switch Can Cause Link Reset for Credit Recovery on F_Ports (Slow Drain Device)
Related Items:
- Brocade DCX Backbone
- SPARC T5-2
Related Categories:
- PLA-Support>Sun Systems>DISK>HBA>SN-DK: FC HBA
In this Document
Created from <SR 3-10725840801>
Applies to:
Brocade DCX Backbone - Version All Versions and later
SPARC T5-2 - Version All Versions and later
Information in this document applies to any platform.
Symptoms
Several Solaris 10 servers with Oracle Emulex LPe16002-M6 FC HBAs are connected to the SAN (Brocade switches),
accessing disk and tape devices.
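For reference, on Solaris 10 the HBA ports, their WWNs and their link state can be listed with the fcinfo utility. The two commands below are only a sketch; the port WWN is the masked value from the device listing further down in this note and is used purely as a placeholder:

# fcinfo hba-port
# fcinfo remote-port -p 10000090fa83XXX8

fcinfo hba-port shows each local port's WWN, model, driver and state; fcinfo remote-port lists the remote ports visible through the given local port, which corresponds to the device listings shown later in this document.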
From time to time, link down / link up messages are logged:
May 13 09:12:50 server01 emlxs: [ID 349649 kern.info] [ 5.03EF]emlxs2: NOTICE: 710: Link down.
May 13 09:12:51 server01 emlxs: [ID 349649 kern.info] [ 5.03EF]emlxs3: NOTICE: 710: Link down.
May 13 09:12:58 server01 emlxs: [ID 349649 kern.info] [ 5.062D]emlxs2: NOTICE: 720: Link up. (8Gb, fabric, initiator)
May 13 09:13:01 server01 emlxs: [ID 349649 kern.info] [ 5.062D]emlxs3: NOTICE: 720: Link up. (8Gb, fabric, initiator)
Whatever is triggering this issue affects several Solaris 10 servers, but not at the same time;
for example, on server02 the same errors are seen some hours later (in this case emlxs extended logging is enabled; see the configuration sketch after this log excerpt):
May 13 20:35:35 server02 emlxs: [ID 349649 kern.info] [ 5.03EF]emlxs1: NOTICE: 710: Link down.
May 13 20:35:36 server02 emlxs: [ID 349649 kern.info] [ 5.03EF]emlxs0: NOTICE: 710: Link down.
May 13 20:35:55 server02 emlxs: [ID 349649 kern.info] [16.08BA]emlxs1: DEBUG:1801: FCF detail. (fcf_linkup_notify: FCFTAB_OFFLINE flag=0 fcfi_online=0. FCFTAB Link up. >)
May 13 20:35:55 server02 emlxs: [ID 349649 kern.info] [16.10C0]emlxs1: DEBUG:1801: FCF detail. (fc_fcftab_linkup_evt_action:0 FCFTAB_OFFLINE:E_LINKUP arg=0 gen=0. Link up.)
May 13 20:35:56 server02 emlxs: [ID 349649 kern.info] [16.08BA]emlxs0: DEBUG:1801: FCF detail. (fcf_linkup_notify: FCFTAB_OFFLINE flag=0 fcfi_online=0. FCFTAB Link up. >)
May 13 20:35:56 server02 emlxs: [ID 349649 kern.info] [16.10C0]emlxs0: DEBUG:1801: FCF detail. (fc_fcftab_linkup_evt_action:0 FCFTAB_OFFLINE:E_LINKUP arg=0 gen=0. Link up.)
May 13 20:35:56 server02 emlxs: [ID 349649 kern.info] [ 5.062D]emlxs1: NOTICE: 720: Link up. (8Gb, fabric, initiator)
May 13 20:35:57 server02 emlxs: [ID 349649 kern.info] [ 5.062D]emlxs0: NOTICE: 720: Link up. (8Gb, fabric, initiator)
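For reference, the extended emlxs logging mentioned above is normally enabled through the driver configuration file rather than at run time. The following is only a sketch assuming the standard emlxs.conf verbosity-mask parameters; the exact parameter names and mask values should be confirmed against the emlxs.conf shipped with the installed driver, and the driver has to be reloaded (or the host rebooted) for the change to take effect:

# Extract from /kernel/drv/emlxs.conf (illustrative values; raising the masks logs NOTICE/DEBUG detail)
log-notices=0xffffffff;
log-warnings=0xffffffff;
log-errors=0xffffffff;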
The link down/up messages seen on the hosts can be due to the following possible scenarios:
(a) physical issues (media/optics/HBA), which may cause link reset recovery --> in this case there are no errors in porterrshow (see the sketch after this list), so this is discarded
(b) a misbehaving device sending its own link resets
(c) link resets issued by the switch for credit recovery on F_Ports (slow drain device)
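As a quick check for scenario (a), the per-port error counters can be reviewed on the switch with porterrshow. The excerpt below is only illustrative and is patterned on the statistics shown later in this document (the port number and counter values are examples, not new measurements):

switch:admin> porterrshow
          frames      enc  crc  crc   too  too  bad  enc  disc link loss loss frjt fbsy c3timeout pcs
          tx     rx   in   err  g_eof shrt long eof  out  c3   fail sync sig            tx   rx   err
 313:  171.5m  2.7g    0    0    0     0    0    0    0    77   102   5    5    0   0    75   0    0

Non-zero "enc in", "crc err" or "enc out" counters would point to a physical-layer (media/optics/HBA) problem; here only "disc c3", "link fail" and "c3timeout tx" increase, which is the typical signature of a slow drain device rather than a faulty link.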
On the Brocade FC switch, on the ports where the FC HBA is connected, e.g. port 4/41 (313), there are events like this:
2015/05/27-06:53:40:068142, [AN-1003], 8651/1717, SLOT 7 | FID 126, WARNING, SWB, Latency bottleneck on F-Port 4/41. 0.33 pct. of 300 secs. affected. Avg. delay 49 us. Avg. slowdown 79., traf.c, line: 4459, comp:trafd, ltime:2015/05/27-06:53:40:059666
2015/05/27-06:53:41:688829, [MAPS-1002], 8652/1718, SLOT 7 | FID 126, ERROR, SWB, Port 4/41, Condition=ALL_HOST_PORTS(C3TXTO/min>3), Current Value:[C3TXTO,35 Timeouts], RuleName=defALL_HOST_PORTSC3TXTO_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:53:41:688766
2015/05/27-06:53:42:581100, [C2-1014], 8654/1719, SLOT 7 | CHASSIS, WARNING, BRZDCX_B, Link Reset on Port S4,P313(33) vc_no=0 crd(s)lost=80 auto trigger., OID:0x43428021, c2_ops.c, line: 5948, comp:insmod, ltime:2015/05/27-06:53:42:579502
2015/05/27-06:53:47:702532, [MAPS-1003], 8655/1720, SLOT 7 | FID 126, WARNING, SWB, Port 4/41, Condition=NON_E_F_PORTS(LF/min>3), Current Value:[LF,43], RuleName=defNON_E_F_PORTSLF_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:53:47:702464
2015/05/27-06:53:47:791413, [MAPS-1003], 8656/1721, SLOT 7 | FID 126, WARNING, SWB, Port 4/41, Condition=ALL_HOST_PORTS(LF/min>3), Current Value:[LF,43], RuleName=defALL_HOST_PORTSLF_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:53:47:791353
2015/05/27-06:54:05:675546, [MAPS-1003], 8657/1722, SLOT 7 | FID 126, WARNING, SWB, Port 4/41, Condition=ALL_OTHER_F_PORTS(LF/min>3), Current Value:[LF,123], RuleName=defALL_OTHER_F_PORTSLF_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:54:05:675480
2015/05/27-06:54:11:404890, [AN-1005], 8658/1723, SLOT 7 | FID 126, INFO, SWB, Port 4/41 has Latency bottleneck cleared., traf.c, line: 4462, comp:trafd, ltime:2015/05/27-06:54:11:404830
Bottleneck detection detects latency towards the device attached to the port, MAPS raises several events, credit recovery resets the link, and finally the bottleneck detector reports that the latency bottleneck has cleared.
Changes
The FC HBA was replaced and the problem persists.
Cause
In this particular case, these messages are reported because the bottleneck monitoring feature "credit recovery on back-end ports" on the Brocade FC switch is working as designed.
The link down / link up messages above correspond to link resets on a Brocade FC switch **internal link**.
These soft link resets are expected behaviour where bottleneck recovery is implemented,
and the soft reset simply resets credits at both ends of the link.
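To confirm that these resets come from the switch's own recovery mechanisms rather than from the attached device, the state of the relevant features can be checked on the switch. The two commands below are only a minimal sketch: creditrecovmode reports the back-end credit recovery setting, and bottleneckmon --status reports whether bottleneck detection and its alerts are enabled (availability and exact options depend on the Fabric OS version, so verify them against the FOS command reference):

switch:admin> creditrecovmode --show
switch:admin> bottleneckmon --status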
Solution
Contact the FC switch vendor to troubleshoot this problem further; the solution may differ depending on the situation.
Some additional information about types of bottlenecks
The bottleneck detection feature detects two types of bottlenecks:
1. Latency bottleneck
A latency bottleneck is a port where the offered load exceeds the rate at which the other end of the link
can continuously accept traffic, but does not exceed the physical capacity of the link. This condition
can be caused by a device attached to the fabric that is slow to process received frames and send
back credit returns. A latency bottleneck caused by such a device can spread through the fabric and
can slow down unrelated flows that share links with the slow flow.
By default, bottleneck detection detects latency bottlenecks that are severe enough that they cause 98
percent loss of throughput. This default value can be modified to a different percentage.
2. Congestion bottleneck
A congestion bottleneck is a port that is unable to transmit frames at the offered rate because the
offered rate is greater than the physical data rate of the line. For example, this condition can be
caused by trying to transfer data at 8 Gbps over a 4 Gbps ISL.
You can use the bottleneckMon command to configure separate alert thresholds for congestion and
latency bottlenecks.
Advanced settings allow you to refine the criterion for defining latency bottleneck conditions to allow
for more (or less) sensitive monitoring at the sub-second level. For example, you would use the
advanced settings to change the default value of 98 percent for loss of throughput.
Refer to "Advanced bottleneck detection settings" in the "Bottleneck Detection" chapter of the "Fabric OS Administrator's Guide FOS 7.3.0" for specific details; a minimal command sketch follows.
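The following is only an illustration of how the alert thresholds mentioned above could be displayed and adjusted with the bottleneckMon command. The port (4/41) and threshold values are examples, and the exact option names, in particular for the advanced sub-second settings, should be verified against the FOS 7.3 command reference:

switch:admin> bottleneckmon --status                     <- is the feature enabled, and with which defaults
switch:admin> bottleneckmon --show 4/41                  <- history of bottleneck conditions on one port
switch:admin> bottleneckmon --config -alert -lthresh 0.2 -time 300 -qtime 300 4/41
                                                         <- alert when more than 20 percent of each second is
                                                            affected by latency, averaged over a 300 s window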
If a bottleneck is reported, you can investigate and optimize the resource allocation for the fabric.
Using the zone setup and Top Talkers, you can also determine which flows are destined to any affected F_Ports.
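Top Talkers is the Advanced Performance Monitoring facility that ranks flows by throughput. The sketch below shows how it could be pointed at the affected F_Port to see which SID/DID flows cross it; perfttmon is the usual CLI entry point, but the option syntax and the required license should be confirmed for the installed FOS release:

switch:admin> perfttmon --add egress 4/41     <- start monitoring flows leaving the F_Port towards the device
switch:admin> perfttmon --show 4/41           <- list the top flows by SID/DID and throughput
switch:admin> perfttmon --delete 4/41         <- stop monitoring when done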
Some additional information as a possible solution approach, in case a bottleneck is detected,
from the Fabric OS Administrator's Guide supporting Fabric OS 7.3.0:
You can use the bottleneck detection feature with other Adaptive Networking features to optimize the
performance of your fabric. For example, you can do the following (a brief, illustrative command sketch follows this list):
- If the bottleneck detection feature detects a latency bottleneck, you can use TI zones or QoS
SID/DID traffic prioritization to isolate latency device traffic from high priority application traffic.
- If the bottleneck detection feature detects ISL congestion, you can use ingress rate limiting to slow
down low priority application traffic, if it is contributing to the congestion.
- Traffic Isolation Zoning
Traffic Isolation Zoning (TI zoning) allows you to control the flow of interswitch traffic by creating a
dedicated path for traffic flowing from a specific set of source ports (F_Ports). Traffic Isolation Zoning
does not require a license. Refer to Traffic Isolation Zoning on page 343 for more information about this
feature.
- Quality of Service (QoS)
QoS allows you to categorize the traffic flow between a host and target as having a high, medium, or
low priority. QoS does not require a license. Refer to QoS on page 379 for more information about this
feature.
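To make the two options above concrete, the following is a minimal, hypothetical sketch of how a high-priority flow could be separated with a QoS zone and how a contributing host port could be rate limited at ingress. The zone name, WWN placeholders and rate value are examples only, and the QoS zone-name prefixes and portcfgqos options should be verified in the FOS 7.3 Administrator's Guide before use:

switch:admin> zonecreate "QOSH_app1_zone", "<host_port_WWN>; <target_port_WWN>"
                                              <- the QOSH_ prefix marks the zone's traffic as high priority
switch:admin> cfgadd "current_cfg", "QOSH_app1_zone"
switch:admin> cfgenable "current_cfg"
switch:admin> portcfgqos --setratelimit 4/41 4000
                                              <- limit ingress on the slow drain host port to about 4 Gbps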
================================
Here are some notes from the Brocade escalation:
The TSM server is the Oracle Solaris 10 server:
HOST NAME : server01
HOST TYPE : SPARC T5-2
HOST OS : Solaris 10 1/13 s10s_u11wos_24a SPARC
HOST UPTIME: 20:09, 1 user, load average: 4.33, 3.15, 3.20
C# INST# PORT WWN MODEL FCODE STATUS DEVICE PATH
-- ----- -------- ----- ----- ------ -----------
c8 emlxs2 10000090fa83XXX8 7101684 (LPe16002-M6) 4.03a1 CONNECTED /pci@380/pci@1/pci@0/pci@5/SUNW,emlxs@0 <<-- New
c9 emlxs3 10000090fa83XXX9 7101684 (LPe16002-M6) 4.03a1 CONNECTED /pci@380/pci@1/pci@0/pci@5/SUNW,emlxs@0,1 <<-- New
c8 = emlxs2 (fp5) -> /devices/pci@380/pci@1/pci@0/pci@5/SUNW,emlxs@0/fp@0,0:devctl
================================================================================
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 3ce00 0 50060e80101dcc33 50060e80101dcc33 0x0 (Disk device)
1 3ce40 0 50060e80101dcc32 50060e80101dcc32 0x0 (Disk device)
2 3d840 0 50060e8007e33e09 50060e8007e33e09 0x0 (Disk device)
3 3f9c0 0 50060e8007e33e01 50060e8007e33e01 0x0 (Disk device)
4 79700 0 50060e8007e33f09 50060e8007e33f09 0x0 (Disk device)
5 79780 0 50060e8007e33f01 50060e8007e33f01 0x0 (Disk device)
6 7fc00 0 50060e80101dc972 50060e80101dc972 0x0 (Disk device)
7 7fd00 0 50060e80101dc973 50060e80101dc973 0x0 (Disk device)
8 4b180 0 10000090fa83XXX8 20000090fa83XXX8 0x1f (Unknown Type,Host Bus Adapter)
c9 = emlxs3 (fp8) -> /devices/pci@380/pci@1/pci@0/pci@5/SUNW,emlxs@0,1/fp@0,0:devctl
================================================================================
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 2c580 0 10000090fa0a21eb 20000090fa0a21eb 0x8 (Medium changer device)
1 2c5c0 0 10000090fa0a21ea 20000090fa0a21ea 0x8 (Medium changer device)
2 2f1c1 0 2101000d77cba962 2001000d77cba962 0x8 (Medium changer device)
3 86300 0 500104f000a3be91 500104f000a3be90 0x1 (Tape device)
4 87200 0 500104f000a3beb2 500104f000a3beb1 0x1 (Tape device)
5 87300 0 500104f000a3beb8 500104f000a3beb7 0x1 (Tape device)
6 87400 0 500104f000a3be79 500104f000a3be78 0x1 (Tape device)
7 87500 0 500104f000a3be7f 500104f000a3be7e 0x1 (Tape device)
8 87f00 0 500104f000a3be49 500104f000a3be48 0x1 (Tape device)
9 8f000 0 500104f000a3be3a 500104f000a3be39 0x1 (Tape device)
10 8fe80 0 500104f000a3bed0 500104f000a3becf 0x1 (Tape device)
11 12b000 0 500104f000a3bfb1 500104f000a3bfb0 0x1 (Tape device)
12 12b441 0 2101000d77caf522 2001000d77caf522 0x8 (Medium changer device)
13 12b600 0 500104f000a3bf93 500104f000a3bf92 0x1 (Tape device)
14 12b640 0 500104f000a3bf99 500104f000a3bf98 0x1 (Tape device)
15 12b680 0 500104f000a3bfa2 500104f000a3bfa1 0x1 (Tape device)
16 12b6c0 0 500104f000a3bf78 500104f000a3bf77 0x1 (Tape device)
17 12b700 0 500104f000a3bf69 500104f000a3bf68 0x1 (Tape device)
18 12b740 0 500104f000a3bf51 500104f000a3bf50 0x1 (Tape device)
19 12c580 0 10000090fa0a1ecb 20000090fa0a1ecb 0x8 (Medium changer device)
20 12c5c0 0 10000090fa0a1eca 20000090fa0a1eca 0x8 (Medium changer device)
21 12f5c0 0 500104f000a3bf63 500104f000a3bf62 0x1 (Tape device)
22 2b180 0 10000090fa83XXX9 20000090fa83XXX9 0x1f (Unknown Type,Host Bus Adapter)
In this particular case, every time there was a bottleneck on port 313, port 313 went offline/online in both fabrics at the same time;
the FC HBA is dual ported and each of its ports connects to port 313 of its respective fabric.
Found file:./SW8D0B-S7cp-201506091312.SSHOW_SYS.txt
CURRENT CONTEXT -- 2 , 126
Index Slot Port Address Media Speed State Proto
============================================================
313 4 41 02b180 id 8G Online FC F-Port 10:00:00:90:fa:83:XX:X8
On the Brocade switch, for port 4/41 (313) the events are like this:
2015/05/27-06:53:40:068142, [AN-1003], 8651/1717, SLOT 7 | FID 126, WARNING, SW6L1B, Latency bottleneck on F-Port 4/41. 0.33 pct. of 300 secs. affected. Avg. delay 49 us. Avg. slowdown 79., traf.c, line: 4459, comp:trafd, ltime:2015/05/27-06:53:40:059666
2015/05/27-06:53:41:688829, [MAPS-1002], 8652/1718, SLOT 7 | FID 126, ERROR, SW6L1B, Port 4/41, Condition=ALL_HOST_PORTS(C3TXTO/min>3), Current Value:[C3TXTO,35 Timeouts], RuleName=defALL_HOST_PORTSC3TXTO_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:53:41:688766
2015/05/27-06:53:42:581100, [C2-1014], 8654/1719, SLOT 7 | CHASSIS, WARNING, BRZDCX_B, Link Reset on Port S4,P313(33) vc_no=0 crd(s)lost=80 auto trigger., OID:0x43428021, c2_ops.c, line: 5948, comp:insmod, ltime:2015/05/27-06:53:42:579502
2015/05/27-06:53:47:702532, [MAPS-1003], 8655/1720, SLOT 7 | FID 126, WARNING, SW6L1B, Port 4/41, Condition=NON_E_F_PORTS(LF/min>3), Current Value:[LF,43], RuleName=defNON_E_F_PORTSLF_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:53:47:702464
2015/05/27-06:53:47:791413, [MAPS-1003], 8656/1721, SLOT 7 | FID 126, WARNING, SW6L1B, Port 4/41, Condition=ALL_HOST_PORTS(LF/min>3), Current Value:[LF,43], RuleName=defALL_HOST_PORTSLF_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:53:47:791353
2015/05/27-06:54:05:675546, [MAPS-1003], 8657/1722, SLOT 7 | FID 126, WARNING, SW6L1B, Port 4/41, Condition=ALL_OTHER_F_PORTS(LF/min>3), Current Value:[LF,123], RuleName=defALL_OTHER_F_PORTSLF_3, Dashboard Category=Port Health., actionHndlr.c, line: 755, comp:raslog, ltime:2015/05/27-06:54:05:675480
2015/05/27-06:54:11:404890, [AN-1005], 8658/1723, SLOT 7 | FID 126, INFO, SW6L1B, Port 4/41 has Latency bottleneck cleared., traf.c, line: 4462, comp:trafd, ltime:2015/05/27-06:54:11:404830
Bottleneck detection detects latency towards the device attached to the port, MAPS raises several events, credit recovery resets the link, and finally the bottleneck detector reports that the latency bottleneck has cleared.
Something similar happens once more here:
2015/06/06-00:37:48:447372, [AN-1003], 9472/1821, SLOT 7 | FID 126, WARNING, SW6L1B, Latency bottleneck on F-Port 4/41. 0.33 pct. of 300 secs. affected. Avg. delay 135 us. Avg. slowdown 462962., traf.c, line: 4459, comp:trafd, ltime:2015/06/06-00:37:48:447313
2015/06/06-00:37:52:228560, [MAPS-1003], 9473/1822, SLOT 7 | FID 126, WARNING, SW6L1B, Port 4/41, Condition=NON_E_F_PORTS(LF/min>3), Current Value:[LF,19], RuleName=defNON_E_F_PORTSLF_3, Dashboard Category=Port Health., actionHndlr .c, line: 755, comp:raslog, ltime:2015/06/06-00:37:52:228493
2015/06/06-00:37:52:288286, [MAPS-1003], 9474/1823, SLOT 7 | FID 126, WARNING, SW6L1B, Port 4/41, Condition=ALL_HOST_PORTS(LF/min>3), Current Value:[LF,19], RuleName=defALL_HOST_PORTSLF_3, Dashboard Category=Port Health., actionHnd lr.c, line: 755, comp:raslog, ltime:2015/06/06-00:37:52:288224
2015/06/06-00:38:10:194272, [MAPS-1003], 9475/1824, SLOT 7 | FID 126, WARNING, SW6L1B, Port 4/41, Condition=ALL_OTHER_F_PORTS(LF/min>3), Current Value:[LF,110], RuleName=defALL_OTHER_F_PORTSLF_3, Dashboard Category=Port Health., ac tionHndlr.c, line: 755, comp:raslog, ltime:2015/06/06-00:38:10:194204
2015/06/06-00:38:17:741522, [AN-1005], 9476/1825, SLOT 7 | FID 126, INFO, SW6L1B, Port 4/41 has Latency bottleneck cleared., traf.c, line: 4462, comp:trafd, ltime:2015/06/06-00:38:17:741462
There are no back-end link incidents coincident with the events on this port.
Confirmation that there was a latency bottleneck on BRZDCX_B-10.217.130.89-10000005339BEAFF at ~1700 on the 16th.
SW8D0B-S7cp-201506170848.RAS_POST.txt:2015/06/16-17:00:28:701960, [AN-1003], 11541/2093, SLOT 7 | FID 126, WARNING, SW6L1B, Latency bottleneck on F-Port 4/41. 0.33 pct. of 300 secs. affected. Avg. delay 17 us. Avg. slowdown 798872., traf.c, line: 4459, comp:trafd, ltime:2015/06/16-17:00:28:697809
SW8D0B-S7cp-201506170848.RAS_POST.txt:2015/06/16-17:00:57:011433, [AN-1005], 11548/2098, SLOT 7 | FID 126, INFO, SW6L1B, Port 4/41 has Latency bottleneck cleared., traf.c, line: 4462, comp:trafd, ltime:2015/06/16-17:00:57:005571
Confirmation that there was a latency bottleneck on BRZDCX_A-10.217.130.34-10000005339BC6FF at ~1700 on the 16th.
SW8D0A-S6cp-201506170910.RAS_POST.txt:154 AUDIT, 2015/06/16-17:00:27 (CEST), [AN-1003], WARNING, FABRIC, NONE/root/NONE/None/CLI, ad_0/SW6L1A/FID 126, , Latency bottleneck on port 4/41. 0.00 pct. of 300 secs. affected. Avg. delay 0 us. Avg. slowdown 0.
SW8D0A-S6cp-201506170910.RAS_POST.txt:155 AUDIT, 2015/06/16-17:00:56 (CEST), [AN-1005], INFO, FABRIC, NONE/root/NONE/None/CLI, ad_0/SW6L1A/FID 126, , Slot 4, port 41 has Latency bottleneck cleared.
Looking at the porterrshow statistics for 17 Jun, relative to the times when the statistics were last cleared:
For fabric B
Found file:./SW8D0B-S7cp-201506170836.SSHOW_SYS.txt
The statistics are relative to the last statsclear:
Thu Jun 11 07:50:34 2015 admin, 10.217.25.190, statsclear
frames enc crc crc too too bad enc disc link loss loss frjt fbsy c3timeout pcs
tx rx in err g_eof shrt long eof out c3 fail sync sig tx rx err
313: 171.5m 2.7g 0 0 0 0 0 0 0 77 102 5 5 0 0 75 0 0
For fabric A
SW8D0A-S6cp-201506170858.SSHOW_SYS.txt:Thu Jun 11 07:54:55 2015 admin, 10.217.25.190, statsclear
Found file:./SW8D0A-S6cp-201506170858.SSHOW_SYS.txt
CURRENT CONTEXT -- 2 , 126
frames enc crc crc too too bad enc disc link loss loss frjt fbsy c3timeout pcs
tx rx in err g_eof shrt long eof out c3 fail sync sig tx rx err
313: 1.3g 4.1g 0 0 0 0 0 0 0 129 100 1 3 0 0 114 0 0
So the c3 timeouts are consistent with the latency bottleneck messages, which indicate that this happens on two separate fabrics,
and the obvious cause is the link failures recorded since the respective stats clears.
As stated before, the nature of the events and statistics does not indicate a media/optics issue;
the first suspect was the dual-port FC HBA taking both port 313 links offline at the same time,
but the FC HBA was replaced and the problem persists.
Also the "c3timeout rx" values on the 313 ports for each fabric (as per stats above) are 0,
which means that there is no congestion in the direction TSM ===> Falconstor,
otherwise the resulting backpressure would be indicated here.
It being zero means that no frame was stuck here on its way to the Falconstor (nor any other destination) from this source ID.
Also, since it is illogical/improbable that two events in two different fabrics would cause the TSM server to take down both paths at once,
I would suggest that the common point that could cause this is the Falconstor storage or the single HBA on the TSM side.
In the master log, these messages came first:
Bottleneck detected due to latency at port : 313 Út VI 16 2015 17:00:27 CEST
Latency bottleneck on port 4/41. 0.00 pct. of 300 secs. affected. Avg. delay 0 us. Avg. slowdown 0. Út VI 16 2015 17:00:27 CEST
7.3.1a, , , , , , Frame timeout detected, tx port -1 rx port -1, sid 12c5c0, did 102b180, timestamp 2015-06-16 17:00:27. Út VI 16 2015 17:00:27 CEST
7.2.1c, , , , , , Latency bottleneck on port 4/41. 0.00 pct. of 300 secs. affected. Avg. delay 0 us. Avg. slowdown 0. Út VI 16 2015 17:00:27 CEST
Bottleneck detected due to 0/1 at port : 1 Út VI 16 2015 17:00:27 CEST
Bottleneck detected due to -1/-1 at port : 16 Út VI 16 2015 17:00:27 CEST
Latency bottleneck on E-Port 1. 0.00 pct. of 300 secs. affected. Avg. delay 0 us. Avg. slowdown 0. Út VI 16 2015 17:00:27 CEST
Latency bottleneck on E-Port 2/0. 0.00 pct. of 300 secs. affected. Avg. delay 0 us. Avg. slowdown 0. Út VI 16 2015 17:00:27 CEST
Severe latency bottleneck detected at slot 2 port 0. Út VI 16 2015 17:00:27 CEST
Bottleneck detected due to -1/-1 at port : 313 Út VI 16 2015 17:00:28 CEST
Latency bottleneck on F-Port 4/41. 0.33 pct. of 300 secs. affected. Avg. delay 17 us. Avg. slowdown 798872. Út VI 16 2015 17:00:28 CEST
And 3 seconds later:
SW6L1B Port 313 (4/41) changed its operational state to offline Út VI 16 2015 17:00:30 CEST
SW6L1A Port 313 (4/41) changed its operational state to offline Út VI 16 2015 17:00:30 CEST
This shows that the bottleneck comes first, and the offline/online state change follows.
================================
Attachments
This solution has no attachment