Asset ID: 1-72-2134410.1
Update Date: 2018-03-07
Solution Type: Problem Resolution Sure Solution
Doc ID: 2134410.1
CNA FCoE Universal Emulex HBA link down - ERROR: 530: Mailbox timeout. (HEARTBEAT: Nowait.) on both ports
Related Items
- Solaris x64/x86 Operating System
- SPARC T5-2
Related Categories
- PLA-Support>Sun Systems>DISK>HBA>SN-DK: FC HBA
Created from <SR 3-12388323231>
Applies to:
SPARC T5-2 - Version All Versions and later
Solaris x64/x86 Operating System - Version 10 3/05 and later
Information in this document applies to any platform.
Symptoms
A Cisco Nexus switch reboot triggered the problem. The CNA/FCoE port connected to that switch went link down and, some seconds later with no other apparent cause, the CNA card reported an error that brought both of its ports down.
This is a SPARC T5-2 running Solaris 11.3 SRU 4.5.0 with two CNAs (10Gb Ethernet) accessing an EMC array:
C#   INST#   PORT WWN           MODEL                  FCODE     STATUS     DEVICE PATH
--   -----   --------           -----                  -----     ------     -----------
c11  emlxs4  100000XXXXXXX51a   7101684 (LPe16002-M6)  4.6.13.0  CONNECTED  /pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,2
c12  emlxs5  100000XXXXXXX51b   7101684 (LPe16002-M6)  4.6.13.0  CONNECTED  /pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,3
c16  emlxs6  100000XXXXXXX74c   7101684 (LPe16002-M6)  4.03a1    CONNECTED  /pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2
c17  emlxs7  100000XXXXXXX74d   7101684 (LPe16002-M6)  4.03a1    CONNECTED  /pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3
c11 = emlxs4 (fp32) -> /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,2/fp@0,0:devctl
================================================================================
Port_ID Port WWN Device Description Type
------- -------- ------------------ ----
770040 50060164xxxxxxxx -> Clariion Array (Disk device)
770080 5006016cxxxxxxxx -> Clariion Array (Disk device)
7700c0 100000XXXXXXX51a -> Emulex HBA (Unknown Type,Host Bus Adapter) <<<-- Connected to Switch DID 77
c12 = emlxs5 (fp33) -> /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,3/fp@0,0:devctl
================================================================================
Port_ID Port WWN Device Description Type
------- -------- ------------------ ----
b70040 50060165xxxxxxxx -> Clariion Array (Disk device)
b70060 5006016dxxxxxxxx -> Clariion Array (Disk device)
b70080 100000XXXXXXX51b -> Emulex HBA (Unknown Type,Host Bus Adapter) <<<-- Connected to Switch DID b7
c16 = emlxs6 (fp24) -> /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2/fp@0,0:devctl
================================================================================
Port_ID Port WWN Device Description Type
------- -------- ------------------ ----
770040 50060164xxxxxxxx -> Clariion Array (Disk device)
770080 5006016cxxxxxxxx -> Clariion Array (Disk device)
770140 100000XXXXXXX74c -> Emulex HBA (Unknown Type,Host Bus Adapter) <<<-- Connected to Switch DID 77
c17 = emlxs7 (fp26) -> /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3/fp@0,0:devctl
================================================================================
Port_ID Port WWN Device Description Type
------- -------- ------------------ ----
b70040 50060165xxxxxxxx -> Clariion Array (Disk device)
b70060 5006016dxxxxxxxx -> Clariion Array (Disk device)
b700c0 100000XXXXXXX74d -> Emulex HBA (Unknown Type,Host Bus Adapter) <<<-- Connected to Switch DID b7
Only 2 LUNs from the EMC array are mapped to this server:
4. c0t6006016020703900F87BED5EF1C4E511d0 <DGC-VRAID-0533-120.00GB>
/scsi_vhci/ssd@g6006016020703900f87bed5ef1c4e511
5. c0t6006016020703900893F4643F1C4E511d0 <DGC-VRAID-0533-120.00GB>
/scsi_vhci/ssd@g6006016020703900893f4643f1c4e511
Each LUN has 8 paths under mpxio:
bash-4.1$ more luxadm_display_5006016088604fda.out
DEVICE PROPERTIES for disk: 5006016088604fda
Vendor: DGC
Product ID: VRAID
Revision: 0533
Serial Num: CKM00143102050
Unformatted capacity: 122880.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x0
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c0t6006016020703900F87BED5EF1C4E511d0s2
/devices/scsi_vhci/ssd@g6006016020703900f87bed5ef1c4e511:c,raw
Controller /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3/fp@0,0
Device Address 50060165xxxxxxxx,1
Host controller port WWN 100000XXXXXXX74d
Class primary
State ONLINE
Controller /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3/fp@0,0
Device Address 5006016dxxxxxxxx,1
Host controller port WWN 100000XXXXXXX74d
Class secondary
State ONLINE
Controller /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,3/fp@0,0
Device Address 50060165xxxxxxxx,1
Host controller port WWN 100000XXXXXXX51b
Class primary
State ONLINE
Controller /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,3/fp@0,0
Device Address 5006016dxxxxxxxx,1
Host controller port WWN 100000XXXXXXX51b
Class secondary
State ONLINE
Controller /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2/fp@0,0
Device Address 50060164xxxxxxxx,1
Host controller port WWN 100000XXXXXXX74c
Class primary
State ONLINE
Controller /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2/fp@0,0
Device Address 5006016cxxxxxxxx,1
Host controller port WWN 100000XXXXXXX74c
Class secondary
State ONLINE
Controller /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,2/fp@0,0
Device Address 5006016cxxxxxxxx,1
Host controller port WWN 100000XXXXXXX51a
Class secondary
State ONLINE
Controller /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,2/fp@0,0
Device Address 50060164xxxxxxxx,1
Host controller port WWN 100000XXXXXXX51a
Class primary
State ONLINE
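Path counts like the eight shown above can be sanity-checked from saved `luxadm display` output by counting the Controller entries and ONLINE states; a minimal sketch against an abbreviated two-path stand-in for the real output (the file name is an assumption):

```shell
#!/bin/sh
# Abbreviated stand-in for saved `luxadm display <WWN>` output (2 of the 8 paths).
cat > luxadm_display.out <<'EOF'
   Controller      /devices/pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3/fp@0,0
    Device Address              50060165xxxxxxxx,1
    Class                       primary
    State                       ONLINE
   Controller      /devices/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,3/fp@0,0
    Device Address              5006016dxxxxxxxx,1
    Class                       secondary
    State                       ONLINE
EOF

# Count total paths and how many are ONLINE; on a healthy config both match.
total=$(grep -c 'Controller ' luxadm_display.out)
online=$(grep -c 'State *ONLINE' luxadm_display.out)
echo "paths=$total online=$online"
```

On the real output above, both counts would be 8 per LUN; a mismatch points at degraded paths.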
The following FMA faults, related to this issue, can be seen:
root@server01 # fmadm faulty
--------------- ------------------------------------ -------------- ---------
TIME EVENT-ID MSG-ID SEVERITY
--------------- ------------------------------------ -------------- ---------
Mar 18 16:14:24 ae27335c-591b-4d43-b819-bf900d73121d PCIEX-8000-0A Critical
Problem Status : open
Diag Engine : eft / 1.16
System
Manufacturer : Oracle Corporation
Name : SPARC T5-2
Part_Number : 32707504+1+1
Serial_Number : AK00212448
Host_ID : 8656f2b6
----------------------------------------
Suspect 1 of 1 :
Problem class : fault.io.pciex.device-interr
Certainty : 100%
Affects : dev:////pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,3
Status : faulted but still in service
FRU
Status : faulty
Location : "/SYS/PCIE1"
Manufacturer : unknown
Name : unknown
Part_Number : unknown
Revision : unknown
Serial_Number : unknown
Chassis
Manufacturer : Oracle Corporation
Name : SPARC T5-2
Part_Number : 32707504+1+1
Serial_Number : AK00212448
Description : A problem was detected for a PCIEX device.
Response : One or more device instances may be disabled
Impact : Loss of services provided by the device instances associated with
this fault
Action : Use 'fmadm faulty' to provide a more detailed view of this event.
Please refer to the associated reference document at
http://support.oracle.com/msg/PCIEX-8000-0A for the latest
service procedures and policies regarding this diagnosis.
--------------- ------------------------------------ -------------- ---------
TIME EVENT-ID MSG-ID SEVERITY
--------------- ------------------------------------ -------------- ---------
Mar 18 15:58:04 efb91f74-2495-4c12-afaf-e4f6e0426bfe PCIEX-8000-0A Critical
Problem Status : open
Diag Engine : eft / 1.16
System
Manufacturer : Oracle Corporation
Name : SPARC T5-2
Part_Number : 32707504+1+1
Serial_Number : AK00212448
Host_ID : 8656f2b6
----------------------------------------
Suspect 1 of 1 :
Problem class : fault.io.pciex.device-interr
Certainty : 100%
Affects : dev:////pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2
Status : faulted but still in service
FRU
Status : faulty
Location : "/SYS/PCIE8"
Manufacturer : unknown
Name : unknown
Part_Number : unknown
Revision : unknown
Serial_Number : unknown
Chassis
Manufacturer : Oracle Corporation
Name : SPARC T5-2
Part_Number : 32707504+1+1
Serial_Number : AK00212448
Description : A problem was detected for a PCIEX device.
Response : One or more device instances may be disabled
Impact : Loss of services provided by the device instances associated with
this fault
Action : Use 'fmadm faulty' to provide a more detailed view of this event.
Please refer to the associated reference document at
http://support.oracle.com/msg/PCIEX-8000-0A for the latest
service procedures and policies regarding this diagnosis.
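For tracking purposes, the open faults can be enumerated (UUID and message ID) from saved `fmadm faulty` output; a minimal sketch against an abbreviated stand-in (the file name is an assumption):

```shell
#!/bin/sh
# Abbreviated stand-in for saved `fmadm faulty` output: just the fault summary lines.
cat > fmadm.out <<'EOF'
Mar 18 16:14:24 ae27335c-591b-4d43-b819-bf900d73121d PCIEX-8000-0A Critical
Mar 18 15:58:04 efb91f74-2495-4c12-afaf-e4f6e0426bfe PCIEX-8000-0A Critical
EOF

# Print EVENT-ID (field 4) and MSG-ID (field 5) for each open PCIEX fault.
awk '/PCIEX-8000-0A/ {print $4, $5}' fmadm.out
```

Once the FRU has been replaced or re-flashed, the faults would normally be cleared following the PCIEX-8000-0A procedure (e.g. with `fmadm repaired` against the affected FRU); follow the referenced message document rather than this sketch.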
The following errors were seen when the problem occurred:
1) When the Cisco Nexus switch with DID 77 was rebooted,
a link down was logged on emlxs4 and emlxs6 at "Mar 18 15:56:43" (one emlxs instance from each CNA card),
and 51 seconds later, with no other apparent cause, "ERROR: 530: Mailbox timeout. (HEARTBEAT: Nowait.)" was logged on BOTH ports of one CNA HBA (emlxs7 and emlxs6):
Mar 18 15:56:43 server01 emlxs: [ID 349649 kern.info] [ 5.0401]emlxs4: NOTICE: 710: Link down.
Mar 18 15:56:43 server01 mac: [ID 486395 kern.info] NOTICE: oce4 link down
Mar 18 15:56:43 server01 in.mpathd[94]: [ID 215189 daemon.error] The link has gone down on net1191008
Mar 18 15:56:43 server01 in.mpathd[94]: [ID 968981 daemon.error] IP interface failure detected on net1191008 of group ipmp0
Mar 18 15:56:43 server01 emlxs: [ID 349649 kern.info] [ 5.0401]emlxs6: NOTICE: 710: Link down.
Mar 18 15:57:34 server01 emlxs: [ID 349649 kern.info] [14.23A6]emlxs7: ERROR: 530: Mailbox timeout. (HEARTBEAT: Nowait.)
Mar 18 15:57:34 server01 emlxs: [ID 349649 kern.info] [14.23A6]emlxs6: ERROR: 530: Mailbox timeout. (HEARTBEAT: Nowait.)
Mar 18 15:57:34 server01 emlxs: [ID 349649 kern.info] [ 5.0401]emlxs7: NOTICE: 710: Link down.
Mar 18 15:57:40 server01 oce: [ID 881338 kern.warning] WARNING: oce[7]: UE Detected or FW Dump is requested SLIPORT_ERR1 = 9f000013 and SLIPORT_ERR2 = 2001002
Mar 18 15:57:41 server01 oce: [ID 881338 kern.warning] WARNING: oce[6]: UE Detected or FW Dump is requested SLIPORT_ERR1 = 9f000013 and SLIPORT_ERR2 = 2001002
Mar 18 15:57:43 server01 emlxs: [ID 349649 kern.info] [ 6.0994]emlxs6:WARNING: 231: Adapter shutdown. (Reboot required.)
Mar 18 15:57:43 server01 emlxs: [ID 349649 kern.info] [ 6.0994]emlxs7:WARNING: 231: Adapter shutdown. (Reboot required.)
Mar 18 15:57:49 server01 oce: [ID 428118 kern.warning] WARNING: oce[7]: No response from FW for Mailbox command. Mailbox stalled till recovery
Mar 18 15:57:49 server01 oce: [ID 190917 kern.warning] WARNING: oce[7]: Failed to get vport stats:254
Mar 18 15:57:49 server01 last message repeated 5 times
Mar 18 15:57:49 server01 mac: [ID 435574 kern.info] NOTICE: oce7 link up, 10000 Mbps, full duplex
Mar 18 15:57:49 server01 oce: [ID 190917 kern.warning] WARNING: oce[7]: Failed to get vport stats:254
Mar 18 15:57:49 server01 last message repeated 258 times
Mar 18 15:57:49 server01 mac: [ID 435574 kern.info] NOTICE: net1191011 link up, 10000 Mbps, unknown duplex
Mar 18 15:57:49 server01 oce: [ID 190917 kern.warning] WARNING: oce[7]: Failed to get vport stats:254
Mar 18 15:57:49 server01 last message repeated 10 times
2) As a result, the related FMA error was raised against the failed CNA:
Mar 18 15:58:04 server01 genunix: [ID 408114 kern.info] /pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3 (emlxs7) down
Mar 18 15:58:04 server01 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: PCIEX-8000-0A, TYPE: Fault, VER: 1, SEVERITY: Critical
Mar 18 15:58:04 server01 EVENT-TIME: Fri Mar 18 15:58:04 CET 2016
Mar 18 15:58:04 server01 PLATFORM: SPARC T5-2, CSN: AK00212448, HOSTNAME: server01
Mar 18 15:58:04 server01 SOURCE: eft, REV: 1.16
Mar 18 15:58:04 server01 EVENT-ID: 7e9fbbdf-b011-4d68-a40c-8ede2a84595b
Mar 18 15:58:04 server01 DESC: A problem was detected for a PCIEX device.
Mar 18 15:58:04 server01 AUTO-RESPONSE: One or more device instances may be disabled
Mar 18 15:58:04 server01 IMPACT: Loss of services provided by the device instances associated with this fault
Mar 18 15:58:04 server01 REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Please refer to the associated reference document at http://support.oracle.com/msg/PCIEX-8000-0A for the latest service procedures and policies regarding this diagnosis.
Mar 18 15:58:04 server01 genunix: [ID 408114 kern.info] /pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2 (emlxs6) down
Mar 18 15:58:04 server01 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: PCIEX-8000-0A, TYPE: Fault, VER: 1, SEVERITY: Critical
Mar 18 15:58:04 server01 EVENT-TIME: Fri Mar 18 15:58:04 CET 2016
Mar 18 15:58:04 server01 PLATFORM: SPARC T5-2, CSN: AK00212448, HOSTNAME: server01
Mar 18 15:58:04 server01 SOURCE: eft, REV: 1.16
Mar 18 15:58:04 server01 EVENT-ID: efb91f74-2495-4c12-afaf-e4f6e0426bfe
Mar 18 15:58:04 server01 DESC: A problem was detected for a PCIEX device.
Mar 18 15:58:04 server01 AUTO-RESPONSE: One or more device instances may be disabled
Mar 18 15:58:04 server01 IMPACT: Loss of services provided by the device instances associated with this fault
Mar 18 15:58:04 server01 REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Please refer to the associated reference document at http://support.oracle.com/msg/PCIEX-8000-0A for the latest service procedures and policies regarding this diagnosis.
Mar 18 15:58:04 server01 SC Alert: [ID 640425 daemon.alert] Fault | critical: Fault detected at time = Fri Mar 18 15:58:04 2016. The suspect component: /SYS/MB/PCIE8 has fault.io.pciex.device-interr with probability=100. Refer to http://support.oracle.com/msg/PCIEX-8000-0A for details.
Mar 18 15:58:04 server01 last message repeated 1 time
Mar 18 15:58:06 server01 genunix: [ID 846333 kern.warning] WARNING: constraints forbid retire: /pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,3
Mar 18 15:58:06 server01 genunix: [ID 846333 kern.warning] WARNING: constraints forbid retire: /pci@3c0/pci@1/pci@0/pci@7/SUNW,emlxs@0,2
3) 90 seconds later, the fp instances associated with the failed CNA ports go OFFLINE, as expected:
Mar 18 15:58:13 server01 fctl: [ID 517869 kern.warning] WARNING: fp(32)::OFFLINE timeout
Mar 18 15:58:13 server01 fctl: [ID 517869 kern.warning] WARNING: fp(24)::OFFLINE timeout
4) A few minutes later, when the Cisco switch is back up, the link comes up on emlxs4 but not on emlxs6, which had already failed:
Mar 18 16:01:57 server01 mac: [ID 435574 kern.info] NOTICE: oce4 link up, 10000 Mbps, full duplex
Mar 18 16:01:58 server01 mac: [ID 435574 kern.info] NOTICE: net1191008 link up, 10000 Mbps, unknown duplex
Mar 18 16:01:58 server01 in.mpathd[94]: [ID 820239 daemon.error] The link has come up on net1191008
Mar 18 16:01:58 server01 mac: [ID 486395 kern.info] NOTICE: oce4 link down
Mar 18 16:01:58 server01 mac: [ID 486395 kern.info] NOTICE: net1191008 link down
Mar 18 16:01:58 server01 in.mpathd[94]: [ID 215189 daemon.error] The link has gone down on net1191008
Mar 18 16:01:58 server01 in.mpathd[94]: [ID 968981 daemon.error] IP interface failure detected on net1191008 of group ipmp0
Mar 18 16:01:58 server01 mac: [ID 435574 kern.info] NOTICE: oce4 link up, 10000 Mbps, full duplex
Mar 18 16:01:58 server01 mac: [ID 435574 kern.info] NOTICE: net1191008 link up, 10000 Mbps, unknown duplex
Mar 18 16:01:58 server01 in.mpathd[94]: [ID 820239 daemon.error] The link has come up on net1191008
Mar 18 16:02:06 server01 emlxs: [ID 349649 kern.info] [ 5.063F]emlxs4: NOTICE: 720: Link up. (10Gb, fabric, initiator)
Mar 18 16:02:11 server01 emlxs: [ID 349649 kern.info] [ 5.0401]emlxs4: NOTICE: 710: Link down.
Mar 18 16:02:16 server01 emlxs: [ID 349649 kern.info] [ 5.063F]emlxs4: NOTICE: 720: Link up. (10Gb, fabric, initiator)
Mar 18 16:02:16 server01 fp: [ID 517869 kern.warning] WARNING: fp(32): N_x Port with D_ID=770080, PWWN=5006016cxxxxxxxx reappeared in fabric
Mar 18 16:02:16 server01 fp: [ID 517869 kern.warning] WARNING: fp(32): N_x Port with D_ID=770040, PWWN=50060164xxxxxxxx reappeared in fabric
Mar 18 16:02:21 server01 genunix: [ID 530209 kern.info] /scsi_vhci/ssd@g6006016020703900f87bed5ef1c4e511 (ssd12) multipath status: optimal: path 8 fp32/ssd@w50060164xxxxxxxx,1 is online: Load balancing: round-robin
Mar 18 16:02:21 server01 genunix: [ID 530209 kern.info] /scsi_vhci/ssd@g6006016020703900893f4643f1c4e511 (ssd13) multipath status: optimal: path 9 fp32/ssd@w50060164xxxxxxxx,0 is online: Load balancing: round-robin
Mar 18 16:02:21 server01 genunix: [ID 530209 kern.info] /scsi_vhci/ssd@g6006016020703900f87bed5ef1c4e511 (ssd12) multipath status: optimal: path 6 fp32/ssd@w5006016cxxxxxxxx,1 is online: Load balancing: round-robin
Mar 18 16:02:21 server01 genunix: [ID 530209 kern.info] /scsi_vhci/ssd@g6006016020703900893f4643f1c4e511 (ssd13) multipath status: optimal: path 7 fp32/ssd@w5006016cxxxxxxxx,0 is online: Load balancing: round-robin
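The 51-second gap between the first link down (15:56:43) and the mailbox timeout (15:57:34) can be confirmed directly from the syslog timestamps; a minimal sketch using the two timestamps above:

```shell
#!/bin/sh
# Timestamps taken from the two syslog messages above.
t1="15:56:43"   # emlxs4: NOTICE: 710: Link down.
t2="15:57:34"   # emlxs7: ERROR: 530: Mailbox timeout. (HEARTBEAT: Nowait.)

# Convert HH:MM:SS to seconds since midnight, then subtract.
secs() { echo "$1" | awk -F: '{print $1*3600 + $2*60 + $3}'; }
echo "gap=$(( $(secs "$t2") - $(secs "$t1") ))s"
```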
Cause
A new bug was opened to address this issue:
Bug 23068055 - FCoE link down - ERROR: 530: Mailbox timeout. (HEARTBEAT: Nowait.) on both ports
An old firmware version was found on the CNA cards:
bash-4.1$ more fcinfo.out
HBA Port WWN: 100000xxxxxxx74d
Port Mode: Initiator
Port ID: b700c0
OS Device Name: /dev/cfg/c17
Manufacturer: Emulex
Model: 7101684
Firmware Version: 7101684 1.1.43.8 <<<----- OLD!!!
FCode/BIOS Version: Boot:1.1.43.8 Fcode:4.03a1
Serial Number: 4925382+14170000K9
Driver Name: emlxs
Driver Version: 3.0.05.0 (2015.09.16.13.00)
Type: N-port
State: online
Supported Speeds: 10Gb
Current Speed: 10Gb
Node WWN: 200000xxxxxxx74d
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
NPIV Not Supported
HBA Port WWN: 100000xxxxxxx51b
Port Mode: Initiator
Port ID: b70080
OS Device Name: /dev/cfg/c12
Manufacturer: Emulex
Model: 7101684
Firmware Version: 7101684 1.1.43.8
FCode/BIOS Version: Boot:1.1.43.8 Fcode:4.03a1
Serial Number: 4925382+14170000I7
Driver Name: emlxs
Driver Version: 3.0.05.0 (2015.09.16.13.00)
Type: N-port
State: online
Supported Speeds: 10Gb
Current Speed: 10Gb
Node WWN: 200000xxxxxxx51b
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
NPIV Not Supported
HBA Port WWN: 100000xxxxxxx74c
Port Mode: Initiator
Port ID: 770140
OS Device Name: /dev/cfg/c16
Manufacturer: Emulex
Model: 7101684
Firmware Version: 7101684 1.1.43.8
FCode/BIOS Version: Boot:1.1.43.8 Fcode:4.03a1
Serial Number: 4925382+14170000K9
Driver Name: emlxs
Driver Version: 3.0.05.0 (2015.09.16.13.00)
Type: N-port
State: online
Supported Speeds: 10Gb
Current Speed: 10Gb
Node WWN: 200000xxxxxxx74c
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
NPIV Not Supported
HBA Port WWN: 100000xxxxxxx51a
Port Mode: Initiator
Port ID: 7700c0
OS Device Name: /dev/cfg/c11
Manufacturer: Emulex
Model: 7101684
Firmware Version: 7101684 1.1.43.8
FCode/BIOS Version: Boot:1.1.43.8 Fcode:4.03a1
Serial Number: 4925382+14170000I7
Driver Name: emlxs
Driver Version: 3.0.05.0 (2015.09.16.13.00)
Type: N-port
State: online
Supported Speeds: 10Gb
Current Speed: 10Gb
Node WWN: 200000xxxxxxx51a
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
NPIV Not Supported
bash-4.1$
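A quick way to spot the outdated level across all ports is to pair each port WWN with its firmware version in the saved `fcinfo` output; a minimal sketch against an abbreviated stand-in for fcinfo.out:

```shell
#!/bin/sh
# Abbreviated stand-in for saved `fcinfo hba-port` output (fcinfo.out).
cat > fcinfo.out <<'EOF'
HBA Port WWN: 100000xxxxxxx74d
  Model: 7101684
  Firmware Version: 7101684 1.1.43.8
HBA Port WWN: 100000xxxxxxx51b
  Model: 7101684
  Firmware Version: 7101684 1.1.43.8
EOF

# Remember the current port WWN, then print it next to its firmware level.
awk '/HBA Port WWN:/  {wwn=$NF}
     /Firmware Version:/ {print wwn, $NF}' fcinfo.out
```

Any port reporting 1.1.43.8 here is running the affected (old) firmware.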
Solution
Install Oracle 7101684 7023036 Emulex LPe16002-M6-O Firmware Version 11.1.218.17. After that, the problem can no longer be reproduced.
Instructions to download the firmware and install it on the CNA card can be found in the Solution section of this related document:
emlxs WARNING: 424: Adapter warning. (SLI Port Async Event: Physical media not detected) (Doc ID 2119504.1)
NOTE: A full power-cycle of the server is required for the flashed firmware version to become the operational firmware version.
On some servers it may be possible to power the HBA off and on with the hotplug command, instead of power cycling the whole server, as explained in this other document:
SFP Not Working In 16Gb FC HBA Emulex Card - WARNING: 424: Adapter warning. (SLI Port Async Event: Unsupported physical media detected) (Doc ID 1670748.1)
Bug 23068055 points to firmware version 10.6.230.0, but firmware version 11.1.218.17 is the release in which the issue is expected to be definitively fixed.
References
<NOTE:1955822.1> - Solaris 11.2 (and later) FC HBA - Update Firmware, FCode/BIOS (ie. Boot Code)
<NOTE:1389639.1> - FAQ Oracle FC HBA: FCode/BIOS(ie. Boot Code), Firmware, and Drivers
<NOTE:2119504.1> - emlxs WARNING: 424: Adapter warning. (SLI Port Async Event: Physical media not detected) - FC HBA Firmware Upgrade Example
<BUG:23068055> - FCOE LINK DOWN - ERROR: 530: MAILBOX TIMEOUT. ON BOTH PORTS
Attachments
This solution has no attachment