
Asset ID: 1-72-2090712.1
Update Date: 2017-02-06

Solution Type: Problem Resolution (Sure Solution)

Solution 2090712.1: Oracle ZFS Storage Appliance: How to Recover a LUN with an Unavailable GUID after a Reboot or Upgrade


Related Items
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage ZS5-4
  • Oracle ZFS Storage ZS3-BA
  • Sun Storage 7210 Unified Storage System
  • Oracle ZFS Storage Appliance Racked System ZS4-4
  • Oracle ZFS Storage ZS3-2
  • Oracle ZFS Storage ZS3-4
  • Sun Storage 7410 Unified Storage System
  • Oracle ZFS Storage ZS5-2
  • Sun ZFS Storage 7420
  • Sun Storage 7310 Unified Storage System
  • Oracle ZFS Storage ZS4-4
  • Sun ZFS Storage 7120
  • Sun Storage 7110 Unified Storage System
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS




In this Document
Symptoms
Cause
Solution
References


Created from <SR 3-11795544413>

Applies to:

Oracle ZFS Storage ZS3-2 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-4 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-BA - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7320 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7120 - Version All Versions to All Versions [Release All Releases]
7000 Appliance OS (Fishworks)

Symptoms

One LUN is missing after the ZFS Storage Appliance was upgraded.

Cause

Two LUNs had ended up with the same LUN number while being configured for the same iSCSI target.

A support bundle collected prior to the upgrade showed that both LUNs were visible with the same LUN number but in different target groups according to the stmf framework, whereas the AKD configuration showed both LUNs in the same target group.

How and when this occurred is a matter of speculation, as no trace of the change could be found.

It is this configuration anomaly that caused the issue after the reboot; that it happened to be an upgrade reboot is circumstantial.

 

The issue was confirmed to be caused by the mismatched views held by stmf and the appliance software before the upgrade/reboot.

The only reason it worked before was that stmf had exported one of the LUNs to a different target group.

This difference was not reflected in the appliance configuration. The upgrade reboot rebuilt the stmf configuration from the appliance configuration, exposing the conflict, and the last LUN to be mounted (as LUN 9, in this case) failed.
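
For reference, the two views can be compared from the appliance's underlying OS with the stmfadm command. This is a sketch only: shell access on a ZFS Storage Appliance is restricted and should only be used under Oracle Support direction, and the GUID shown is illustrative.

stmfadm list-lu -v                 # logical units with their GUIDs
stmfadm list-view -l 600144F0...   # view entries for one LU: host group, target group, LUN number
stmfadm list-tg -v                 # target groups and their member targets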

 

Solution

In the CLI (there is no known way to do this in the BUI), do the following:


Go to the LUN with issues:

shares select myProject select troubled-Lun
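
The same navigation with the CLI prompts shown (the hostname and the project/LUN names are placeholders from this example):

hostname:> shares select myProject
hostname:shares myProject> select troubled-Lun
hostname:shares myProject/troubled-Lun>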

 

Verify that the LUN GUID is unavailable (empty):

get lunguid
  lunguid =

 

Check the assigned LUN number:

get assignednumber
  assignednumber = 9


As LUN number 9 is already occupied by the LUN "not-troubled-lun", the troubled LUN must be assigned a different number.

The target group in question currently has 10 occupied LUN numbers, so the suggestion is to set the assigned number to 11, the first free number in the range (a script to list the numbers in use follows the command below):

set assignednumber=11
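
Before settling on a number, the assigned numbers already in use can be listed with a short CLI script. This is a sketch only: the project name is taken from this example, and because LUN numbers are allocated per target group, every project that exports LUNs to the same target group should be checked.

script
    run('cd /');
    run('shares select myProject');
    var entries = list();                 // all shares in the project
    for (var i = 0; i < entries.length; i++) {
        run('select ' + entries[i]);
        try {
            // Only LUNs have an assignednumber; filesystems raise an error here
            printf('%s: assignednumber = %s\n', entries[i], get('assignednumber'));
        } catch (err) {
            // Not a LUN - skip it
        }
        run('cd ..');
    }
.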


But to be able to export the LUN with this LUN number, we first need to stop it from being exported:

set exported=false
commit


Now re-export it:

set exported=true
commit


And now the LUN should have a GUID and be visible from the client side.
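
To confirm this from the appliance side, re-read the GUID; it should now be populated (the value below is illustrative):

get lunguid
  lunguid = 600144F0...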

 

Please note that client-side procedures may be needed to rescan the LUNs. Also, if the LUN number rather than the GUID is used to identify the drive, some configuration changes may be needed on the client for the LUN to appear "in the correct place".
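
As an example, on a Linux initiator using open-iscsi, the rescan could look like this (a sketch; other operating systems have their own procedures):

iscsiadm -m session --rescan    # rescan all active iSCSI sessions for LUN changes
lsblk                           # confirm the block device is visible again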

 

References

<NOTE:1947661.1> - Oracle ZFS Storage Appliance: NDMP - Unknown GUID during LUN Restore

Attachments
This solution has no attachment