Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-72-1522624.1
Update Date: 2017-01-05
Keywords:

Solution Type: Problem Resolution (Sure)

Solution 1522624.1: Sun StorageTek[TM] 5000 Series NAS Arrays: Following a server failure, mirrored volume is in "Ready" and "Breaking" state


Related Items
  • Sun Storage 5220 NAS Appliance
  • Sun Storage 5210 NAS Appliance
  • Sun Storage 5310 NAS Appliance
  • Sun Storage 5320 NAS Appliance

Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: SE5xxx NAS


Following a server failure, one of the mirrored volumes is in "Ready" and "Breaking" state

In this Document
Symptoms
Cause
Solution
References


Created from <SR 3-6407211941>

Applies to:

Sun Storage 5320 NAS Appliance - Version All Versions and later
Sun Storage 5310 NAS Appliance - Version All Versions and later
Sun Storage 5220 NAS Appliance - Version All Versions and later
Sun Storage 5210 NAS Appliance - Version All Versions and later
Information in this document applies to any platform.

Symptoms

Following a server failure (power outage), one of the mirrored volumes is left in the "Ready" and "Breaking" state. The Web Admin system log shows entries such as:

[INFO] 11/01/12 11:27:03 smbd: TCP service listening on 445
[INFO] 11/01/12 11:27:03 smbd: NetBIOS service listening on 139
[INFO] 11/01/12 11:27:03 next = 486398710, write = 486398709, head = 485672103, sync = 485672103
[WARN] 11/01/12 11:27:03 nmir: Mirror vol /server24 is cracked
[INFO] 11/01/12 11:27:00 nmir: /optspool: mirror link to CCS is restored

  The "cracked" warning identifies the problem mirror: /server24.
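If the Web Admin log is exported to a plain-text file, a quick scan for these warnings lists every affected mirror at once. Below is a minimal sketch in Python; the log file name is an assumption, and only the message format is taken from the excerpt above.

#!/usr/bin/env python3
# Minimal sketch: scan an exported Web Admin log for "cracked" mirror warnings.
# The default file name below is hypothetical; the message format matches the excerpt above.
import re
import sys

# Matches e.g. "11/01/12 11:27:03 nmir: Mirror vol /server24 is cracked"
CRACKED = re.compile(r"(\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) nmir: Mirror vol (\S+) is cracked")

def find_cracked_mirrors(lines):
    """Return (timestamp, volume) pairs for every 'cracked' mirror warning."""
    return [(m.group(1), m.group(2)) for m in (CRACKED.search(line) for line in lines) if m]

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "nas_syslog.txt"   # hypothetical export file
    with open(path) as f:
        for ts, vol in find_cracked_mirrors(f):
            print(f"{ts}  mirror {vol} reported cracked")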


/server24 is in different states on the two nodes:

  Mirror status on the mirror (target) NAS:   server24  ITC  CCS (local)  Breaking

  Mirror status on the primary (source) NAS:  server24  ITC (local)  CCS   Ready

Cause

After the power outage, 'server24' was left in this state while all other mirrors resynchronized successfully.



Solution

1st solution offered -> restart the mirror
-------------------------------------------
A. A broken mirror always resynchronizes from the beginning; any existing mirrored data will be overwritten.

B. To restart a mirror that has broken and cannot be resynchronized, perform the following steps:

     1. Telnet to the target head.
     2. Select option D. Disks and Volumes.
     3. Select the letter showing the replicated volumes.
     4. Delete the volumes of type NBD; do not remove any of type SFS2.
     5. Perform the same action on all of the volumes to be resynchronized.
     6. Telnet to the source head.
     7. At the telnet command line, enter:  xj clean <volname>
     8. Run the xj clean command on all of the volumes to be resynchronized
         (a scripted sketch of this telnet pass follows this procedure).
     9. Restart the replications one at a time.
    10. If the above steps do not work, attach the diagnostic email to the SR
         (there may be a network or hardware issue).

     => did not fix the issue
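For reference, if the xj clean pass in steps 6-8 has to be repeated for many volumes, the telnet portion can be scripted. The following is a minimal sketch in Python, not a supported procedure: the head address, admin password, and login prompt are assumptions, and only the xj clean <volname> command itself comes from the steps above (telnetlib ships with the Python standard library up to Python 3.12).

#!/usr/bin/env python3
# Minimal sketch: run "xj clean <volname>" for several volumes over telnet (steps 6-8).
# Host, password and prompt strings are assumptions; adjust them for the real source head.
import telnetlib

SOURCE_HEAD = "192.168.222.11"     # assumed address of the source (primary) NAS head
ADMIN_PASSWORD = b"changeme"       # assumption: replace with the real admin password
VOLUMES = ["/server24"]            # volumes whose mirrors are to be resynchronized

def xj_clean(host, password, volumes):
    """Log in to the source head and issue 'xj clean' for each volume."""
    with telnetlib.Telnet(host, 23, timeout=30) as tn:
        tn.read_until(b"password:", timeout=10)          # assumed login prompt
        tn.write(password + b"\n")
        for vol in volumes:
            tn.write(f"xj clean {vol}\n".encode())       # command from step 7 above
        tn.write(b"exit\n")
        print(tn.read_all().decode(errors="replace"))    # dump session output for review

if __name__ == "__main__":
    xj_clean(SOURCE_HEAD, ADMIN_PASSWORD, VOLUMES)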


2nd solution offered -> delete the volume
------------------------------------------
    1. Select 1. Edit.
    2. Scroll down with the arrow keys to the volume to be deleted.
    3. When the volume is selected, choose 8. Delete at the bottom of the screen.

    NOTE: The volume could not be deleted and recreated because compliance was activated on the volumes.

     => did not fix the issue


3rd solution offered -> reboot the node
----------------------------------------
Reboot the node (CCS) that showed 'server24' in the Breaking state.

     => issue resolved



Checked for currency - 04-AUG-2015

Checked for currency - 05-JAN-2017

References

<NOTE:1005474.1> - Sun StorageTek[TM] 5000 Series NAS: How to Collect data for troubleshooting

Attachments
This solution has no attachment