Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-72-1951800.1
Update Date:2018-01-05
Keywords:

Solution Type: Problem Resolution Sure Solution

Solution  1951800.1 :   Oracle ZFS Storage Appliance: System stuck at "Joining Cluster" after creating a shadow migration  


Related Items
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage ZS3-BA
  • Oracle ZFS Storage Appliance Racked System ZS4-4
  • Oracle ZFS Storage ZS3-2
  • Oracle ZFS Storage ZS3-4
  • Sun Storage 7410 Unified Storage System
  • Sun ZFS Storage 7420
  • Sun Storage 7310 Unified Storage System
  • Oracle ZFS Storage ZS4-4
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS




In this Document
Symptoms
Changes
Cause
Solution


Created from <SR 3-9800813601>

Applies to:

Sun ZFS Storage 7320 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7420 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-2 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-4 - Version All Versions to All Versions [Release All Releases]
Sun Storage 7310 Unified Storage System - Version All Versions to All Versions [Release All Releases]
7000 Appliance OS (Fishworks)

Symptoms

The system is stuck at "Joining Cluster ..." when only one of the nodes is booted.

    Sun ZFS Storage 7320 Configuration
    Copyright (c) 2008, 2014, Oracle and/or its affiliates. All rights reserved.

    NET-0 <=>  NET-1 <X>  NET-2 <X>  NET-3 <X>


    This appliance is part of an appliance cluster.
    Please wait while cluster synchronization takes place.

    Joining cluster ...


    ESC-3: Halt   ESC-4: Reboot   ESC-5: Info

    For help, see http://www.oracle.com/goto/zfs7320/

 

Changes

A share with shadow migration from a local share was recently created.

 

Cause

The NFS option was used instead of the Local option when setting up shadow migration from a local share.
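For context, the ZFS shadow property accepts both a remote (nfs://) and a local (file://) source form, per the Solaris ZFS shadow migration documentation. A minimal sketch of the distinction (the file:// path here is hypothetical; only the nfs:// value appears in this SR):

```shell
# Two source forms the ZFS 'shadow' property understands:
#   nfs://host/path  - pull data from a remote NFS server
#   file:///path     - pull data from a path local to this system
# Pointing nfs:// at a share on the same appliance (as happened here)
# makes the share import depend on the appliance's own NFS service
# at boot, which can hang the cluster join.
remote_src="nfs://10.0.0.1/export/test01"   # value found in this case
local_src="file:///export/test01"           # hypothetical local form
case "$local_src" in
    file://*) echo "local source: safe for a same-appliance migration" ;;
    nfs://*)  echo "remote source: host must not be this appliance" ;;
esac
```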

 

Solution

Please contact Oracle Support to help resolve the issue.

 

Boot the system with milestone none.
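The milestone can be requested with the standard Solaris boot arguments; the exact console method depends on platform (a sketch, not an appliance-specific procedure):

```shell
# SPARC (OBP prompt):      ok boot -m milestone=none
# x86 (GRUB kernel line):  append  -m milestone=none
#
# At milestone "none" SMF starts no services, so the appliance
# software does not attempt the share imports that are hanging.
```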

Check the rm.ak log; logging stops after the pool imports, with the pool02 shadow migration import as the last entry.

Wed Dec 3 01:30:48 2014: import of ak:/replication/pool02 succeeded in 0.034s
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool01 succeeded in 0.029s
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool00 succeeded in 0.068s
Wed Dec 3 01:30:48 2014: import of ak:/shadow/pool02 succeeded in 0.025s


There are no log entries beyond the last shadow migration import.

Boot to milestone all, as we will need to check the pools for shadow migration.

bash-4.1# svcadm milestone all

 

Check for any shares that contain shadow migration.

bash-4.1# cd /tmp
bash-4.1# zfs get all > zfs-get_all.out &

bash-4.1# grep shadow zfs-get_all.out | egrep -v none | more
pool02/local/stage/test    shadow    nfs://10.0.0.1/export/test01    -

Verify that the shadow migration source, nfs://10.0.0.1/export/test01, is in fact a local share in pool02.
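One way to approach this check is to split the shadow URI into its host and path, then compare the host against the appliance's own interface addresses (a hedged sketch using plain shell parameter expansion; the URI is the one found above):

```shell
# Split the shadow source URI so the host can be compared against
# this appliance's own addresses (e.g. in the ifconfig -a output).
shadow_uri="nfs://10.0.0.1/export/test01"
host="${shadow_uri#nfs://}"; host="${host%%/*}"
path="/${shadow_uri#nfs://*/}"
echo "host=${host} path=${path}"
# If "host" is one of this appliance's interfaces, the migration
# source is local and should not have used the NFS protocol.
```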

 

To remove the shadow migration manually, clear the shadow property:

zfs set shadow=none pool02/local/stage/test

After the shadow migration is removed, reboot the system.
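Before rebooting, it may be worth confirming the property was cleared (a sketch using the share name found in the grep output above):

```shell
# Read back only the shadow property value for the affected share.
zfs get -H -o value shadow pool02/local/stage/test
# The value should now read "none".
```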

 


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.