Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Solution Type: Problem Resolution Sure Solution

1951800.1 : Oracle ZFS Storage Appliance: System stuck at "Joining Cluster" after creating a shadow migration
Created from <SR 3-9800813601>

Applies to:
Sun ZFS Storage 7320 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7420 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-2 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-4 - Version All Versions to All Versions [Release All Releases]
Sun Storage 7310 Unified Storage System - Version All Versions to All Versions [Release All Releases]
7000 Appliance OS (Fishworks)

Symptoms
The system is stuck at "Joining Cluster..." when only one of the nodes is booted.

Sun ZFS Storage 7320 Configuration
Copyright (c) 2008, 2014, Oracle and/or its affiliates. All rights reserved.

NET-0 <=>  NET-1 <X>  NET-2 <X>  NET-3 <X>

This appliance is part of an appliance cluster. Please wait while cluster
synchronization takes place.

Joining cluster ...

ESC-3: Halt  ESC-4: Reboot  ESC-5: Info
For help, see http://www.oracle.com/goto/zfs7320/
Changes
A share with shadow migration was recently created from a local share.
Cause
The NFS option was used instead of Local when setting up the shadow migration of a local share.
Solution
Please contact Oracle Support for help resolving this issue.
Boot the system with milestone none.

Check the rm.ak log; startup stops after the pool imports and the pool02 shadow migration import:

Wed Dec 3 01:30:48 2014: import of ak:/replication/pool02 succeeded in 0.034s
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool01 succeeded in 0.029s
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool00 succeeded in 0.068s
Wed Dec 3 01:30:48 2014: import of ak:/shadow/pool02 succeeded in 0.025s
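The log-reading step above can be sketched as follows. This is a simulated check against a sample file: the log lines are taken from this article, but the sample path /tmp/rm.ak.sample is only for illustration; on the appliance, inspect the actual rm.ak log.

```shell
# Build a sample rm.ak-style log (lines from this article) and print the last
# import recorded, i.e. where startup stalled.
cat > /tmp/rm.ak.sample <<'EOF'
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool02 succeeded in 0.034s
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool01 succeeded in 0.029s
Wed Dec 3 01:30:48 2014: import of ak:/replication/pool00 succeeded in 0.068s
Wed Dec 3 01:30:48 2014: import of ak:/shadow/pool02 succeeded in 0.025s
EOF
grep 'import of' /tmp/rm.ak.sample | tail -1
```

The last line printed is the ak:/shadow/pool02 import, which points at the shadow migration as the step after which the boot hangs.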
Boot to milestone all, as we will need to check the pools for shadow migration:

bash-4.1# svcadm milestone all
Check for any shares that contain shadow migration:

bash-4.1# cd /tmp
bash-4.1# grep shadow zfs-get_all.out | egrep -v none | more

Verify that the shadow migration source nfs://10.0.0.1/export/test01 is in fact local in pool02.
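The grep step can be reproduced against synthetic data. The dataset name and NFS source below mirror the article; the column layout is an assumption based on typical `zfs get` output, and /tmp/zfs-get_all.out here is a fabricated sample, not the appliance's real capture.

```shell
# Sample zfs-get_all.out: one share with a shadow source, two without.
cat > /tmp/zfs-get_all.out <<'EOF'
pool01/local/share01     shadow  none                          default
pool02/local/stage/test  shadow  nfs://10.0.0.1/export/test01  local
pool02/local/share02     shadow  none                          default
EOF
# Keep only shadow properties that are actually set (value != none).
grep shadow /tmp/zfs-get_all.out | egrep -v none
```

Only the pool02/local/stage/test line survives the filter, identifying the share whose shadow source must be checked.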
To remove the shadow migration manually:

zfs set shadow=none pool02/local/stage/test
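Before rebooting, it is worth confirming the property is actually cleared. A minimal sketch, assuming you feed it the output of `zfs get -H -o value shadow <dataset>` on the appliance; check_shadow_cleared is a hypothetical helper, not an appliance command.

```shell
# Hypothetical helper: report whether the shadow property value passed in
# indicates the migration has been removed.
check_shadow_cleared() {
  if [ "$1" = "none" ]; then
    echo "shadow cleared"
  else
    echo "shadow still set: $1"
  fi
}

check_shadow_cleared "none"
check_shadow_cleared "nfs://10.0.0.1/export/test01"
```

The first call reports the migration as cleared; the second flags a share that still carries the misconfigured NFS source.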
After the shadow migration is removed, reboot the system.
Attachments
This solution has no attachment.