Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Solution Type: Technical Instruction

Sure Solution 1611395.1: How to Recover a Solaris zpool After a Disaster Recovery of All Corresponding Volumes on a Sun Storage 2500 or 6000 Array
In this Document
  Applies to
  Goal
  Solution
Applies to:

Sun Storage 6180 Array - Version All Versions and later
Sun Storage 6780 Array - Version All Versions and later
Sun Storage Flexline 380 Array - Version Not Applicable to Not Applicable [Release N/A]
Sun Storage 2540 Array - Version Not Applicable and later
Sun Storage 6580 Array - Version All Versions and later
Information in this document applies to any platform.

Goal

In order to triage, diagnose, and repair a catastrophic failure on a Sun Storage 2500 or 6000 array, Oracle Support may reset the array to factory defaults. Once repairs are complete, the data is recovered by one of several disaster recovery techniques. This document provides the steps to continue the recovery of any zpools and ZFS filesystems associated with the recovered array.

Solution

In the following example, five 100 GB volumes are created on a Sun Storage 2540 array and mapped to a Solaris server.

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs_01
Volume: zfs_01
  WWN: 60:0A:0B:80:00:2F:BC:5D:00:00:1A:F5:52:C1:BD:2A
  Size: 100.000 GB
  State: Mapped
  Status: Online

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs_02
Volume: zfs_02
  WWN: 60:0A:0B:80:00:2F:BC:67:00:00:1B:0D:52:C1:31:F3
  Size: 100.000 GB
  State: Mapped
  Status: Online

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs_03
Volume: zfs_03
  WWN: 60:0A:0B:80:00:2F:BC:5D:00:00:1A:F2:52:C1:BC:98
  Size: 100.000 GB
  State: Mapped
  Status: Online

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs_04
Volume: zfs_04
  WWN: 60:0A:0B:80:00:2F:BC:67:00:00:1B:0C:52:C1:31:7D
  Size: 100.000 GB
  State: Mapped
  Status: Online

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs_05
Volume: zfs_05
  WWN: 60:0A:0B:80:00:2F:BC:5D:00:00:1A:F4:52:C1:BD:14
  Size: 100.000 GB
  State: Mapped
  Status: Online
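Note that each Solaris device name embeds the volume's WWN with the colons removed, which makes it straightforward to tie an array volume to its host device. As a minimal sketch (it assumes the sscs output format shown above; the controller number c7 is specific to this host):

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs_01 | awk '/WWN/{print $2}' | tr -d :
600A0B80002FBC5D00001AF552C1BD2A

Prefixing c7t and appending d0 yields c7t600A0B80002FBC5D00001AF552C1BD2Ad0, the device name that appears in the format output below.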
The volumes are discovered with the format utility and striped together into a non-redundant (RAID-0) zpool, and a database is loaded onto the pool.

# echo | format
 97. c7t600A0B80002FBC5D00001AF252C1BC98d0 <SUN-LCSM100_F-0670-100.00GB> zfs_03
     /scsi_vhci/ssd@g600a0b80002fbc5d00001af252c1bc98
 98. c7t600A0B80002FBC5D00001AF452C1BD14d0 <SUN-LCSM100_F-0670-100.00GB> zfs_05
     /scsi_vhci/ssd@g600a0b80002fbc5d00001af452c1bd14
 99. c7t600A0B80002FBC5D00001AF552C1BD2Ad0 <SUN-LCSM100_F-0670-100.00GB> zfs_01
     /scsi_vhci/ssd@g600a0b80002fbc5d00001af552c1bd2a
100. c7t600A0B80002FBC6700001B0C52C1317Dd0 <SUN-LCSM100_F-0670-100.00GB> zfs_04
     /scsi_vhci/ssd@g600a0b80002fbc6700001b0c52c1317d
101. c7t600A0B80002FBC6700001B0D52C131F3d0 <SUN-LCSM100_F-0670-100.00GB> zfs_02
     /scsi_vhci/ssd@g600a0b80002fbc6700001b0d52c131f3

# zpool create datavol c7t600A0B80002FBC5D00001AF252C1BC98d0 \
    c7t600A0B80002FBC5D00001AF452C1BD14d0 \
    c7t600A0B80002FBC5D00001AF552C1BD2Ad0 \
    c7t600A0B80002FBC6700001B0C52C1317Dd0 \
    c7t600A0B80002FBC6700001B0D52C131F3d0

# zpool list     (upon completion of the data load)
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
datavol   498G   121G   377G    24%  ONLINE  -

# zpool status
  pool: datavol
 state: ONLINE
config:

        NAME                                     STATE     READ WRITE CKSUM
        datavol                                  ONLINE       0     0     0
          c7t600A0B80002FBC5D00001AF252C1BC98d0  ONLINE       0     0     0
          c7t600A0B80002FBC5D00001AF452C1BD14d0  ONLINE       0     0     0
          c7t600A0B80002FBC5D00001AF552C1BD2Ad0  ONLINE       0     0     0
          c7t600A0B80002FBC6700001B0C52C1317Dd0  ONLINE       0     0     0
          c7t600A0B80002FBC6700001B0D52C131F3d0  ONLINE       0     0     0

# df -k /datavol
Filesystem            kbytes       used      avail  capacity  Mounted on
datavol            513515520  127034239  386480781       25%  /datavol

# ls -l /datavol
total 277599805
-rw-r--r--   1 root     root     130056978432 Dec 31 13:19 database
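Because the pool is a plain stripe, the array failure below takes the entire pool offline. Purely for contrast, and not part of this procedure, a single-parity raidz vdev built from the same five volumes would tolerate the loss of any one device:

# zpool create datavol raidz c7t600A0B80002FBC5D00001AF252C1BC98d0 \
    c7t600A0B80002FBC5D00001AF452C1BD14d0 \
    c7t600A0B80002FBC5D00001AF552C1BD2Ad0 \
    c7t600A0B80002FBC6700001B0C52C1317Dd0 \
    c7t600A0B80002FBC6700001B0D52C131F3d0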
In the event of a catastrophic failure, all I/O stops and the zpool faults. Because the stripe has no redundancy, ZFS reports "insufficient replicas" and the entire pool becomes UNAVAIL.

# zpool status
  pool: datavol
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
config:

        NAME                                     STATE     READ WRITE CKSUM
        datavol                                  UNAVAIL      0     0     0  insufficient replicas
          c7t600A0B80002FBC5D00001AF252C1BC98d0  UNAVAIL      0     0     0  experienced I/O failures
          c7t600A0B80002FBC5D00001AF452C1BD14d0  UNAVAIL      0     0     0  experienced I/O failures
          c7t600A0B80002FBC5D00001AF552C1BD2Ad0  UNAVAIL      0     0     0  cannot open
          c7t600A0B80002FBC6700001B0C52C1317Dd0  UNAVAIL      0     0     0  cannot open
          c7t600A0B80002FBC6700001B0D52C131F3d0  UNAVAIL      0     0     0  cannot open
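The same failures are also recorded by the Solaris Fault Management Architecture. As a supplementary check, not part of the original procedure, the outstanding faults can be reviewed with:

# fmadm faulty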
Repair procedures include a reset of the array to factory defaults. Upon completion of the repairs, the Oracle Support engineer will use one of several techniques to restore the metadata. Typically, the recovery technique used for the volumes results in a new World Wide Name (WWN) for each volume. Since the WWN is used to build the device name, the device names change in Solaris.

Oracle Internal Only: There are techniques to restore the previous WWNs. These are not covered in this document.
# echo | format
 97. c7t600A0B80002FBC5D00001AF852C30FCAd0 <SUN-LCSM100_F-0670-100.00GB> zfs_03
     /scsi_vhci/ssd@g600a0b80002fbc5d00001af852c30fca
 98. c7t600A0B80002FBC5D00001AFA52C3101Ed0 <SUN-LCSM100_F-0670-100.00GB> zfs_05
     /scsi_vhci/ssd@g600a0b80002fbc5d00001afa52c3101e
 99. c7t600A0B80002FBC5D00001AFB52C310AAd0 <SUN-LCSM100_F-0670-100.00GB> zfs_01
     /scsi_vhci/ssd@g600a0b80002fbc5d00001afb52c310aa
100. c7t600A0B80002FBC6700001B1052C284A9d0 <SUN-LCSM100_F-0670-100.00GB> zfs_04
     /scsi_vhci/ssd@g600a0b80002fbc6700001b1052c284a9
101. c7t600A0B80002FBC6700001B1152C28591d0 <SUN-LCSM100_F-0670-100.00GB> zfs_02
     /scsi_vhci/ssd@g600a0b80002fbc6700001b1152c28591
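Before importing, it can be worth confirming that a recovered device still carries the pool's on-disk ZFS label. This is an optional check, not part of the original procedure; it assumes whole-disk vdevs, where ZFS places its label on slice 0 of the EFI-labeled device. Look for name: 'datavol' and the pool GUID in the output.

# zdb -l /dev/dsk/c7t600A0B80002FBC5D00001AF852C30FCAd0s0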
Although the device names have changed, ZFS identifies its vdevs by the labels written on the devices themselves, not by device path. The pool is therefore recovered simply by exporting and re-importing it.

# zpool export datavol
# zpool import datavol
# zpool status datavol
  pool: datavol
 state: ONLINE
 scrub: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        datavol                                  ONLINE
          c7t600A0B80002FBC5D00001AF852C30FCAd0  ONLINE
          c7t600A0B80002FBC5D00001AFA52C3101Ed0  ONLINE
          c7t600A0B80002FBC5D00001AFB52C310AAd0  ONLINE
          c7t600A0B80002FBC6700001B1052C284A9d0  ONLINE
          c7t600A0B80002FBC6700001B1152C28591d0  ONLINE

# df -k /datavol
Filesystem            kbytes       used      avail  capacity  Mounted on
datavol            513515520  127034239  386480777       25%  /datavol

# ls -l /datavol
total 254068435
-rw-r--r--   1 root     root     130056978432 Dec 31 13:19 database
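Optionally, a scrub can be run after the recovery to verify that every allocated block on the restored volumes still passes its checksum. This verification step is an addition here, not part of the original procedure.

# zpool scrub datavol
# zpool status datavol     (the scrub: line reports progress and any errors found)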
If the faulted pool cannot be exported cleanly, an alternative is to remove the ZFS cache file and reboot. The host then comes up with no pools configured, and zpool import lists the pool so that it can be imported explicitly.

# rm /etc/zfs/zpool.cache
# reboot

# zpool import
  pool: datavol
    id: 16114106878260192746
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        datavol                                  ONLINE
          c7t600A0B80002FBC5D00001AF852C30FCAd0  ONLINE
          c7t600A0B80002FBC5D00001AFA52C3101Ed0  ONLINE
          c7t600A0B80002FBC5D00001AFB52C310AAd0  ONLINE
          c7t600A0B80002FBC6700001B1052C284A9d0  ONLINE
          c7t600A0B80002FBC6700001B1152C28591d0  ONLINE

# zpool import datavol
# zpool status datavol
  pool: datavol
 state: ONLINE
 scrub: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        datavol                                  ONLINE
          c7t600A0B80002FBC5D00001AF852C30FCAd0  ONLINE
          c7t600A0B80002FBC5D00001AFA52C3101Ed0  ONLINE
          c7t600A0B80002FBC5D00001AFB52C310AAd0  ONLINE
          c7t600A0B80002FBC6700001B1052C284A9d0  ONLINE
          c7t600A0B80002FBC6700001B1152C28591d0  ONLINE
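If more than one exported pool carries the same name, the numeric identifier shown by zpool import disambiguates them. Using the id reported above, the pool could equally be imported with:

# zpool import 16114106878260192746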
Attachments

This solution has no attachment.