Asset ID: |
1-71-2085700.1 |
Update Date: | 2018-05-11 |
Keywords: | |
Solution Type: | Technical Instruction |
Solution: | 2085700.1 |
Oracle ZFS Storage Appliance: Upgrading from Release 2013.1.4.8 (and earlier) to 2013.1.4.9 (and later) May Require Additional Time for Hardware Updates
Related Items |
- Sun ZFS Storage 7420
- Oracle ZFS Storage ZS5-2
- Oracle ZFS Storage ZS3-2
- Oracle ZFS Storage ZS4-4
- Oracle ZFS Storage Appliance Racked System ZS5-4
- Oracle ZFS Storage ZS5-4
- Sun ZFS Storage 7120
- Oracle ZFS Storage ZS3-4
- Oracle ZFS Storage Appliance Racked System ZS5-2
- Sun ZFS Storage 7320
- Oracle ZFS Storage Appliance Racked System ZS4-4
- Oracle ZFS Storage ZS3-BA
|
Related Categories |
- PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS
|
In this Document
Applies to:
Sun ZFS Storage 7120 - Version All Versions and later
Sun ZFS Storage 7320 - Version All Versions and later
Sun ZFS Storage 7420 - Version All Versions and later
Oracle ZFS Storage ZS3-2 - Version All Versions and later
Oracle ZFS Storage ZS4-4 - Version All Versions and later
7000 Appliance OS (Fishworks)
Goal
Oracle regularly provides updates for its ZFS appliance software. New code may be downloaded via Doc ID 2021771.1. For release 2013.1.4.9 (and later), extra attention should be given to the upgrade process.
These releases include a significant number of disk drive firmware updates, so administrators need to allocate extra time for the updates to complete. This document provides a working example of how to monitor these updates and what to look for.
WARNING: Any appliance with Hitachi 600GB disk drives (HUS156060VLS600 / HUS1560SCSUN600G) must be upgraded to 2013.06.05.5.6 or later.
Solution
The upgrade process itself is unchanged (see Doc ID 1447284.1). The caveat comes in the final steps of the process.
- On a clustered system, you must leave the cluster in an AKCS_OWNER/AKCS_STRIPPED state until the JBOD controller updates (SIMs/IOMs) complete. (You can view the cluster state in the CLI via Configuration -> Cluster -> Show.)
- On an unclustered system, you can simply monitor the SIM/IOM updates until completion.
- Once the SIM/IOM updates complete, you can return the cluster to an AKCS_CLUSTERED state (CLI: Configuration -> Cluster -> Failback).
- Disk drives are upgraded after all SIM/IOM upgrades complete.
- Disk drive updates occur in either an OWNER/STRIPPED state or a CLUSTERED state.
- The disk must be present (not failed), and both nodes must be on the same upgraded revision before disk upgrades start.
- Disks are upgraded by the node that owns the pool the disk resides in. If a disk is not in a pool, it is upgraded by the node with the higher ASN.
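Before failing a cluster back, you can script a check of the "Hardware Updates: N remaining" counter that the CLI's maintenance system updates show command prints (see the CLI transcript later in this document). A minimal sketch, assuming you capture that output over ssh (the hostname and ssh usage are illustrative, not from this document); here a sample line in the same format stands in for the live output:

```shell
# Extract the remaining hardware-update count from "maintenance system
# updates show" output. In practice the text would come from the appliance,
# e.g.: ssh root@zfs-node1 'maintenance system updates show'
# A sample line in the documented format stands in for that output here.
sample='Hardware Updates: 54 remaining'
remaining=$(printf '%s\n' "$sample" \
  | sed -n 's/.*Hardware Updates: \([0-9][0-9]*\) remaining.*/\1/p')
echo "$remaining"
```

A wrapper loop could sleep and re-check until the count reaches 0 before issuing the failback.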
Both the BUI and the CLI allow monitoring of firmware updates.
In the following example, a ZS3-2 is upgraded to 2013.1.4.10. Once the ZS3-2 boots the new code, 54 firmware updates need to run.
This view is from Maintenance -> System in the BUI. Refresh the view periodically to watch the firmware-update count fall to 0.

Selecting the "i" icon next to "Updates Remaining" gives more verbose output.

You can also watch the firmware updates in the CLI. Once again, we start with 54 pending updates:
ZFS-NODE1 :maintenance system updates> show
Updates:
UPDATE DATE STATUS
ak-nas@2013.06.05.4.10,1-1.1 2015-11-3 21:07:29 current
ak-nas@2013.06.05.4.4,1-1.1 2015-7-20 22:45:35 previous
Deferred updates:
This software update contains deferred updates that enable features which are
incompatible with earlier software versions. As these updates cannot be
reverted once committed, and peer system resources are updated across a
cluster, verifying first that the upgraded system is functioning properly
before applying deferred updates is advised.
1. Support for ndmp-zfs replica backup
Hardware Updates: 54 remaining
Depending on your code version, there is a newer option to monitor firmware updates: check for firmware or firmwareUpdatesDetails in the Maintenance -> System -> Updates context. It shows all Pending, Failed, and In Progress updates.
ZFS-NODE1 :maintenance system updates> firmwareUpdatesDetails
Pending
Component Current Version Status
Disk <unknown> HDD 0 A31A undefined cannot offline disk from pool system
Disk <unknown> HDD 2 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 14 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 18 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 0 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 11 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 7 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 0 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 17 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 13 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 9 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 15 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 4 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 9 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 3 A310 undefined cannot offline disk from pool pool-de2-24c3t
Disk <unknown> HDD 18 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 12 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 4 A1B2 undefined cannot offline disk from pool pool-de2-24c4t
Disk <unknown> HDD 14 A310 undefined cannot offline disk from pool pool-de2-24c3t
Failed
No Failed Updates
In Progress
Component Current Version Status
Disk <unknown> HDD 1 A31A Fri Dec 04 2015 15:23:32 GMT+0000 (UTC) Update started
Disk <unknown> HDD 16 A310 Fri Dec 04 2015 15:25:14 GMT+0000 (UTC) Update started
Disk <unknown> HDD 10 A1B2 Fri Dec 04 2015 15:25:22 GMT+0000 (UTC) Update started
Disk <unknown> HDD 6 A1B2 Fri Dec 04 2015 15:25:33 GMT+0000 (UTC) Update started
Disk <unknown> HDD 1 A310 Fri Dec 04 2015 15:25:55 GMT+0000 (UTC) Update started
Disk <unknown> HDD 6 A310 Fri Dec 04 2015 15:26:05 GMT+0000 (UTC) Update started
Disk <unknown> HDD 1 A1B2 Fri Dec 04 2015 15:26:13 GMT+0000 (UTC) Update started
Disk <unknown> HDD 17 A1CA Fri Dec 04 2015 15:26:23 GMT+0000 (UTC) Update started
Disk <unknown> HDD 11 A310 Fri Dec 04 2015 15:26:35 GMT+0000 (UTC) Update started
Disk <unknown> HDD 15 A310 Fri Dec 04 2015 15:26:43 GMT+0000 (UTC) Update started
Disk <unknown> HDD 8 A310 Fri Dec 04 2015 15:26:51 GMT+0000 (UTC) Update started
Disk <unknown> HDD 13 A1B2 Fri Dec 04 2015 15:27:00 GMT+0000 (UTC) Update started
Disk <unknown> HDD 5 A1B2 Fri Dec 04 2015 15:27:21 GMT+0000 (UTC) Update started
Disk <unknown> HDD 19 A1B2 Fri Dec 04 2015 15:27:33 GMT+0000 (UTC) Update started
Disk <unknown> HDD 2 A1B2 Fri Dec 04 2015 15:27:44 GMT+0000 (UTC) Update started
Disk <unknown> HDD 8 A1B2 Fri Dec 04 2015 15:27:56 GMT+0000 (UTC) Update started
Disk <unknown> HDD 5 A310 Fri Dec 04 2015 15:28:17 GMT+0000 (UTC) Update started
Disk <unknown> HDD 12 A310 Fri Dec 04 2015 15:28:28 GMT+0000 (UTC) Update started
Disk <unknown> HDD 10 A310 Fri Dec 04 2015 15:28:38 GMT+0000 (UTC) Update started
Disk <unknown> HDD 7 A310 Fri Dec 04 2015 15:28:46 GMT+0000 (UTC) Update started
Disk <unknown> HDD 16 A1B2 Fri Dec 04 2015 15:28:56 GMT+0000 (UTC) Update started
Disk <unknown> HDD 19 A310 Fri Dec 04 2015 15:29:07 GMT+0000 (UTC) Update started
Disk <unknown> HDD 3 A1B2 Fri Dec 04 2015 15:29:15 GMT+0000 (UTC) Update started
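If many updates are in flight, a quick tally of the firmwareUpdatesDetails rows by current firmware revision (the fifth whitespace-separated field of each "Disk ..." row) can be useful. A minimal sketch; the sample reuses rows in the same format as the CLI output above, and in practice you would pipe the real output through the same awk program:

```shell
# Tally firmwareUpdatesDetails rows by current firmware revision (field 5).
# Sample rows follow the "Component Current Version Status" layout above.
sample='Disk <unknown> HDD 1 A31A Fri Dec 04 2015 15:23:32 GMT+0000 (UTC) Update started
Disk <unknown> HDD 16 A310 Fri Dec 04 2015 15:25:14 GMT+0000 (UTC) Update started
Disk <unknown> HDD 10 A1B2 Fri Dec 04 2015 15:25:22 GMT+0000 (UTC) Update started
Disk <unknown> HDD 1 A310 Fri Dec 04 2015 15:25:55 GMT+0000 (UTC) Update started'
tally=$(printf '%s\n' "$sample" \
  | awk '$1 == "Disk" { c[$5]++ } END { for (r in c) printf "%s %d\n", r, c[r] }' \
  | sort)
printf '%s\n' "$tally"
```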
When the firmware updates complete, "Firmware Updates" disappears from the BUI, and the CLI reports no updates.
ZFS NODE1:> maintenance system updates firmwareUpdatesDetails
Pending
No Pending Updates
Failed
No Failed Updates
In Progress
No Updates in Progress
If the Firmware Updates do not complete, contact Oracle Support to assist with next steps.
From the shell, we can see the effects of the upgrades on the pool.
- If more than one pool exists, the pools are upgraded at the same time.
- There are three steps per disk: the disk is offlined, upgraded, and the deltas are resilvered when it is onlined.
- At most, the number of disks the vdev's redundancy allows are offlined per vdev:
- i.e. 1 disk per vdev for raidz1, 2 disks per vdev for raidz2, 1 disk per mirrored vdev, etc.
- Below is an example of a mirrored vdev being upgraded.
You can watch the parallel updates start and finish with aklog. For example:
ZFS NODE1 # aklog rm| grep "upgrade of disk" | more
Fri Dec 4 15:25:15 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-
DE2-24C:chassis-part=32151846+1+1:chassis-serial=1338NMT006:fru-serial=001336RVBE8K--------YVKVBE8K:fru-part=HITACHI-H7230A
S60SUN3.0T:fru-revision=A310:devid=id1,sd@n5000cca03ed90c3c/ses-enclosure=1/bay=16/disk=0 (c0t5000CCA03ED90C3Cd0)
Fri Dec 4 15:25:23 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-
DE2-24C:chassis-part=31926877+1+1:chassis-serial=1334FMT00G:fru-serial=001330EAH7NX--------PAKAH7NX:fru-part=HGST-H7240AS60
SUN4.0T:fru-revision=A1B2:devid=id1,sd@n5000cca024bc2dd4/ses-enclosure=0/bay=10/disk=0 (c0t5000CCA024BC2DD4d0)
Fri Dec 4 15:25:34 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-
DE2-24C:chassis-part=31926877+1+1:chassis-serial=1334FMT00G:fru-serial=001330EA8JKX--------PAKA8JKX:fru-part=HGST-H7240AS60
SUN4.0T:fru-revision=A1B2:devid=id1,sd@n5000cca024bbc908/ses-enclosure=0/bay=6/disk=0 (c0t5000CCA024BBC908d0)
Fri Dec 4 15:25:55 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-
DE2-24C:chassis-part=32151846+1+1:chassis-serial=1338NMT006:fru-serial=001331RPSKSK--------YVKPSKSK:fru-part=HITACHI-H7230A
S60SUN3.0T:fru-revision=A310:devid=id1,sd@n5000cca03ed0ab2c/ses-enclosure=1/bay=1/disk=0 (c0t5000CCA03ED0AB2Cd0)
Fri Dec 4 15:26:05 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-
DE2-24C:chassis-part=32151846+1+1:chassis-serial=1338NMT006:fru-serial=001336RVGSHK--------YVKVGSHK:fru-part=HITACHI-H7230A
S60SUN3.0T:fru-revision=A310:devid=id1,sd@n5000cca03ed93e3c/ses-enclosure=1/bay=6/disk=0 (c0t5000CCA03ED93E3Cd0)
Fri Dec 4 15:26:13 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-
DE2-24C:chassis-part=31926877+1+1:chassis-serial=1334FMT00G:fru-serial=001330EAMU6X--------PAKAMU6X:fru-part=HGST-H7240AS60
SUN4.0T:fru-revision=A1B2:devid=id1,sd@n5000cca024bc7264/ses-enclosure=0/bay=1/disk=0 (c0t5000CCA024BC7264d0)
............
To watch a specific drive, grep on its serial number:
ZFS NODE1 # aklog rm | grep "upgrade of disk" | grep 001330EAG98X
Fri Dec 4 15:35:08 2015: [disk update] beginning upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-DE2-24C:chassis-part=31926877+1+1:chassis-serial=1334FMT00G:fru-serial=001330EAG98X--------PAKAG98X:fru-part=HGST-H7240AS60SUN4.0T:fru-revision=A1B2:devid=id1,sd@n5000cca024bc1f98/ses-enclosure=0/bay=18/disk=0 (c0t5000CCA024BC1F98d0)
Fri Dec 4 15:38:24 2015: [disk update] upgrade of disk hc://:chassis-mfg=Oracle-Corporation:chassis-name=ORACLE-DE2-24C:chassis-part=31926877+1+1:chassis-serial=1334FMT00G:fru-serial=001330EAG98X--------PAKAG98X:fru-part=HGST-H7240AS60SUN4.0T:fru-revision=A1B2:devid=id1,sd@n5000cca024bc1f98/ses-enclosure=0/bay=18/disk=0 verified: device online
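The pair of aklog lines above also lets you estimate how long a single drive's update took: the time between "beginning upgrade" and "verified: device online". A minimal sketch that diffs the two timestamps from the example (same-day times assumed):

```shell
# Diff the "beginning upgrade" (15:35:08) and "verified: device online"
# (15:38:24) timestamps from the aklog example; awk converts HH:MM:SS to
# seconds, avoiding shell octal pitfalls with leading zeros like "08".
dur=$(awk 'function sec(t, a) { split(t, a, ":"); return a[1]*3600 + a[2]*60 + a[3] }
BEGIN { print sec("15:38:24") - sec("15:35:08") }')
echo "$dur seconds"
```

For this drive, the firmware update plus online verification took a little over three minutes.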
Here is the pool status, also showing the simultaneous updates:
pool: pool-de2-24c3t
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: resilvered 0 in 0h0m with 0 errors on Sat Nov 21 13:16:00 2015
config:
NAME STATE READ WRITE CKSUM
pool-de2-24c3t DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c0t5000CCA03E812030d0 ONLINE 0 0 0
c0t5000CCA03ED0AB2Cd0 OFFLINE 0 0 0
mirror-1 DEGRADED 0 0 0
c0t5000CCA03ED0B6A4d0 OFFLINE 0 0 0
c0t5000CCA03ED4E90Cd0 ONLINE 0 0 0
mirror-2 DEGRADED 0 0 0
c0t5000CCA03ED8B0D8d0 ONLINE 0 0 0
c0t5000CCA03ED8B368d0 OFFLINE 0 0 0
mirror-3 DEGRADED 0 0 0
c0t5000CCA03ED8BB04d0 OFFLINE 0 0 0
c0t5000CCA03ED09D64d0 ONLINE 0 0 0
mirror-4 DEGRADED 0 0 0
c0t5000CCA03ED9FB0Cd0 ONLINE 0 0 0
c0t5000CCA03ED90C3Cd0 OFFLINE 0 0 0
mirror-5 DEGRADED 0 0 0
c0t5000CCA03ED93E3Cd0 OFFLINE 0 0 0
c0t5000CCA03ED93EACd0 ONLINE 0 0 0
mirror-6 DEGRADED 0 0 0
c0t5000CCA03ED98B9Cd0 ONLINE 0 0 0
c0t5000CCA03ED98D50d0 OFFLINE 0 0 0
mirror-7 DEGRADED 0 0 0
c0t5000CCA03ED99FF0d0 OFFLINE 0 0 0
c0t5000CCA03ED967F4d0 ONLINE 0 0 0
mirror-8 DEGRADED 0 0 0
c0t5000CCA03ED990C0d0 ONLINE 0 0 0
c0t5000CCA03ED991D0d0 OFFLINE 0 0 0
spares
c0t5000CCA03ED03334d0 AVAIL
c0t5000CCA03ED9416Cd0 AVAIL
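As a sanity check while watching the pool, you can confirm that no mirror vdev ever has more than one OFFLINE member, which is the expected pattern during rolling disk-firmware updates on mirrors. A minimal sketch; the sample reproduces two vdevs from the zpool status output above, and in practice you would feed the real "config:" section through the same awk program:

```shell
# Count OFFLINE members per mirror vdev and report the maximum.
# The sample mimics two vdevs from the zpool status config section above.
zpool_out='mirror-0 DEGRADED 0 0 0
c0t5000CCA03E812030d0 ONLINE 0 0 0
c0t5000CCA03ED0AB2Cd0 OFFLINE 0 0 0
mirror-1 DEGRADED 0 0 0
c0t5000CCA03ED0B6A4d0 OFFLINE 0 0 0
c0t5000CCA03ED4E90Cd0 ONLINE 0 0 0'
max=$(printf '%s\n' "$zpool_out" | awk '
$1 ~ /^mirror-/ { vdev = $1; off[vdev] = 0; next }
$2 == "OFFLINE"  { off[vdev]++ }
END { m = 0; for (v in off) if (off[v] > m) m = off[v]; print m }')
echo "max OFFLINE disks per vdev: $max"
```

A value above 1 on a mirrored pool would be unexpected during these updates and worth raising with Oracle Support.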
Check for relevancy - 11-May-2018
Attachments
This solution has no attachment