Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-2313224.1
Update Date: 2018-05-14
Keywords:

Solution Type: Technical Instruction

Solution 2313224.1: Oracle ZFS Storage Appliance: How to upgrade a clustered system running 2013.1.6.x Release or Later


Related Items
  • Sun ZFS Storage 7420
  • Oracle ZFS Storage ZS5-2
  • Oracle ZFS Storage ZS3-2
  • Oracle ZFS Storage ZS4-4
  • Oracle ZFS Storage Appliance Racked System ZS5-4
  • Oracle ZFS Storage ZS5-4
  • Oracle ZFS Storage Appliance Racked System ZS5-2
  • Oracle ZFS Storage ZS3-4
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage Appliance Racked System ZS4-4
  • Oracle ZFS Storage ZS3-BA
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: ZS




In this Document
Goal
Solution
 Preparing for a Software Upgrade
 Upgrading clustered controllers
References


Applies to:

Oracle ZFS Storage ZS5-2 - Version All Versions and later
Oracle ZFS Storage Appliance Racked System ZS5-2 - Version All Versions and later
Oracle ZFS Storage ZS4-4 - Version All Versions and later
Sun ZFS Storage 7320 - Version All Versions and later
Oracle ZFS Storage Appliance Racked System ZS4-4 - Version All Versions and later
7000 Appliance OS (Fishworks)

Goal

This document provides guidance on performing a clustered system software update the correct way.

There are two separate cluster configurations and processes:

Active-Active Cluster
Active-Passive Cluster

Clustered nodes must be upgraded one immediately after the other.

Important requirement: Before the maintenance window starts, make sure the gzipped firmware tarball has been downloaded and saved on a local host that the appliance can reach over FTP or HTTP.

Review the Release Notes carefully for every version between the currently installed version and the version you are updating to.

Warning: Clustered nodes CANNOT be left at two different code versions for longer than 8 hours. Systems left at two different code versions can corrupt data and cause an outage that takes days to repair.

 

Solution

Preparing for a Software Upgrade


Before you upgrade the software, perform the following actions for clustered controllers.

During the update process, some protocols may experience an outage. See related topics for more information.

1.  Verify your current software version.

In the BUI, go to Maintenance > System.

In the CLI, go to maintenance system updates and enter show.
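
For reference, a minimal CLI session for this check could look like the following (the controller-a prompt is a placeholder for your hostname, and the output columns vary by release). The installed release is the update listed with a status of "current".

   controller-a:> maintenance system updates
   controller-a:maintenance system updates> show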

2.  Remove extraneous system updates.

3.  Check the most recent release notes for additional preconditions that should be observed for the software release.

If you are skipping software releases, also review the release notes for all intervening releases.

See My Oracle Support document Oracle ZFS Storage Appliance: Software Updates (Doc ID 2021771.1).

4. Disable non-critical data services. These services may include replication, NDMP, shadow migration, or others.

Disabling these services can shorten the upgrade time and ensures that the system has a minimal operational load during the update.
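
As an illustration, non-critical services can also be disabled from the CLI. The service names shown (replication, ndmp, shadow) are examples only; disable just the services that apply to your environment, and keep a note of them so they can be re-enabled in Action 23.

   controller-a:> configuration services replication disable
   controller-a:> configuration services ndmp disable
   controller-a:> configuration services shadow disable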

5. Create a backup copy of the management configuration to minimize downtime in the event of an unforeseen failure.
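
A hedged CLI sketch for this backup is shown below; the exact context name can vary slightly between releases, so confirm it with the CLI help on your system. On most releases the backup command accepts or prompts for a descriptive comment.

   controller-a:> maintenance configs backup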

6. Ensure that any resilvering and scrub operations have completed.

In the BUI, go to Configuration > Storage and check the STATUS column next to each pool.

In the CLI, go to configuration storage, enter set pool= followed by the name of the pool you want to check, and then enter show.

The scrub property indicates whether scrub or resilver operations are active or completed.
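
For example, a CLI check on a pool named pool-0 (an illustrative name) would be:

   controller-a:> configuration storage
   controller-a:configuration storage> set pool=pool-0
   controller-a:configuration storage> show

Do not proceed while the scrub property reports an active scrub or resilver.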

7. Ensure that there are no active problems.

In the BUI, go to Maintenance > Problems.

In the CLI, go to maintenance problems show.

8. Perform a health check.

A health check is automatically run as part of the update process, but should also be run independently to check storage health prior to entering a maintenance window.
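
A hedged CLI sketch of running the health check on its own follows; it assumes the check subcommand is available in the selected update context on your release, and the update name is a placeholder.

   controller-a:> maintenance system updates
   controller-a:maintenance system updates> select ak-nas@<version>
   controller-a:maintenance system updates ak-nas@<version>> check

Resolve any issues the check reports before scheduling the maintenance window.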

9. Schedule a maintenance window of at least one hour to allow for disruptions in storage performance and availability during the update.

 

NOTE: With the 2013.1.6.0 (AK8.6.0) or later releases, a node can be updated while it is in the CLUSTERED or OWNER state (i.e. while the controller owns active resources).

Disk firmware updates will also run while heads are in the AKCS_CLUSTERED state.

Disk firmware upgrades on 2013.1.6.0 (AK8.6.0) or later releases:

To upgrade a given disk:

- The disk must be present.
- Expander upgrades must not be in progress.
- If the disk is on a shared chassis (i.e. a disk shelf), the cluster peers must be at the same software revision.
- If the disk is unused, the upgrade can only start on either the head in the owner state, or the head in the clustered state with the highest ASN.

The main side effect of this is that shared disk firmware is now upgraded only once both heads are running the new code.

If there are pending firmware upgrades for drives (and IOMs), they will not complete until the partner head is also upgraded.

Until then, they will instead report:

Pending
Component                Current Version    Status
Disk <unknown> HDD 11    A1CA               peer version mismatch
Disk <unknown> HDD 16    A1CA               peer version mismatch

 

 

Note - For the purpose of this procedure, the first controller to be upgraded is referred to as controller A and its peer is controller B.

If one of the controllers is in a Stripped state (it has no active resources), upgrade that controller first to avoid availability delays.

If both controllers in a cluster have active resources, choose either controller to upgrade first.

 

 

Upgrading clustered controllers

Action 1 : Upload the appliance firmware onto each head of the clustered system.
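
If you prefer the CLI for this step, the package can be pulled from the FTP or HTTP host prepared earlier; the URL, user, and password values below are placeholders. Repeat the download on controller B so that both heads hold the same package.

   controller-a:> maintenance system updates download
   controller-a:maintenance system updates download (uncommitted)> set url=http://<local-host>/<update-package>.pkg.gz
   controller-a:maintenance system updates download (uncommitted)> set user=<user>
   controller-a:maintenance system updates download (uncommitted)> set password=<password>
   controller-a:maintenance system updates download (uncommitted)> commit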

Action 2 : Log in to controller A, go to Maintenance > System, and click the arrow icon next to the name of the update you want to install.

Action 3 : (Optional) Click CHECK to perform health checks.

For information about health checks, see MOS Doc ID 1904850.1 (Oracle ZFS Storage Appliance: Perform Health Check Manually before Firmware Upgrade)

Action 4 : Click APPLY to begin the update process.
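
The CLI equivalent, assuming the upgrade subcommand on your release, is to select the uploaded update and start it (the update name is a placeholder). The controller reboots as part of the update, so the CLI session will disconnect.

   controller-a:> maintenance system updates
   controller-a:maintenance system updates> select ak-nas@<version>
   controller-a:maintenance system updates ak-nas@<version>> upgrade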

Action 5 : Wait for controller A to fully reboot, and log back in to controller A.

Action 6 : Go to Configuration > Cluster and verify that controller A is in the "Ready (waiting for failback)" state.
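
From the CLI, the same check can be made with configuration cluster show. The abridged output below is a sketch, and property names may differ slightly by release; "Ready (waiting for failback)" corresponds to the AKCS_STRIPPED state.

   controller-a:> configuration cluster show
   ...
                state = AKCS_STRIPPED
          description = Ready (waiting for failback)
   ...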

Action 7 : (Optional) To monitor firmware updates, go to Maintenance > System and check the update counter.

Action 8 : Log in to controller B and go to Configuration > Cluster to verify that controller B is in the "Active (takeover completed)" state.

Action 9 : Go to Configuration > Cluster, and click FAILBACK to change the cluster to an Active/Active configuration.

Note - This is not necessary if you want an Active/Passive configuration.
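
On the CLI, the same failback can be issued from the cluster context on controller B (it may prompt for confirmation before moving resources):

   controller-b:> configuration cluster failback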

Action 10 : Go to Maintenance > System, and click the arrow icon next to the name of the update you want to install.

Action 11 : (Optional) Click CHECK to perform health checks.

For information about health checks, see MOS Doc ID 1904850.1 (Oracle ZFS Storage Appliance: Perform Health Check Manually before Firmware Upgrade)

Action 12 : Click APPLY to begin the update process.

Action 13 : Wait for controller B to fully reboot, and then log back in to controller B.

Action 14 : Go to Configuration > Cluster to verify that controller B is in the "Ready (waiting for failback)" state.

Action 15 : (Optional) To monitor firmware updates, go to Maintenance > System and check the update counter.

Action 16 : Log in to controller A and go to Configuration > Cluster to verify that controller A is in the "Active (takeover completed)" state.

Action 17 : To verify that all firmware updates are complete, go to Maintenance > System and check the update counter.

Important Note - Do not begin the next step until all firmware updates are complete.

Action 18 : Go to Configuration > Cluster and click FAILBACK to change the cluster to an Active/Active configuration.

Note - This is not necessary if you want an Active/Passive configuration.

 

Both controllers are now upgraded.

 

Action 19 : Go to Maintenance > Hardware to verify that all disks are online.

All lights should be green.
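
A quick CLI spot check is also possible: maintenance hardware show lists each chassis and its components, and any faulted parts will also be reported under Maintenance > Problems.

   controller-a:> maintenance hardware show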

Action 20 : Verify there are no controller and disk shelf component errors.

All lights should be green. An amber light indicates a component error.

Action 21 : If any components have errors, check for pool errors by going to Configuration > Storage, and check the STATUS and ERRORS columns for each pool.

Pools should be online and have no errors.

Action 22 : Log in to controller B and repeat Actions 19 through 21 on controller B.

Action 23 : Enable any data services that were disabled before the upgrade.

 

 


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.