
Asset ID: 1-79-1919830.1
Update Date: 2015-07-18
Keywords:

Solution Type: Predictive Self-Healing Sure

Solution 1919830.1: SuperCluster - 11.2.0.4 Grid Infrastructure and Database Upgrade for 11.2.0.3


Related Items
  • Oracle SuperCluster M6-32 Hardware
  • Oracle SuperCluster T5-8 Hardware
  • Oracle Exadata Storage Server Software
  • Oracle Database - Enterprise Edition
  • SPARC SuperCluster T4-4
Related Categories
  • PLA-Support>Eng Systems>Exadata/ODA/SSC>Oracle Exadata>DB: Exadata_EST




In this Document
Purpose
Details
 Oracle SuperCluster Maintenance
 Overview
 Conventions
 Assumptions
 Oracle SuperCluster Supported Versions Documents
 Prepare the Existing Environment
 Plan the Upgrade
 Test on non-production first
 Preserve SQL Execution Plans with SQL Plan Management
 Develop and Test a Recovery Plan
 Obtain Required Account Access
 Review 11.2.0.4 Upgrade Prerequisites
 Generic requirements:
 Download Required Files
 Apply patches / updates where required before proceeding with the upgrade
 Update OPatch in existing 11.2 Grid Home and existing 11.2 Database Homes on All Database Servers
 Apply required patches / updates identified in the section  'Upgrade Prerequisites' 
 Run Exachk or HealthCheck
 Validate Readiness for Oracle Clusterware upgrade using CVU and Exachk
 Early stage pre-upgrade check: analyze your databases to be upgraded with the Pre-Upgrade Information Tool
 Install and Upgrade Grid Infrastructure to 11.2.0.4
 Create a snapshot based backup of the database server partitions
 Create the new Grid Infrastructure (GI_HOME) directory where 11.2.0.4 will be installed
 Prepare installation software
 Perform the 11.2.0.4 Grid Infrastructure software installation and upgrade using the Oracle Universal Installer (OUI)
 Apply the latest Bundle Patch and the workaround for item DB_10 in SuperCluster Critical Issues
 If available apply 11.2.0.4 Bundle Patch Overlay Patches to the Grid Infrastructure Home as specified in  SuperCluster Supported Versions for your hardware type
 Apply Customer-specific 11.2.0.4 One-Off Patches to the Grid Infrastructure Home
 Change SGA memory settings for ASM
 Verify values for memory_target and memory_max_target
 Change Custom Scripts and Environment Variables to Reference the 11.2.0.4 Grid Home
 Install Database 11.2.0.4 Software
 Prepare Installation Software
 Perform 11.2.0.4 Database Software Installation with the Oracle Universal Installer (OUI) 
 Update OPatch in the New Database Home on All Database Servers
 Install Latest 11.2.0.4 Bundle Patch for Oracle  Exadata Database Machine to the Database Home  - Do Not Perform Post-Installation Steps
 When available: Apply 11.2.0.4 Bundle Patch Overlay Patches to the Database Home as Specified in Document 888828.1
 Apply Customer-specific 11.2.0.4 One-Off Patches
 Upgrade Database to 11.2.0.4
 Backing up the database and creating a Guaranteed Restore Point
 Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
 Run Pre-Upgrade Information Tool
 Handle obsolete and underscore parameters
 Review pre-upgrade information tool output
 Data Guard only - Synchronize Standby and Change the Standby Database to use the new 11.2.0.4 Database Home
 Flush all redo generated on the primary and disable the broker
 Data Guard only - Disable Fast-Start Failover and Data Guard Broker
 Shutdown the primary database.
 Shutdown the standby database and restart it in an 11.2.0.4 database home
 Start all primary instances in restricted mode
 Upgrade the Database with Database Upgrade Assistant (DBUA)
 Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'
 Change Custom Scripts and Environment Variables to Reference the 11.2.0.4 Database Home
 Underscore Initialization Parameters
 Perform 11.2.0.4 Bundle Patch for Oracle Exadata Database Machine Post-Installation Steps
 Data Guard only - Enable Fast-Start Failover and Data Guard Broker
 Post-upgrade Steps
 Remove Guaranteed Restore Point 
 DBFS only - Perform DBFS Required Updates
 Obtain latest mount-dbfs.sh script from Document 1054431.1
 Edit mount-dbfs.sh script and Oracle Net files for the new 11.2.0.4 environment
 Run Exachk
 Re-configure RMAN Media Management Library
 Troubleshooting
 Revision History
 Appendix
 Oracle Documentation  
 Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 11.2.0.4 Linux and Solaris x86-64 
 Patch matrix for minimal required patches on top of 11.2.0.4 
 Community Discussions
References


Applies to:

Oracle SuperCluster M6-32 Hardware - Version All Versions to All Versions [Release All Releases]
Oracle Exadata Storage Server Software - Version 11.2.1.2.0 and later
Oracle Database - Enterprise Edition - Version 11.2.0.4 to 11.2.0.4 [Release 11.2]
SPARC SuperCluster T4-4 - Version All Versions to All Versions [Release All Releases]
Oracle SuperCluster T5-8 Hardware - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.

Purpose

This document provides step-by-step instructions for upgrading from Oracle Database and Oracle Grid Infrastructure release 11.2.0.3 BP8 and later to 11.2.0.4 on Oracle SuperCluster.

Details

Oracle SuperCluster Maintenance

11.2.0.3 BP8 and later to 11.2.0.4 Upgrade

Overview

This document provides step-by-step instructions for upgrading from Oracle Database and Oracle Grid Infrastructure release 11.2.0.3 BP8 and later to 11.2.0.4 on Oracle SuperCluster.

5 Steps to Upgrade to 11.2.0.4:

Section Overview

Prepare the Existing Environment
The software releases and patches installed in the current environment must be at certain minimum levels before the upgrade to 11.2.0.4 can begin. Depending on the existing software installed, updates performed during this section may be performed in a rolling manner or may require database-wide downtime. This section also recommends capturing baseline SQL execution plans and verifying that database restores can be performed in case a rollback is needed. The preparation phase details the required patches and where to download and stage them.

Install and Upgrade Grid Infrastructure to 11.2.0.4
The Grid Infrastructure upgrade from 11.2.0.3 to 11.2.0.4 is always performed out of place and in a RAC rolling manner.

Install Database 11.2.0.4 Software
Database 11.2.0.4 software is installed into a new ORACLE_HOME directory, with no impact to running applications.

Upgrade Database to 11.2.0.4
Database upgrades from 11.2.0.3 to 11.2.0.4 require database-wide downtime. Rolling upgrade with (Transient) Logical Standby or Oracle GoldenGate may be used to reduce database downtime; it is not covered in this document. For details on a transient logical rolling upgrade process see Document 949322.1 - Oracle11g Data Guard: Database Rolling Upgrade Shell Script.

Post-upgrade Steps
Includes both required and optional steps to perform following the upgrade, such as updating DBFS, performing a general health check, re-configuring for Cloud Control, and cleaning up the old, unused home areas.

Troubleshooting
Links to helpful troubleshooting documents.

 

Conventions

  • The documented steps apply to upgrades from 11.2.0.3 to 11.2.0.4 unless specified otherwise
  • New database home will be /u01/app/oracle/product/11.2.0.4/dbhome_1
  • New grid home will be /u01/app/11.2.0.4/grid
  • For recommended patches on top of 11.2.0.4, consult the SuperCluster Supported Versions document for your hardware type.

Assumptions

  • The database and grid software owner is oracle.
  • The Oracle inventory group is oinstall.
  • The files ~oracle/dbs_group and ~root/dbs_group exist and contain the names of all database servers (see the example after this list).
  • The current database home is /u01/app/oracle/product/11.2.0.3/dbhome_1; this may vary in your environment.
  • The current grid home is the 11.2.0.3 Grid Infrastructure home.
  • The primary database to be upgraded is named PRIM.
  • The standby database associated with primary database PRIM is named STBY.
  • All SuperCluster best practices are in place, such as DISM being disabled, ssctuner running, and proper psets and pools configured for the global and non-global zones (where applicable).
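A minimal sketch of creating these group files, assuming two database servers named node1 and node2 (substitute your own host names); place the same content in ~oracle/dbs_group:

(root)# cat > ~root/dbs_group <<EOF
node1
node2
EOF
(root)# dcli -g ~root/dbs_group -l root hostname

The dcli command simply echoes each host name back and confirms that passwordless ssh to all database servers is working.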

Oracle SuperCluster Supported Versions Documents

Oracle SuperCluster Supported Software Versions - All Hardware Types <Document 1567979.1>


 

Prepare the Existing Environment

Here are the steps performed in this section.

  • Plan for the Upgrade
  • Review Database 11.2.0.4 Upgrade Prerequisites
  • Download Required Files
  • Apply required patches
  • Run Exachk
  • Validate Readiness for Oracle Clusterware upgrade

Plan the Upgrade

In relation to planning, the following items are recommended:

Test on non-production first

Upgrades or patches should always be applied first on test environments. Testing on non-production environments allows people to become familiar with the patching steps and learn how the patching will impact their system and applications. You need a series of carefully designed tests to validate all stages of the upgrade process. Executed rigorously and completed successfully, these tests ensure that the process of upgrading the production database is well understood, predictable, and successful. Perform as much testing as possible before upgrading the production database. Do not underestimate the importance of a complete and repeatable testing process. The types of tests to perform are the same whether you use Real Application Testing features like Database Replay or SQL Performance Analyzer, or perform testing manually.

The estimated downtime required for the database upgrade is 30-90 minutes. Additional downtime may be required for post-upgrade steps. This varies based on factors such as the amount of PL/SQL that requires recompilation.

Resource management plans are expected to be persistent after the upgrade.

Preserve SQL Execution Plans with SQL Plan Management

SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information. SQL plan management is a preventative mechanism that records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines composed of a set of existing plans known to be efficient. The SQL plan baselines are then used to preserve the performance of the corresponding SQL statements, regardless of changes occurring in the system. See the Oracle Database Performance Tuning Guide for more information about using SQL Plan Management.
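As an illustrative sketch only (the sql_id value is a placeholder; see the Performance Tuning Guide for the full DBMS_SPM options), plans can be captured automatically while a representative workload runs, or loaded for individual statements from the cursor cache:

SYS@PRIM1> alter system set optimizer_capture_sql_plan_baselines = true sid='*' scope=both;

-- run the representative application workload while automatic capture is enabled, then disable it --

SYS@PRIM1> alter system set optimizer_capture_sql_plan_baselines = false sid='*' scope=both;
SYS@PRIM1> variable cnt number
SYS@PRIM1> exec :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => '<sql_id>');
SYS@PRIM1> select count(*) from dba_sql_plan_baselines;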

Develop and Test a Recovery Plan

The ultimate success of your upgrade depends greatly on the design and execution of an appropriate backup strategy. Even though the Database Home and Grid Infrastructure Home will be upgraded out of place and therefore make rollback easier, the database and the filesystem should both be backed-up before committing the upgrade. See the Oracle Database Backup and Recovery Users Guide for information on database backups.
NOTE: In addition to having a backup of the database, it is recommended to create a Guaranteed Restore Point (GRP). As long as the database COMPATIBLE parameter is not changed after creating the GRP, the database can easily be flashed back after a (failed) upgrade. Flashing back to a Guaranteed Restore Point will back out all changes made in the database after the creation of the Guaranteed Restore Point. If transactions are made after this point, then alternative methods must be employed to restore them. Refer to the section 'Performing a Flashback Database Operation' in the 'Database Backup and Recovery User's Guide' for more information on flashing back a database. After a flashback, the database needs to be opened from the Oracle home it was running from before the upgrade.
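A minimal sketch of the flashback sequence after a failed upgrade, assuming the Guaranteed Restore Point name grpt_bf_upgr used later in this document, the node name dm01db01 used in the Data Guard examples, and that the database is started from the pre-upgrade (11.2.0.3) home:

(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start instance -d PRIM -n dm01db01 -o mount
SYS@PRIM1> flashback database to restore point grpt_bf_upgr;
SYS@PRIM1> alter database open resetlogs;
(oracle)$ srvctl start database -d PRIM

If the OCR configuration was already updated to reference the new home, it must also be pointed back to the old home (for example with 'srvctl modify database'); treat this as a sketch to be validated during recovery plan testing.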
 

Obtain Required Account Access

During the upgrade procedure, access to the database SYS account and to the operating system root and oracle users is required. Depending on which other components are upgraded, access to ASMSNMP and DBSNMP is also required. Passwords in the password file are expected to be the same for all instances.

Review 11.2.0.4 Upgrade Prerequisites


The following prerequisites must be in place prior to performing the steps in this document to upgrade Database or Grid Infrastructure to 11.2.0.4 without failures. If any of these prerequisites are not met, perform the required updates now before continuing with the 11.2.0.4 upgrade.
     
   
 
Component: Exadata storage servers and database servers
Prerequisite for 11.2.0.4 upgrade: Version 11.2.3.2.1 plus cell patch 16547261, or later; 12.1.1.1 is recommended.
If you choose to upgrade the cells rather than apply the patch, only do so as part of a Quarterly Full Stack Download Patch (QFSDP).

Component: Sun Datacenter InfiniBand Switch 36
Prerequisite for 11.2.0.4 upgrade: Version 1.3.3-2 or later

Component: Grid Infrastructure software
Prerequisite for 11.2.0.4 upgrade:
11.2.0.3 BP20 or later
- or -
11.2.0.3 BP19 or earlier, plus fix for bug 14639430*
- or -
11.2.0.2 BP12 or later, plus fix for bug 14639430*

Component: Database software
Prerequisite for 11.2.0.4 upgrade: 11.2.0.3 BP22 or later is preferred. If running below BP22 and you choose to upgrade your cells as part of this action, be aware of:

Bug 17854520 - Running cell version 11.2.3.3.0, or later, combined with 11.2.3.2.x or 11.2.3.1.x (e.g. during rolling cell patching or rack expansion) using Oracle Database 11.2.0.3 or 11.2.0.2 without patch 17854520 installed can cause a database hang or crash.

* - See the Appendix for available fixes for bug 14639430

Generic requirements:

  • If you must update software to a later version to meet the patching requirements for the 11.2.0.4 upgrade, then install the most recent version indicated in the SuperCluster Supported Versions note for your hardware type.
  • Apply all overlay and additional patches for the installed Bundle Patch when required. The list of required overlay and additional patches can be found in the SuperCluster Supported Versions note for your hardware type, Exadata Critical Issues <Document 1270094.1>, and SuperCluster Critical Issues <Document 1452277.1>.
  • Verify that one-off patches currently installed on top of  11.2.0.3 are fixed in 11.2.0.4. Review the list of fixes provided with 11.2.0.4. For a list of provided fixes on top of 11.2.0.4 review the README. 
    • If you are unable to determine if a one-off patch is still required on top of 11.2.0.4 then contact Oracle Support.
  • Do not place the new ORACLE_HOME under /opt/oracle.
    • If this is done then see <Document 1281913.1> for additional steps required after software is installed.
  • Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:
    • The standby database is running in real-time apply mode, as determined by querying v$archive_dest_status and verifying recovery_mode='MANAGED REAL TIME APPLY' for the local archive destination on the standby database. If there is a delay or real-time apply is not enabled, then see the Data Guard Concepts and Administration guide on how to configure these settings and remove the delay.

Download Required Files

Based on the requirements determined earlier, download the following software into a staging area of your choice on one of the database servers in your cluster. The examples in this document use /u01/app/oracle/patchdepot, but you can specify your own location.

Data Guard - If there is a standby database then stage the files on one of the database servers in the standby system also.

Files to be staged on first database server only:
  • Oracle Database 11g, Release 2 (11.2.0.4) Patch Set 3:
    • "Oracle Database", typically the installation media for the database comes with two zip files.
    • "Oracle Grid Infrastructure"
    • "Deinstall tool"
See the README of <Patch 13390677> for the exact filenames you need for Solaris SPARC.
     
  • When available: download the latest Bundle Patch for 11.2.0.4
    • It is typically recommended to download the latest OPatch release when applying a Bundle Patch. See <Patch 6880880> - download p6880880_112000_SOLARIS64.zip (for 11.2).
  • Patches to be applied on top of 11.2.0.4 
    • The latest 11.2.0.4 Bundle Patch. This note uses 11.2.0.4 BP1 (<patch 17628025>) as an example

Apply patches / updates where required before proceeding with the upgrade

Update OPatch in existing 11.2 Grid Home and existing 11.2 Database Homes on All Database Servers

If the latest OPatch release is not in place and (bundle) patches need to be applied to the existing 11.2.0.3 Grid Infrastructure and Database homes, then update OPatch first. Make sure you extract the new OPatch as the home owner to prevent later issues such as false Exachk failures and OPatch errors.
If there is a standby database, then perform these actions on the standby database servers also.
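A minimal sketch of distributing and extracting the new OPatch across the cluster (the staging path, zip file name, and existing home locations are the ones assumed elsewhere in this document; adjust for your environment), run as the home owner:

(oracle)$ dcli -g ~/dbs_group -l oracle -f /u01/app/oracle/patchdepot/p6880880_112000_SOLARIS64.zip -d /tmp
(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /tmp/p6880880_112000_SOLARIS64.zip -d /u01/app/oracle/product/11.2.0.3/dbhome_1"
(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /tmp/p6880880_112000_SOLARIS64.zip -d /u01/app/11.2.0/grid"
(oracle)$ dcli -g ~/dbs_group -l oracle /u01/app/oracle/product/11.2.0.3/dbhome_1/OPatch/opatch version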

Apply required patches / updates identified in the section  'Upgrade Prerequisites' 

Run Exachk or HealthCheck

For Oracle SuperCluster run the latest release of Exachk to validate software, hardware, firmware, and configuration best practices. Resolve any issues identified by Exachk before proceeding with the upgrade. 

 

NOTE: It is recommended to run Exachk before and after the upgrade. When doing this, Exachk may offer recommendations for the compatible settings for the database, ASM and diskgroups. At some point it is recommended to advance compatible settings, but a conservative approach is advised, because advancing compatible settings can result in not being able to downgrade or revert to the earlier setting. It is therefore recommended to revisit compatible parameters after system performance and stability have been accepted and there is no potential for downgrade.
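For reference, the current diskgroup compatible settings can be reviewed with a query such as the following (the diskgroup names and values shown are illustrative):

SYS@+ASM1> select name, compatibility, database_compatibility from v$asm_diskgroup;

NAME       COMPATIBILITY   DATABASE_COMPATIBILITY
---------- --------------- ----------------------
DATA       11.2.0.3.0      11.2.0.2.0
RECO       11.2.0.3.0      11.2.0.2.0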

Validate Readiness for Oracle Clusterware upgrade using CVU and Exachk

Use the Cluster Verification Utility (CVU) to validate readiness for the Oracle Clusterware upgrade. Review the Oracle Grid Infrastructure Installation Guide, Appendix F 'How to Upgrade to Oracle Grid Infrastructure', section 'Using CVU to Validate Readiness for Oracle Clusterware Upgrades'. Unzip p13390677_112040_SOLARIS64_3of7.zip (Oracle Grid Infrastructure) and also unzip the CVU zip file downloaded earlier to a directory of choice in the staging area. Before executing CVU as the Grid Infrastructure owner, unset ORACLE_HOME, ORACLE_BASE and ORACLE_SID.

An example of running the pre-upgrade check is as follows:

(oracle)$   ./runcluvfy.sh stage -pre crsinst -upgrade \
     -src_crshome /u01/app/11.2.0/grid \
     -dest_crshome /u01/app/11.2.0.4/grid \
     -dest_version 11.2.0.4.0 \
     -n node1,node2 \
     -rolling \
     -fixup -fixupdir /home/oracle/fixit

 

 Use Exachk's upgrade module to check for additional upgrade best practices and last-minute patch requirements. See the Exachk documentation via <Document 1070954.1> for more information. As of this writing it checks DB/RAC concerns.

Early stage pre-upgrade check: analyze your databases to be upgraded with the Pre-Upgrade Information Tool

Download the Pre-Upgrade Information Tool from <Document 884522.1> - How to Download and Run Oracle's Database Pre-Upgrade Utility - and run it from the environment of the database to be upgraded, so there is sufficient time to address the items reported before upgrading. The tool is also provided with the 11.2.0.4 software, but that software is not yet installed at this stage. The Pre-Upgrade Information Tool will be run again just prior to the database upgrade section in this document.

During the pre-upgrade steps, the pre-upgrade tool (preupgrd.sql) will warn to set the CLUSTER_DATABASE parameter to FALSE. Ignore this warning.

Data Guard - If there is a standby database, then run the command on one of the nodes of the standby database cluster also.

 


 

Install and Upgrade Grid Infrastructure to 11.2.0.4

The instructions in this section perform the Grid Infrastructure software installation and upgrade to 11.2.0.4. The Grid Infrastructure upgrade from 11.2.0.3 to 11.2.0.4 can only be performed in a RAC rolling fashion - do not proceed unless Oracle Clusterware and ASM are running. This procedure does not require downtime. The Grid Infrastructure home always needs to be at a version equal to or later than the Database home. Therefore, upgrade the Grid Infrastructure software before the database upgrade.

Data Guard - If there is a standby database, then run these commands on the standby system separately to upgrade the standby system Grid Infrastructure.  The standby Grid Infrastructure upgrade can be performed in parallel with the primary if desired. 


Here are the steps performed in this section.

  • Create a backup of the database server partitions
  • Create the new GI_HOME directory where 11.2.0.4 will be installed
  • Prepare installation software
  • Perform 11.2.0.4 Grid Infrastructure software installation and upgrade using OUI
  • Apply latest Bundle Patch contents to Grid Infrastructure Home using 'opatch napply' (when available)
  • Change Custom Scripts and Environment Variables to Reference the 11.2.0.4 Grid Home

 

Create a snapshot based backup of the database server partitions

Even though the Grid Infrastructure is upgraded out of place, it is recommended to create a filesystem backup of the database server before proceeding.

Create the new Grid Infrastructure (GI_HOME) directory where 11.2.0.4 will be installed

In this document the new Grid Infrastructure home /u01/app/11.2.0.4/grid is used in all examples.  It is recommended that the new Grid Infrastructure home NOT be located under /opt/oracle.  If it is, then review <Document 1281913.1>. To create the new Grid Infrastructure home, run the following commands from the first database server.  You will need to substitute your Grid Infrastructure owner user name and Oracle inventory group name in place of oracle and oinstall, respectively. If your environment does not allow dcli then you will have to do this on each node of the cluster.
(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/11.2.0.4/grid/
(root)# dcli -g ~/dbs_group -l root chown oracle /u01/app/11.2.0.4/grid
(root)# dcli -g ~/dbs_group -l root chgrp -R oinstall /u01/app/11.2.0.4

Prepare installation software

Unzip all 11.2.0.4 software. Run the following command on the database server where the software is staged. (software staging area is /u01/app/oracle/patchdepot in the following example). 
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p13390677_112040_SOLARIS64_3of7.zip \
          -d /u01/app/oracle/patchdepot

Perform the 11.2.0.4 Grid Infrastructure software installation and upgrade using the Oracle Universal Installer (OUI)

Perform these instructions as the Grid Infrastructure software owner (which is oracle in this document) to install the 11.2.0.4 Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 11.2.0.2 or 11.2.0.3 to 11.2.0.4. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion.  The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 11.2.0.4 Grid Infrastructure Home the active Grid Infrastructure Home. For systems with a standby database in place this step can be performed before, or after installation of Grid Infrastructure on the primary system.

If the upgrade fails, then refer to <document 969254.1>  - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.

The OUI installation log is located at /u01/app/oraInventory/logs.
For OUI installations or execution of critical scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.
 
Set the environment then run the installer, as follows:
(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/grid
(oracle)$ ./runInstaller
Starting Oracle Universal Installer...
Perform the exact steps as described below on the installer screens:
  1. On Download Software Updates screen, select Skip software updates, and then click Next. 
  2. On Select Installation Options screen, select Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management, and then click Next.
  3. On Select Product Languages screen, select languages, and then click Next.
  4. On Grid Infrastructure Node Selection screen, verify all database nodes are shown and selected, and then click Next.
  5. On Privileged Operating System Groups screen, verify group names and change if desired, and then click Next.
  6. On Specify Installation Location screen, enter /u01/app/11.2.0.4/grid as the Software Location for the Grid Infrastructure home, and then click Next.
    • The directory should already exist and it is recommended that the Grid Infrastructure home NOT be placed under /opt/oracle
  7. On Summary screen, verify information presented about installation, and then click Install.
  8. On Install Product screen, monitor installation progress.
  9. On Execute Configuration scripts screen, do NOT execute the scripts yet but first perform the following steps in order:

Apply the latest Bundle Patch and the workaround for item DB_10 in SuperCluster Critical Issues

Apply the latest Bundle Patch and the workaround for item DB_10 in SuperCluster Critical Issues <Document 1452277.1> to the Grid Infrastructure Home using 'opatch napply' before running rootupgrade.sh on all nodes in the cluster. Note that if you apply the workaround before the bundle patch, you may have to redo it after the bundle patch.
 
In order to prevent against an extra stop and start of the Grid Infrastructure and Databases later on in the process, the latest 11.2.0.4 Bundle Patch needs to be applied to the Grid Infrastructure home on all nodes in the cluster at this stage. In this section 11.2.0.4 BP1 (<patch 17628025>) is used as an example.
The current phase the installation is in does not allow applying the Bundle Patch using the 'opatch auto' functionality. In this phase the patches contained in the Bundle Patch can only be applied as the GI home owner using the 'opatch napply' command for each patch component. 
Consult the Bundle Patch README (section 'Manual Steps for Apply/Rollback Patch') for further instructions on what patch components need to be applied for the Grid Infrastructure home using 'opatch napply' as this differs per Bundle Patch.
Note that at this stage README step 1 ("Stop the GI managed resources running from DB home") and step 2 ("Run the pre root script"), as well as README steps 4, 5, 6, 7 and 8, must NOT be executed. An example of what remains is applying the 11.2.0.4 BP1 components using 'napply' on all nodes in the cluster as follows (this step assumes the latest OPatch was installed in the new GI home):

(oracle)$ export ORACLE_HOME=/u01/app/11.2.0.4/grid
(oracle)$ export PATH=$ORACLE_HOME/OPatch:$PATH
(oracle)$ opatch napply -oh $ORACLE_HOME -local /u01/app/oracle/patchdepot/17628025/17628006
(oracle)$ opatch napply -oh $ORACLE_HOME -local /u01/app/oracle/patchdepot/17628025/17629416



NOTE: No steps other than updating OPatch to the latest release and running 'opatch napply' as specified by the README should be done at this stage. For example, do not execute the commands 'rootcrs.pl -unlock' and 'rootcrs.pl -patch'. Also, for Exadata, 11.2.0.4 already has RDS linked in by default.
NOTE: See <Document 1410202.1> for more information on how to apply a Grid Infrastructure patch before the root script (root.sh or rootupgrade.sh) is executed. Note that <Document 1410202.1> talks about Patch Set Updates (PSU) while Exadata only has Bundle Patches (BP).

If available apply 11.2.0.4 Bundle Patch Overlay Patches to the Grid Infrastructure Home as specified in  SuperCluster Supported Versions for your hardware type

Review  SuperCluster Supported Versions for your hardware type  to identify and apply patches that must be installed on top of the Bundle Patch just installed.

Apply Customer-specific 11.2.0.4 One-Off Patches to the Grid Infrastructure Home

If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now.

Change SGA memory settings for ASM

As sysasm, adjust sga_max_size and sga_target to a value of 2G. The values will become active at the next restart of the ASM instances.

SYS@+ASM1> alter system set sga_max_size = 2G scope=spfile sid='*';
SYS@+ASM1> alter system set sga_target = 2G scope=spfile sid='*';

Verify values for memory_target and memory_max_target

Values should be as follows:

SYS@+ASM1> col value format a30
SYS@+ASM1> col name format a30
SYS@+ASM1> col sid format a5
SYS@+ASM1> select sid, name, value from v$spparameter
where name in
('memory_target','memory_max_target');

SID    NAME                      VALUE
------ ------------------------- -----------------------------------
*      memory_target             0
*      memory_max_target

When the values are not as expected, change them as follows:

SYS@+ASM1> alter system set memory_target=0 sid='*' scope=spfile;
SYS@+ASM1> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;
SYS@+ASM1> alter system reset memory_max_target sid='*' scope=spfile;

 Checks to do before executing rootupgrade.sh on Each Database Server

Before running rootupgrade.sh verify no active rebalance is running

Query gv$asm_operation to verify no active rebalance is running. A rebalance is running when the result of the following query is not equal to zero:


SYS@+ASM1> select count(*) from gv$asm_operation;

COUNT(*)
----------
0
Verify the workaround for item DB_10 in SuperCluster Critical Issues <Document 1452277.1>

 Before running rootupgrade.sh, please review SuperCluster - How to recover from a failed 11.2.0.3 to 11.2.0.4 upgrade due to HAIP (Doc ID 1996379.1) and make sure that you have made the changes from the cause section before proceeding.

Execute rootupgrade.sh on each database server. As indicated on the Execute Configuration scripts screen, the script must be executed on the local node first. The rootupgrade.sh script shuts down the earlier release installation, updates configuration details, and starts the new Oracle Clusterware installation.

When rootupgrade.sh fails, it is recommended to check the following output first to get more details:

  • output of rootupgrade.sh script itself
  • ASM alert.log
  • /u01/app/11.2.0.4/grid/cfgtoollogs/crsconfig/rootcrs_<node_name>.log

For rootupgrade.sh executions that fail, refer to <document 969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.

 

After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node.  Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
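For example, on each node in turn (run interactively as root rather than through dcli, so the output can be monitored):

(root)# /u01/app/11.2.0.4/grid/rootupgrade.sh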

 

  • First node rootupgrade.sh will complete with this output
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
  • Last node rootupgrade.sh will complete with this output
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.4.0

ASM upgrade has finished on last node.

 

NOTE: if the following error message is seen when running rootupgrade.sh, it can be ignored:

acfsutil registry: CLSU-00100: Operating System function: open64 failed with error data: 2
acfsutil registry: CLSU-00101: Operating System error message: No such file or directory
acfsutil registry: CLSU-00103: error location: OOF_1
acfsutil registry: CLSU-00104: additional error information: open64 (/dev/ofsctl)
acfsutil registry: ACFS-00502: Failed to communicate with the ACFS driver. Verify the ACFS driver has been loaded.

 

11. On Finish screen, click Close.

Perform an extra check on the status of the Grid infrastructure post upgrade by executing the following command from one of the compute nodes:

(root)# /u01/app/11.2.0.4/grid/bin/crsctl check cluster -all

The above command should show an online status for Cluster Ready Services, Cluster Synchronization Services and Event Manager on all nodes in the cluster, example as follows:

**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

If any of the components is not showing an online status on any of the nodes, the issue needs to be researched before continuing. For troubleshooting, see the MOS notes mentioned in the reference chapter.
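In addition, the active Clusterware version can be confirmed as follows (the output shown is illustrative):

(root)# /u01/app/11.2.0.4/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

(root)# /u01/app/11.2.0.4/grid/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.4.0]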

Change Custom Scripts and Environment Variables to Reference the 11.2.0.4 Grid Home

Customized administration and login scripts, static instance registrations in listener.ora files, and CRS resources that reference the Grid Infrastructure home should be updated to refer to the new Grid Infrastructure home '/u01/app/11.2.0.4/grid'.



 

Install Database 11.2.0.4 Software

The steps in this section will install the Database 11.2.0.4 software into a new directory. This section only installs Database 11.2.0.4 software. It does not affect running databases hence all the steps below can be done without downtime.

Data Guard - If there is a separate system running a standby database and that system already has the Grid Infrastructure upgraded to 11.2.0.4, then run these steps on the standby system separately to install the Database 11.2.0.4 software.  The steps in this section can be performed in any of the following ways:
  • Install Database 11.2.0.4 software on the primary system first then the standby system.
  • Install Database 11.2.0.4 software on the standby system first then the primary system.
  • Install Database 11.2.0.4 software on both the primary and standby systems simultaneously.

Here are the steps performed in this section.

  • Prepare Installation Software
  • Perform 11.2.0.4 Database Software Installation with  the Oracle Universal Installer (OUI)
  • Update OPatch in the new Database Home on All Database Servers
  • Install Latest 11.2.0.4 Bundle Patch available for your operating system - Do Not Perform Post-Installation Steps
  • When available: Apply 11.2.0.4 Bundle Patch Overlay Patches as specified in Document 888828.1
  • When available: Apply Customer-specific 11.2.0.4 One-Off Patches

Prepare Installation Software

Unzip the 11.2.0.4 database software.  Run the following command on the database server where the software is staged.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p13390677_112040_SOLARIS64_1of7.zip -d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p13390677_112040_SOLARIS64_2of7.zip -d /u01/app/oracle/patchdepot

Perform 11.2.0.4 Database Software Installation with the Oracle Universal Installer (OUI) 

Note: For OUI installations or execution of important scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.
 
Set the environment then run the installer, as follows:

(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/oracle/product/11.2.0.4/dbhome_1
(root)# dcli -g ~/dbs_group -l root chown -R oracle:oinstall /u01/app/oracle/product/11.2.0.4
(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/database
(oracle)$ ./runInstaller

The OUI installation log is located at /u01/app/oraInventory/logs.

Perform the exact actions as described below on the installer screens:

  1. On 'Configure Security Updates' screen, fill in required fields, and then click Next.
  2. On 'Download Software Updates' screen, select 'Skip software updates', and then click Next.
  3. On 'Select Installation Option' screen, select 'Install database software' only, and then click Next.
  4. On 'Grid Installation Options' screen, select 'Oracle Real Application Clusters database installation', click 'Select All'. Verify all database servers are present in the list and are selected, and then click Next.
  5. On 'Select Product Languages' screen, select languages, and then click Next.
  6. On 'Select Database Edition', select 'Enterprise Edition', click 'Select Options to choose components to install', and then click Next.
  7. On 'Specify Installation Location', enter '/u01/app/oracle/product/11.2.0.4/dbhome_1' as the Software Location for the Database home, and then click Next.
    • It is recommended that the Database home NOT be placed under /opt/oracle.
  8. On 'Privileged Operating System Groups' screen, verify group names, and then click Next.
  9. On 'Perform Prerequisite Checks' screen, resolve any failed check or warning.
  10. On 'Summary screen', verify information presented about installation, and then click Install.
  11. On 'Install Product' screen, monitor installation progress.
  12. On 'Execute Configuration Scripts' screen, execute root.sh on each database server as instructed, and then click OK
  13. On 'Finish' screen, click Close.

Update OPatch in the New Database Home on All Database Servers

After installation of the new 11.2.0.4 Database Homes, install the latest OPatch into the new home on all database servers.
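The distribution and extraction are the same as in the earlier OPatch update step, only against the new database home. Assuming the OPatch zip was already copied to /tmp on all nodes, a quick sketch:

(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /tmp/p6880880_112000_SOLARIS64.zip -d /u01/app/oracle/product/11.2.0.4/dbhome_1"
(oracle)$ dcli -g ~/dbs_group -l oracle /u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version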

Install Latest 11.2.0.4 Bundle Patch for Oracle  Exadata Database Machine to the Database Home  - Do Not Perform Post-Installation Steps

Review <Document 888828.1> for the latest release information on available Bundle Patches. At the time of writing 11.2.0.4 BP1 <patch 17628025> was used, but the same applies to any later Bundle Patch. 
The commands to install the latest 11.2.0.4 Bundle Patch are run on each database server individually, hence the Bundle Patch must be copied to all database servers.  They can be run in parallel across database servers. Applying the latest bundle patch requires the latest OPatch to be installed.
Follow steps in section 2 of the README ('2 Patch Installation and Deinstallation') but only use steps to apply the patch to the 11.2.0.4 Database homes (example: 'Case 3').
Do not perform patch post-installation.  Patch post-installation steps will be run after the database is upgraded. 
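As an illustration only (the Bundle Patch README is authoritative and the component directories differ per Bundle Patch; the 17628006 directory below simply mirrors the earlier Grid Infrastructure example), a database-home-only apply on one node might look like this:

(oracle)$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
(oracle)$ export PATH=$ORACLE_HOME/OPatch:$PATH
(oracle)$ opatch napply -oh $ORACLE_HOME -local /u01/app/oracle/patchdepot/17628025/17628006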

When available: Apply 11.2.0.4 Bundle Patch Overlay Patches to the Database Home as Specified in Document 888828.1

Review <Document 888828.1> to identify and apply patches that must be installed on top of the Bundle Patch just installed.  If there are SQL commands that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.

Apply Customer-specific 11.2.0.4 One-Off Patches

If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now.  If there are SQL commands that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.

 

Upgrade Database to 11.2.0.4

 

The commands in this section will perform the database upgrade to 11.2.0.4.

For Data Guard setups - Unless otherwise indicated, run these steps only on the primary database.

Here are the steps performed in this section.

    • Backing up the database and creating a Guaranteed Restore Point
    • Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
    • Data Guard only - Synchronize Standby and Switch to 11.2.0.4
    • Data Guard only - Disable Fast-Start Failover and Data Guard Broker
    • Before starting the database upgrade assistant stop and disable all services with PRECONNECT as option for 'TAF Policy specification'
    • Upgrade the Database with Database Upgrade Assistant (DBUA)
    • Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'
    • Change Custom Scripts and Environment Variables to Reference the 11.2.0.4 Database Home
    • Add Underscore Initialization Parameters Back
    • When available: run 11.2.0.4 Bundle Patch Post-Installation Steps
    • Data Guard only - Enable Fast-Start Failover and Data Guard Broker

The database will be inaccessible to users and applications during the upgrade (DBUA) steps. Estimated database downtime is 30-90 minutes, but actual downtime will depend on factors such as the amount of PL/SQL that needs recompilation. Note that it is not a requirement that all databases are upgraded to the latest release. It is possible to have multiple releases of Oracle Database homes running on the same system. The benefit of having multiple Oracle homes is that multiple releases of different databases can run. The disadvantages are that more planned maintenance is required in terms of patching, older database releases may lapse out of the regular patching lifecycle policy over time, and multiple Oracle homes on the same node require more disk space.

Backing up the database and creating a Guaranteed Restore Point

If not done already, a full backup of the database should be made before proceeding with the upgrade. In addition to this full backup, it is recommended to create a Guaranteed Restore Point (GRP). As long as the database COMPATIBLE parameter is not changed after creating the GRP, the database can be flashed back after a (failed) upgrade. In order to create a GRP the database must be in archivelog mode. The GRP can be created while the database is in OPEN mode, as follows:

SYS@PRIM1> CREATE RESTORE POINT grpt_bf_upgr GUARANTEE FLASHBACK DATABASE; 

After creating the GRP, verify status as follows:

SYS@PRIM1> SELECT * FROM V$RESTORE_POINT where name = 'GRPT_BF_UPGR';

    

NOTE: After a successful upgrade the GRP should be deleted.

 

Analyze the Database to Upgrade with the Pre-Upgrade Information Tool

The pre-upgrade information tool is provided with the 11.2.0.4 software.  It is also provided standalone as an attachment to <document 884522.1>.  Run this tool to analyze the 11.2.0.3 database prior to upgrade.

Run Pre-Upgrade Information Tool

At this point the database is still running with 11.2.0.3 software. Connect to the database with your environment set to 11.2.0.3 and run the pre-upgrade information tool that is located in the 11.2.0.4 database home, as follows:
SYS@PRIM1> spool preupgrade_info.log
SYS@PRIM1> @/u01/app/oracle/product/11.2.0.4/dbhome_1/rdbms/admin/utlu112i.sql

During the pre-upgrade steps, the pre-upgrade tool (preupgrd.sql) will warn to set the CLUSTER_DATABASE parameter to FALSE. However when using DBUA this is done automatically so the warning can be ignored.

Handle obsolete and underscore parameters

Obsolete and underscore parameters will be identified by the pre-upgrade information tool.  During the upgrade, DBUA will remove the obsolete and underscore parameters from the primary database initialization parameter file.  Some underscore parameters that DBUA removes will be added back in later after DBUA completes the upgrade.

Data Guard only - DBUA will not affect parameters set on the standby, hence obsolete parameters and some underscore parameters must be removed manually if set. Typical values that need to be unset before starting the upgrade are as follows:
SYS@STBY1> alter system reset cell_partition_large_extents scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufsz" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufcnt" scope=spfile;
SYS@STBY1> alter system reset "_lm_rcvr_hang_allow_time" scope=spfile;
SYS@STBY1> alter system reset "_kill_diagnostics_timeout" scope=spfile;
SYS@STBY1> alter system reset "_arch_comp_dbg_scan" scope=spfile;

Review pre-upgrade information tool output

Review the remaining output of the pre-upgrade information tool.  Take action on areas identified in the output.

Data Guard only - Synchronize Standby and Change the Standby Database to use the new 11.2.0.4 Database Home

Perform these steps only if there is a physical standby database associated with the database being upgraded.

As indicated in the prerequisites section above, the following must be true:
  • The standby database is running in real-time apply mode.
  • The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.

Flush all redo generated on the primary and disable the broker

To ensure all redo generated by the primary database running 11.2.0.3 is applied to the standby database running 11.2.0.3, all redo must be flushed from the primary to the standby.
First, verify the standby database is running recovery in real-time apply.  Run the following query connected to the standby database.  If this query returns no rows, then real-time apply is not running. Example as follows:
SYS@STBY1> select dest_name from v$archive_dest_status
  where recovery_mode = 'MANAGED REAL TIME APPLY';

DEST_NAME
------------------------------
LOG_ARCHIVE_DEST_2
Shutdown the primary database and restart just one instance in mount mode, as follows:
(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start instance -d PRIM -n dm01db01 -o mount
Verify the primary database has specified db_unique_name of the standby database in the log_archive_dest_n parameter setting, as follows:
SYS@PRIM1> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
-------------------------------------------------------------------------------
service="gih_stby" LGWR SYNC AFFIRM delay=0 optional compression=disable max_fa
ilure=0 max_connections=1 reopen=300 db_unique_name="STBY" net_timeout=30 valid
_for=(all_logfiles,primary_role)
 

Data Guard only - Disable Fast-Start Failover and Data Guard Broker

Disable Data Guard broker if it is configured, because Data Guard broker is incompatible with the primary and standby running different releases. If fast-start failover is configured, it must be disabled before the broker configuration is disabled, as follows:

DGMGRL> disable fast_start failover;
DGMGRL> disable configuration;

Also, disable the init.ora setting dg_broker_start in both primary and standby as follows:

SYS@PRIM1> alter system set dg_broker_start = false;
SYS@STBY1> alter system set dg_broker_start = false;

Flush all redo to the standby database using the following command. The standby database db_unique_name in this example is 'STBY'. Monitor the alert.log of the standby database to verify the 'End-of-Redo' message appears. Example as follows:
SYS@PRIM1> alter system flush redo to 'STBY';

Shutdown the primary database.

Wait until the 'End-of-Redo' on the standby is confirmed, as follows:


End-Of-REDO archived log file has not been recovered
Incomplete recovery SCN:0:1371457 archive SCN:0:1391461
Physical Standby did not apply all the redo from the primary.
Tue May 22 13:03:51 2013
Media Recovery Log +RECO/prim/archivelog/2011_11_22/thread_2_seq_39.1090.767883831
Identified End-Of-Redo (move redo) for thread 2 sequence 39 at SCN 0x0.153b65
Resetting standby activation ID 338172592 (0x14281ab0)
Media Recovery Waiting for thread 2 sequence 40
Tue May 22 13:03:52 2013
Standby switchover readiness check: Checking whether recoveryapplied all redo..
Physical Standby applied all the redo from the primary.
Standby switchover readiness check: Checking whether recoveryapplied all redo..
Physical Standby applied all the redo from the primary.



Then shutdown the primary database, as follows:

(oracle)$ srvctl stop database -d PRIM -o immediate

Shutdown the standby database and restart it in an 11.2.0.4 database home

Perform the following steps on the standby database server:

Shutdown the standby database, as follows:
(oracle)$ srvctl stop database -d stby
Copy required files from 11.2.0.3 home to the 11.2.0.4 database home.
The following example shows the copying of the password file, but other files, such as init.ora files, may also need to be copied:
(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp /u01/app/oracle/product/11.2.0.3/dbhome_1/dbs/orapwstby* \
          /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs'
Edit standby environment files
  • Edit the standby database entry in /var/opt/oracle/oratab  to point to the new 11.2.0.4 home.
  • On both the primary and standby database servers, ensure the tnsnames.ora entries are available to the database after it has been upgraded.  If using the default location for tnsnames.ora, $ORACLE_HOME/network/admin, then copy tnsnames.ora from the old home to the new home.
  • If using Data Guard Broker to manage the configuration, then modify the broker-required SID_LIST entry in listener.ora on all nodes to point to the new ORACLE_HOME. For example:
SID_LIST_LISTENER =
(SID_LIST =
  (SID_DESC =
  (GLOBAL_DBNAME=PRIM_dgmgrl)
  (SID_NAME = PRIM1)
  (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_1)
  )
)
  • After this reload the listener on all nodes, as follows:
(oracle)$ lsnrctl reload listener
Update the OCR configuration data for the standby database by running the 'srvctl upgrade' command from the new database home as follows.
(oracle)$ srvctl upgrade database -d stby -o /u01/app/oracle/product/11.2.0.4/dbhome_1
Start the standby, as follows (add "-o mount" option for databases running Active Data Guard):
(oracle)$ srvctl start database -d stby

Start all primary instances in restricted mode

DBUA requires all RAC instances to be running from the current database home before starting the upgrade. To prevent an application from accidentally connecting to the primary database and performing work that would cause the standby to fall behind, start the primary database in restricted mode, as follows:
(oracle)$ srvctl start database -d PRIM -o restrict
 

Upgrade the Database with Database Upgrade Assistant (DBUA)

NOTE: Before starting Database Upgrade Assistant, any services configured with PRECONNECT as the option for 'TAF Policy specification' on databases that will be upgraded must be stopped and disabled. Once a database upgrade is completed, the services can be enabled and brought online again. Not disabling services with the PRECONNECT option for 'TAF Policy specification' will cause the upgrade to fail.

For each database being upgraded use the srvctl command to determine if a 'TAF policy specification' with 'PRECONNECT' is defined. Example as follows:

(oracle)$ srvctl config service -d <db_unique_name> | grep -i preconnect | wc -l
 

For each database being upgraded the output of the above command should be 0. When the output of the above command is not equal to 0, find the specific service(s) for which PRECONNECT is defined. Example as follows:
(oracle)$ srvctl config service -d <db_unique_name> -s <service_name>
 
Those services found need to be stopped and disabled before proceeding with the upgrade. Example as follows:

(oracle)$ srvctl stop service -d <db_unique_name> -s "<service_name_list>"
(oracle)$ srvctl disable service -d <db_unique_name> -s "<service_name_list>"
 

Reference bug: 16539215
 
Run DBUA to upgrade the primary database.  All database instances of the database you are upgrading must be brought up or DBUA may hang. If there is a standby database, the primary database should be left running in restricted mode, as performed in the previous step.

Oracle recommends removing the value for the init.ora parameter 'listener_networks' before starting DBUA. The value will be restored after running DBUA. Be sure to record the original value before removing it, as follows:

SYS@PRIM1> set lines 200
SYS@PRIM1> select name, value from v$parameter where name='listener_networks';

If the value for parameter listener_networks was set, then the value needs to be removed as follows:

SYS@PRIM1> alter system set listener_networks='' sid='*' scope=both;
 
Run DBUA from the new 11.2.0.4 ORACLE_HOME as follows:
(oracle)$ /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/dbua

  
NOTE: When running multiple DBUA sessions for multiple databases at the same time, make sure each next DBUA session is started no less than 5 minutes after DBUA for the current session has reached 'Upgrading Oracle Server' stage. For example DBUA session #2 should be started no less than 5 minutes after DBUA session #1 has reached 'Upgrading Oracle Server' stage.
 

Perform these mandatory actions on the DBUA screens:
  1. On 'Welcome screen', click Next.
  2. On 'Select Database' screen, select the database to be upgraded, and then click Next.
    • Enter a local instance name if requested.
  3. On 'Upgrade Options' screen, select the desired options, and then click Next.
    • If you have a standby database, then do NOT select to turn off archiving during the upgrade.
  4. On 'Recovery and Diagnostic Locations' screen, click Next.
  5. On 'Management Options' screen, select the option of choice
  6. On 'Summary' screen, verify information presented about the database upgrade, and then click Finish.
  7. On 'Progress' screen, when the upgrade is complete, click OK.
  8. On 'Upgrade Results' screen, review the upgrade result and investigate any failures, and then click Close.
The database upgrade to 11.2.0.4 is complete.  There are additional actions to perform to complete configuration.

Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'

The Oracle Upgrade Guide documents required and recommended tasks to perform after upgrading to 11.2.0.4.  Since the database was upgraded from 11.2.0.3, some tasks do not apply.  The following list is the minimum set of tasks that should be reviewed for your environment:
  • Update Environment Variables
  • Upgrade the Recovery Catalog
  • Upgrade the Time Zone File Version if not already done earlier (a quick version check is shown after this list).
  • For upgrades done by DBUA, tnsnames.ora entries for that particular database will be updated in the tnsnames.ora in the new home. However, entries not related to a database upgrade, or entries related to a standby database, will not be updated by DBUA; these entries need to be synchronised manually. IFILE directives used in tnsnames.ora, for example in the grid home, need to be updated to point to the new database home.
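For example, the current time zone file version can be checked as follows (the values shown are illustrative):

SYS@PRIM1> select * from v$timezone_file;

FILENAME                VERSION
-------------------- ----------
timezlrg_14.dat              14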

Change Custom Scripts and Environment Variables to Reference the 11.2.0.4 Database Home

The primary database is upgraded and is now running from the 11.2.0.4 database home. Customized administration and login scripts that reference database home ORACLE_HOME should be updated to refer to /u01/app/oracle/product/11.2.0.4/dbhome_1.

Underscore Initialization Parameters

During the upgrade, DBUA removes obsolete and underscore initialization parameters. Underscore parameters that were previously set need to be checked and added back.
Run the following command to verify this parameter:
SYS@PRIM1> select distinct(value) from gv$parameter where name = '_file_size_increase_increment';

VALUE
--------------------------------------------------------------------------------
2143289344
If the value for "_file_size_increase_increment" is missing or not set to the expected value of 2143289344, set it to the right value. Example as follows:
  
SYS@PRIM1> alter system set "_file_size_increase_increment"=2143289344 sid='*' scope=both;
For all node types: values for "_kill_diagnostics_timeout" and "_lm_rcvr_hang_allow_time" should not exist after the upgrade. Run the following command to verify this:
SYS@PRIM1> select distinct(name), value from gv$parameter
where name in ('_kill_diagnostics_timeout','_lm_rcvr_hang_allow_time');

NAME                        VALUE
--------------------------------------------------------------------------------
_lm_rcvr_hang_allow_time    140
_kill_diagnostics_timeout   140
 
The values need to be removed if they still exist. Example as follows:
SYS@PRIM1> alter system reset "_kill_diagnostics_timeout" sid='*' scope=spfile;
SYS@PRIM1> alter system reset "_lm_rcvr_hang_allow_time" sid='*' scope=spfile;
 

The value for the init.ora parameter 'listener_networks' that was removed before the upgrade needs to be restored now, as follows:

SYS@PRIM1> alter system set listener_networks='<original value>' sid='*' scope=both;


Data Guard only - DBUA does not modify parameters set on the standby, so previously set underscore parameters remain in place there. However, because the values were reset in a previous step, they need to be restored now. For standard installations, only the following underscore parameter needs to be added back for a standby database.

SYS@STBY1> alter system set "_file_size_increase_increment"=2143289344 sid='*' scope=both;

For any parameter set in the spfile only, be sure to restart the databases to make the settings effective.
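
For example (illustrative sketch; the database name PRIM is an assumption), a database can be restarted with srvctl:
(oracle)$ srvctl stop database -d PRIM
(oracle)$ srvctl start database -d PRIM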

Perform 11.2.0.4 Bundle Patch for Oracle Exadata Database Machine Post-Installation Steps

If a Bundle Patch for Oracle Exadata Database Machine was installed in the 11.2.0.4 database home before the database was upgraded, then the post-installation steps that require running SQL against each database need to be performed. Perform the patch post-installation steps documented in the patch README on one database server only.  Review the patch README for the most up-to-date and exact details.  The steps below are those required for BP1.
Run catbundle.sql to load the required bundle patch SQL, as follows.
(oracle)$ cd $ORACLE_HOME
SYS@PRIM1> @?/rdbms/admin/catbundle.sql exa apply

Navigate to the <ORACLE_HOME>/cfgtoollogs/catbundle directory (if ORACLE_BASE is defined, the logs are created under <ORACLE_BASE>/cfgtoollogs/catbundle) and check the following log files for errors, for example with "grep ^ORA <logfile> | sort -u". If there are errors, then refer to Section 3 of the README, "Known Issues". The format of <TIMESTAMP> is YYYYMMMDD_HH_MM_SS.
catbundle_EXA_<database SID>_APPLY_<TIMESTAMP>.log
catbundle_EXA_<database SID>_GENERATE_<TIMESTAMP>.log
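
For example, the error check can be run as follows (assuming ORACLE_BASE is defined; the wildcards stand in for the database SID and timestamp):
(oracle)$ cd $ORACLE_BASE/cfgtoollogs/catbundle
(oracle)$ grep ^ORA catbundle_EXA_*_APPLY_*.log | sort -u
(oracle)$ grep ^ORA catbundle_EXA_*_GENERATE_*.log | sort -u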

 

NOTE: Be sure to check that all objects are valid after running the catbundle script. If invalid objects are found, run utlrp.sql until a query for invalid objects returns no rows (see the example below).
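
For example, invalid objects can be recompiled and then checked with the standard utlrp.sql script and the dba_objects view (output will vary):
SYS@PRIM1> @?/rdbms/admin/utlrp.sql
SYS@PRIM1> select owner, object_name, object_type from dba_objects where status = 'INVALID';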

Data Guard only - Enable Fast-Start Failover and Data Guard Broker


Update the static listener entry in the listener.ora file on all nodes where a standby instance can run so that it reflects the new ORACLE_HOME used, as follows:

SID_LIST_LISTENER =
   (SID_LIST =
      (SID_DESC =
         (GLOBAL_DBNAME = STBY_DGMGRL)
         (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_1)
         (SID_NAME = STBY1)
      )
   )
                   
If Data Guard broker and fast-start failover were disabled in a previous step, then re-enable them from SQL*Plus and DGMGRL, as follows:

SYS@PRIM1> alter system set dg_broker_start = true sid='*';
SYS@STBY1> alter system set dg_broker_start = true sid='*';
DGMGRL> enable configuration
DGMGRL> enable fast_start failover
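
As a quick verification sketch, the broker configuration and fast-start failover state can then be reviewed in DGMGRL:
DGMGRL> show configuration
DGMGRL> show fast_start failover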

 

Post-upgrade Steps

      Here are the steps performed in this section.
      • Remove the Guaranteed Restore Point if it still exists
      • DBFS only - Perform DBFS Required Updates
      • Run Exachk or HealthCheck
      • Re-configure the Enterprise Manager Cloud Control 12c targets in the EM Console to use the new Oracle Homes
      • Deinstall the 11.2.0.3 Database and Grid Homes
      • Re-configure RMAN Media Management Library

Remove Guaranteed Restore Point 

If the upgrade has been successful and a Guaranteed Restore Point (GRP) was created, it should be removed now as follows:

SYS@PRIM1> DROP RESTORE POINT GRPT_BF_UPGR;
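
If unsure whether the restore point still exists, it can be checked first with the standard v$restore_point view:
SYS@PRIM1> select name, guarantee_flashback_database from v$restore_point;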

DBFS only - Perform DBFS Required Updates

When the DBFS database is upgraded to 11.2.0.4 the following additional actions are required:

Obtain latest mount-dbfs.sh script from Document 1054431.1

Download the latest mount-dbfs.sh script attached to <Document 1054431.1>, place it in a (new) directory and update the CRS resource:
(oracle)$ dcli -l oracle -g ~/dbs_group mkdir -p /home/oracle/dbfs/scripts
(oracle)$ dcli -l oracle -g ~/dbs_group -f /u01/app/oracle/patchdepot/mount-dbfs.sh -d /home/oracle/dbfs/scripts
(oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=/home/oracle/dbfs/scripts/mount-dbfs.sh"
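
The updated attribute can be verified afterwards, for example:
(oracle)$ crsctl status resource dbfs_mount -f | grep ACTION_SCRIPT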

Edit mount-dbfs.sh script and Oracle Net files for the new 11.2.0.4 environment

Using the variable settings from the original mount-dbfs.sh script, edit the variable settings in the new mount-dbfs.sh script to match your environment.  The setting for variable ORACLE_HOME must be changed to match the 11.2.0.4 ORACLE_HOME (/u01/app/oracle/product/11.2.0.4/dbhome_1). 
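
As an illustrative sketch only (the DBFS database name fsdb and the variable names shown are assumptions based on the script's conventions; keep the values from your original script), the edited variables typically look like this:
# mount-dbfs.sh variable settings (example values)
DBNAME=fsdb                                               # example; keep your original value
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1     # must point to the new 11.2.0.4 home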

Edit tnsnames.ora used for DBFS to change the directory referenced for the parameters PROGRAM and ORACLE_HOME to the new 11.2.0.4 database home.
fsdb.local =
(DESCRIPTION =
   (ADDRESS =
     (PROTOCOL=BEQ)
     (PROGRAM=/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle)
     (ARGV0=oraclefsdb1)
     (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
     (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1,ORACLE_SID=fsdb1')
   )
   (CONNECT_DATA=(SID=fsdb1))
)

If the location of Oracle Net files changed as a result of the upgrade, then change the setting of TNS_ADMIN in shell scripts and login files.
If using wallet-based authentication, recreate the symbolic link to /sbin/mount.dbfs. If you are using the Oracle Wallet to store the DBFS password, then run the following commands:
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/dbfs_client /sbin/mount.dbfs
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
(root)# dcli -l root -g ~/dbs_group ldconfig

Run Exachk

Run Exachk again to validate software, hardware, firmware, and configuration best practices after the upgrade.
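
As a minimal sketch (the staging directory is an example; run from wherever exachk is installed), exachk is started as the oracle user:
(oracle)$ cd /opt/oracle.SupportTools/exachk    # example location
(oracle)$ ./exachk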
Optional: Deinstall the  11.2.0.3 Database and Grid Homes

After the upgrade is complete and the database and application have been validated and in use for some time, the 11.2.0.3 database and grid homes can be removed using the deinstall tool.  Run these commands on the first database server.  The deinstall tool will perform the deinstallation on all database servers.  Refer to the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Solaris for additional details on the deinstall tool.

Before running the deinstall tool to remove the old database and grid homes, run deinstall with the -checkonly option to verify the actions it will perform.  Ensure the following:
  • There are no databases configured to use the home.
  • The home is not a configured Grid Infrastructure home.
  • ASM is not detected in the Oracle Home.
To deinstall the Database and Grid Infrastructure homes, example steps are as follows:
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_SOLARIS64_7of7.zip -d /u01/app/oracle/patchdepot

(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/oracle/product/11.2.0/dbhome_1/
(oracle)$ ./deinstall -home /u01/app/oracle/product/11.2.0/dbhome_1/

(root)# dcli -l root -g ~/dbs_group chmod -R 755 /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown -R oracle /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown oracle /u01/app/11.2.0

(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/11.2.0/grid/
(oracle)$ ./deinstall -home /u01/app/11.2.0/grid/
When not immediately deinstalling the previous Grid Infrastructure, rename the old Grid Home directory on all nodes so that operators cannot mistakenly execute crsctl commands from the wrong Grid Infrastructure home (see the example below).
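
For example (the .old suffix is an illustrative choice), the old grid home used earlier in this procedure can be renamed on all nodes with dcli:
(root)# dcli -l root -g ~/dbs_group mv /u01/app/11.2.0/grid /u01/app/11.2.0/grid.old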

Re-configure RMAN Media Management Library


Database installations that use an RMAN Media Management Library (MML) may require re-configuration of the Oracle Database home after the upgrade. Most often, recreating the symbolic link to the vendor-provided MML is sufficient.
For specific details, see the MML documentation.
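
As an illustrative sketch only (the vendor library path is a placeholder; libobk.so is the conventional MML library name), the link is typically recreated in the new home as follows:
(oracle)$ ln -sf /opt/<vendor>/lib/libobk.so /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libobk.so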

 


 

Troubleshooting

 

Revision History

 

 Date          Change
 Aug 1 2014    Cloned and Edited Exadata procedures for SuperCluster.

 

Appendix

Oracle Documentation  

Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 11.2.0.4 Linux and Solaris x86-64 

 Database Patchset and Bundle Patch    Required Patch 14639430 For SPARC
 11.2.0.3 BP20 onwards                 No action
 11.2.0.3 BP8 - BP19                   Request patch via Oracle Support if you opt to roll back

Patch matrix for minimal required patches on top of 11.2.0.4 

Patches to be applied before running rootupgrade / upgrading the database: Latest Exadata Bundle Patch


References

<NOTE:1602809.1> - 11gR2: Clusterware Oraagent doesn't set the local_listener when listener_networks is set in the spfile.
<NOTE:1410202.1> - How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed?
<NOTE:1281913.1> - root Script (root.sh or rootupgrade.sh) Fails if ORACLE_BASE is set to /opt/oracle
<NOTE:1567979.1> - Oracle SuperCluster Supported Software Versions - All Hardware Types
<NOTE:1452277.1> - SuperCluster Critical Issues
<NOTE:1270094.1> - Exadata Critical Issues
<NOTE:1284070.1> - Updating key software components on database hosts to match those on the cells

Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.