

Solution 1681467.1: 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Linux


Related Items
  • Exadata X4-2 Hardware
  • Exadata X3-2 Hardware
  • Oracle Exadata Hardware
  • Exadata X3-8 Hardware
  • Oracle Exadata Storage Server Software
  • Exadata Database Machine X2-2 Hardware
  • Exadata Database Machine V2
Related Categories
  • PLA-Support>Eng Systems>Exadata/ODA/SSC>Oracle Exadata>DB: Exadata_EST




In this Document
Purpose
Details
 Oracle Exadata Database Machine Maintenance
 Overview
 Conventions
 Assumptions
 References
 Oracle Documentation
 My Oracle Support Documents
 Prepare the Existing Environment
 Planning
 Testing on non-production first
 SQL Plan Management
 Recoverability
 Account Access
 Review 12.1.0.2 Upgrade Prerequisites
 Sun Datacenter InfiniBand Switch 36 is running software release 1.3.3-2 or later, 2.1.3-4 recommended
 Grid Infrastructure Software
 Database Software
 Generic requirements
 Exadata Storage Server software recommended and minimum release 
 Do not place the new ORACLE_HOME under /opt/oracle.
 Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:
 Download Required Files
 Patch matrix for 12539000 - Required Patches when upgrading from 11.2.0.2 
 Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 12.1.0.2 to 11.2.0.2 
 Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 12.1.0.2 to 11.2.0.3 
 Patch matrix for 13460353 - Optional patch for those who will be creating 11.2.0.2 databases while running a 12.1.0.2 Grid infrastructure
 Patch matrix for 13460353 - Optional patch for those who will be creating 11.2.0.3 databases while running a 12.1.0.2 Grid infrastructure
 Apply patches / updates where required before upgrading proceeds
 Update OPatch in existing 11.2 and 12.1 Grid Home and Database Homes on All Database Servers 
 For Exadata Storage Servers and Database Servers on releases earlier than Exadata 11.2.3.3.1
 For 11.2.0.2 Grid Infrastructure and Database:
 For 11.2.0.3 Grid Infrastructure and Database:
 Run Exachk or HealthCheck
 Validate Readiness for Oracle Clusterware upgrade using CVU and Exachk
 Early stage pre-upgrade check: analyze your databases to be upgraded with the Pre-Upgrade Information Tool
 Install and Upgrade Grid Infrastructure to 12.1.0.2
 Install MGMTDB or not
 Validate hugepage configuration for new ASM SGA requirements
 Create a snapshot based backup of the database server partitions
 Create the new Grid Infrastructure (GI_HOME) directory where 12.1.0.2 will be installed
 Prepare installation software
 Change css misscount setting back to default before upgrading
 Perform the 12.1.0.2 Grid Infrastructure software installation and upgrade using OUI
 If available: Apply recommended patches to the Grid Infrastructure before running rootupgrade.sh
 Apply Customer-specific 12.1.0.2 One-Off Patches to the Grid Infrastructure Home
 When required relink oracle binary with RDS
 Verify values for memory_target, memory_max_target and use_large_pages
 Actions to take before executing rootupgrade.sh on each database server
 Execute rootupgrade.sh on each database server
 Continue with 12.1.0.2 GI installation in OUI
 Verify cluster status
 Change Custom Scripts and environment variables to Reference the 12.1.0.2 Grid Home
 Modify the dbfs_mount cluster resource
 Install Database 12.1.0.2 Software
 Prepare Installation Software
 Create the new Oracle DB Home directory on all database server nodes
 Perform 12.1.0.2 Database Software Installation with the Oracle Universal Installer (OUI)
 When required relink Oracle Executable in Database Home with RDS
 If necessary: Install Latest OPatch 12.1 in the Database Home on All Database Servers
 When available: Install the latest 12.1.0.2 GI PSU (which includes the DB PSU) to the Database Home when available - Do Not Perform Post-Installation Steps
 Stage the patch
 Create OCM response file if required
 Patch 12.1.0.2 database home
 Skip patch post-installation steps
 When available: Apply 12.1.0.2 Patch Overlay Patches to the Database Home as Specified in Document 888828.1
 Apply Customer-specific 12.1.0.2 One-Off Patches
 Upgrade Database to 12.1.0.2
 Backing up the database and creating a Guaranteed Restore Point
 Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
 Run Pre-Upgrade Information Tool
 Handle obsolete and underscore parameters
 Review pre-upgrade information tool output
 Data Guard only - Synchronize Standby and Change the Standby Database to use the new 12.1.0.2 Database Home
 Flush all redo generated on the primary and disable the broker
 Data Guard only - Disable Fast-Start Failover and Data Guard Broker
 Shutdown the primary database.
 Shutdown the standby database and restart it in the 12.1.0.2 database home
 Start all primary instances in restricted mode
 Upgrade the Database with Database Upgrade Assistant (DBUA)
 Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'
 Change Custom Scripts and environment variables to Reference the 12.1.0.2 Database Home
 Underscore Initialization Parameters
 When required: run PSU Post-Installation Steps
 Data Guard only - Enable Fast-Start Failover and Data Guard Broker
 Post-upgrade Steps
 Remove Guaranteed Restore Point 
 DBFS only - Perform DBFS Required Updates
 Obtain latest mount-dbfs.sh script from Document 1054431.1
 Edit mount-dbfs.sh script and Oracle Net files for the new 12.1.0.2 environment
 Run Exachk or HealthCheck
 Verify nproc setting (CRS_LIMIT_NPROC) for Grid Infrastructure
 Optional: Deinstall the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 Database and Grid Homes
 Re-configure RMAN Media Management Library
 Restore settings for concurrent statistics gathering
 Advance COMPATIBLE.ASM diskgroup attribute 
 Troubleshooting
 Revision History
References


Applies to:

Oracle Exadata Hardware - Version 11.2.1.2.1 and later
Exadata X3-8 Hardware - Version All Versions and later
Exadata Database Machine X2-2 Hardware - Version All Versions and later
Exadata X4-2 Hardware - Version All Versions and later
Oracle Exadata Storage Server Software - Version 11.2.1.2.0 and later
Information in this document applies to any platform.

Purpose

This document provides step-by-step instructions for upgrading Oracle Database and Oracle Grid Infrastructure from release 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 on Oracle Exadata Database Machine running Oracle Linux.

This document may also be used in conjunction with <Document 1908556.1> for upgrade of Oracle Database and Oracle Grid Infrastructure to 12.1.0.2 on Oracle Exadata Database Machine running Oracle Solaris x86-64 and Oracle SuperCluster running Oracle Solaris SPARC.  <Document 1908556.1> contains Solaris-specific requirements, recommendations, guidelines, and workarounds that pertain to upgrade of Oracle Database and Oracle Grid Infrastructure to 12.1.0.2.

Details

Oracle Exadata Database Machine Maintenance

11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Upgrade for Oracle Linux

Overview

This document provides step-by-step instructions for upgrading Oracle Database and Oracle Grid Infrastructure from release 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 on Oracle Exadata Database Machine. Updates and additional patches may be required for your existing installation before upgrading to Oracle Database 12.1.0.2 (12c) and Oracle Grid Infrastructure 12.1.0.2 (12c).  The note box below provides a summary of the software requirements to upgrade.
Summary of software requirements to upgrade to Oracle Database 12c and Oracle Grid Infrastructure 12c
  1. Current Oracle Database and Grid Infrastructure version must be 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1.  Upgrades from 11.2.0.1 directly to 12.1.0.2 are not supported.
  2. For full Exadata functionality including 'Smart Scan offloaded filtering', 'storage indexes' and 'I/O Resource Management' (IORM), Exadata Storage Server version 12.1.1.1.0 is required and 12.1.1.1.1 is recommended for Oracle Database Release 12.1.0.2. 
    • Those not able to upgrade to Exadata 12.1.1.1.1 require a minimum version of 11.2.3.3.1 or later on Exadata Storage Servers and Database Servers.
    • See <document 1537407.1> for restrictions when running Oracle Database 12c in combination with Exadata releases earlier than 12.1.1.1.0
  3. Fix for bug 12539000 is required to successfully upgrade. 11.2.0.2 BP12 and later, 11.2.0.3, 11.2.0.4 and 12.1.0.1 already contain this fix. An interim patch must be installed for 11.2.0.2 Grid Infrastructure and Database installations running BP11 or earlier.
  4. Fix for bug 14639430 is required to properly rollback Grid Infrastructure upgrade, if necessary (not required when Grid Infrastructure already is on 11.2.0.4 or 12.1.0.1).
  5. Fix for bug 13460353 is required to create a new 11g database after Grid Infrastructure is upgraded to 12c. (not required when database already is on 11.2.0.4 or 12.1.0.1).
  6. When available: GI PSU 12.1.0.2.1 or later (which includes DB PSU 12.1.0.2.1). To be applied:
    • during the upgrade process, before running rootupgrade.sh on the Grid Infrastructure home, or
    • after installing the new Database home, before upgrading the database.
  7. Grid Infrastructure upgrades on nodes with different length hostnames in the same cluster require fix for bug 19453778 - CTSSD FAILED TO START WHILE RUNNING ROOTUPGRADE.SH. Contact Oracle Support to obtain the patch.
Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific software requirements and recommendations.
NOTE: Do not take action yet to meet these requirements.  Follow the detailed steps later in this document.


There are six main sections to the upgrade:

Section Overview
Prepare the Existing Environment
The software releases and patches installed in the current environment must be at certain minimum levels before the upgrade to 12.1.0.2 can begin. Depending on the existing software installed, updates performed during this section may be performed in a rolling manner or may require database-wide downtime. This section also covers recommendations for storing baseline execution plans, and advice on ensuring database restores are possible in case a rollback is needed. The preparation phase details the required patches and where to download and stage them.
Install and Upgrade Grid Infrastructure to 12.1.0.2 Grid Infrastructure upgrades from 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 are always performed out of place and in a RAC rolling manner.
Install Database 12.1.0.2 Software Database 12.1.0.2 software installation is performed into a new ORACLE_HOME directory. The installation is performed with no impact to running applications.
Upgrade Database to 12.1.0.2
Database upgrades from 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 require database-wide downtime.

Rolling upgrade with (Transient) Logical Standby or Oracle GoldenGate may be used to reduce database downtime. Rolling upgrade with (Transient) Logical Standby or Oracle GoldenGate is not covered in this document. For details on a transient logical rolling upgrade process see <Document 949322.1> Oracle11g Data Guard: Database Rolling Upgrade Shell Script.
Post-upgrade steps
Includes suggestions on required and optional next steps to perform following the upgrade, such as updating DBFS, performing a general health check, re-configuring for Cloud Control, and cleaning up the old, unused home areas. Instructions to re-configure Cloud Control are out of scope for this document.
Troubleshooting
Links to helpful troubleshooting documents

 

Conventions

  • The steps documented apply to 11.2.0.2, 11.2.0.3, 11.2.0.4 and 12.1.0.1 upgrades to 12.1.0.2 unless specified otherwise
  • New database home will be /u01/app/oracle/product/12.1.0.2/dbhome_1
  • New grid home will be /u01/app/12.1.0.2/grid
  • For recommended patches on top of 12.1.0.2 <Document 888828.1> needs to be consulted.

Assumptions

  • The database and grid software owner is oracle.
  • The Oracle inventory group is oinstall.
  • The files ~oracle/dbs_group and ~root/dbs_group exist and contain the names of all database servers (see the example after this list).
  • Current database home is /u01/app/oracle/product/11.2.0/dbhome_1; this can be either an 11.2.0.2, 11.2.0.3, 11.2.0.4 or a 12.1.0.1 database home
  • Current Grid Infrastructure home can be either an 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 Grid Infrastructure home
  • The primary database to be upgraded is named PRIM.
  • The standby database associated with primary database PRIM is named STBY.
  • In addition to the Exadata-specific steps mentioned in this document, the user takes care of site-specific database upgrade steps.
  • All Exachk recommended best practices, for example memory management (huge pages) and interconnect settings (not using HAIP), are implemented prior to the beginning of the upgrade.
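
As an illustration, a minimal sketch of what such a group file might contain (the hostnames dbnode1 and dbnode2 are hypothetical):

(oracle)$ cat ~/dbs_group
dbnode1
dbnode2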

References

Oracle Documentation

My Oracle Support Documents

      • <Document 888828.1> - Database Machine and Exadata Storage Server Supported Releases
      • <Document 1537407.1> - Requirements and restrictions when using Oracle Database 12c on Exadata Database Machine
      • <Document 1270094.1> - Exadata Critical Issues
      • <Document 1070954.1> - Oracle Exadata Database Machine exachk or HealthCheck
      • <Document 1054431.1> - Configuring DBFS on Oracle Database Machine
      • <Document 361468.1> - HugePages on Oracle Linux 64-bit
      • <Document 1284070.1> - Updating key software components on database hosts to match those on the cells
      • <Document 1281913.1> - Root Script Fails if ORACLE_BASE is set to /opt/oracle
      • <Document 1050908.1> - Troubleshoot Grid Infrastructure Startup Issues
      • <Document 1410202.1> - How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed
      • <Document 1520299.1> - Master Note For Oracle 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment
      • <Document 1462240.1> - Oracle 12cR1 Upgrade Companion
      • <Document 1515747.1> - Oracle Database 12c Release 1 (12.1) Upgrade New Features
      • <Document 1503653.1> - Complete Checklist for Manual Upgrades to 12c R1
      • <Document 1509653.1> - Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST
      • <Document 1493645.1> - 12c Release1 DBUA : Understanding New Changes With All New 12.1 DBUA
      • <Document 556610.1> - Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)
      • <Document 1683799.1> - 12.1.0.2 Patch Set - Availability and Known Issues

 

Prepare the Existing Environment

Here are the steps performed in this section.

  • Planning 
  • Review Database 12.1.0.2 Upgrade Prerequisites
  • Download and distribution of the required Files
  • Application of required patches
  • Run Exachk or HealthCheck (V1)
  • Validate Readiness for Oracle Clusterware upgrade

Planning

In relation to planning, the following items are recommended:

Testing on non-production first

Upgrades or patches should always be applied first on test environments. Testing on non-production environments allows operators to become familiar with the patching steps and learn how the patching will impact system and applications. You need a series of carefully designed tests to validate all stages of the upgrade process. Executed rigorously and completed successfully, these tests ensure that the process of upgrading the production database is well understood, predictable, and successful. Perform as much testing as possible before upgrading the production database. Do not underestimate the importance of a complete and repeatable testing process. The types of tests to perform are the same whether you use Real Application Testing features like Database Replay or SQL Performance Analyzer, or perform testing manually.

There is an estimated downtime of 30-90 minutes for the database upgrade. This varies based on factors such as the amount of PL/SQL that requires recompilation. Additional downtime may be required for post-upgrade steps. 

Resource management plans are expected to be persistent after the upgrade.

SQL Plan Management

SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information. SQL plan management is a preventative mechanism that records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines composed of a set of existing plans known to be efficient. The SQL plan baselines are then used to preserve performance of corresponding SQL statements, regardless of changes occurring to the system. See the Oracle Database Performance Tuning Guide for more information about using SQL Plan Management.
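
As a hedged illustration of capturing baselines before the upgrade (these are standard parameters and views, but the approach is a sketch to adapt to your environment):

SQL> -- enable automatic plan capture while running a representative workload
SQL> alter system set optimizer_capture_sql_plan_baselines=true sid='*' scope=both;
SQL> -- after the workload has run, disable capture and verify baselines were recorded
SQL> alter system set optimizer_capture_sql_plan_baselines=false sid='*' scope=both;
SQL> select count(*) from dba_sql_plan_baselines;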

Recoverability

The ultimate success of your upgrade depends greatly on the design and execution of an appropriate backup strategy. Even though the Database Home and Grid Infrastructure Home will be upgraded out of place, which makes rollback easier, the database and the filesystem should both be backed up before committing the upgrade. See the Oracle Database Backup and Recovery User's Guide for information on database backups. For database servers running Oracle Linux, a procedure for creating a snapshot based backup of the database server partitions is documented in the Oracle Database Machine Owner's Guide, "Recovering a Linux-Based Database Server Using the Most-Recent Backup"; however, existing custom backup procedures can also be used.
NOTE: In addition to having a backup of the database, it is recommended to create a Guaranteed Restore Point (GRP). As long as the database COMPATIBLE parameter is not changed after creating the GRP, the database can easily be flashed back after a (failed) upgrade. The Database Upgrade Assistant (DBUA) will also offer an option to create Guaranteed Restore Points or a database backup before proceeding with the upgrade. Flashing back to a Guaranteed Restore Point will back out all changes made in the database after the creation of the Guaranteed Restore Point. If transactions are made after this point, then alternative methods must be employed to restore these transactions. Refer to the section 'Performing a Flashback Database Operation' in the 'Database Backup and Recovery User's Guide' for more information on flashing back a database. After a flashback the database needs to be opened in the Oracle home from which the database was running before the upgrade.
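
A minimal sketch of creating and verifying a Guaranteed Restore Point (the restore point name before_upgrade is a hypothetical example):

SQL> create restore point before_upgrade guarantee flashback database;
SQL> select name, guarantee_flashback_database from v$restore_point;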

Account Access

During the upgrade procedure, access to the database SYS account and to the operating system root and oracle users is required. Depending on what other components are upgraded, access to ASMSNMP and DBSNMP is also required. Passwords in the password file are expected to be the same for all instances.

Review 12.1.0.2 Upgrade Prerequisites


The following prerequisites must be in place prior to performing the steps in this document to upgrade Database or Grid Infrastructure to 12.1.0.2 without failures.

Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific software requirements and recommendations.

Sun Datacenter InfiniBand Switch 36 is running software release 1.3.3-2 or later, 2.1.3-4 recommended

  • If you must update InfiniBand switch software to meet this requirement, then install the most recent release indicated in <Document 888828.1>.
    • Exadata releases later than 11.2.3.3.0 are expected to run InfiniBand Switch software 2.1.3-4
  • Does not apply to Exadata V1 Voltaire switches
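
As a sketch of verifying the current firmware release, log in to each InfiniBand switch as root and run the version command (the switch hostname is hypothetical):

[root@yourswitch ~]# version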

Grid Infrastructure Software

For 11.2.0.2

  • A fix for bug 12539000 is required. This patch is included in BP12 onwards. For BP7, BP8, BP9, BP10 and BP11, one-off patches are available to be applied on top of your bundle patch:
      • <patch 13374453> on top of BP7
      • <patch 13538371> on top of BP8 
      • <patch 12914750> on top of BP9
      • <patch 13404001> on top of BP10
      • <patch 13404018> on top of BP11
      • patch is included in BP12 and later

 

NOTE: This patch should also be applied to the database homes.

 

  • A fix for (unpublished) bug 14639430. This fix is required only in case the Grid Infrastructure needs to be downgraded from 12.1.0.2 to 11.2.0.2.
    • This fix needs to exist in the 11.2.0.2 Grid Infrastructure Home to which the system will be rolled back, before the downgrade starts.
    • The fix for 14639430 can be made available as a one-off for 11.2.0.2 installations. It is advised to request and apply the fix for your system before performing the upgrade.

For 11.2.0.3

  • A fix for (unpublished) bug 14639430. This fix is required only in case the Grid Infrastructure needs to be downgraded from 12.1.0.2 to 11.2.0.3.
    • This fix needs to exist in the 11.2.0.3 Grid Infrastructure Home to which the system will be rolled back, before the downgrade starts.
      • The fix is included in Exadata Database Bundle Patches from 11.2.0.3 BP20 onwards (via GI PSU 7). The recommended approach is to be on 11.2.0.3 BP20 or later before proceeding with the upgrade of the Grid Infrastructure.
      • For earlier releases a one-off will be made available.

Database Software

For 11.2.0.2

  • A fix for (unpublished) bug 13460353. This fix is only required for those who will be creating new 11.2.0.2 databases while running a 12.1.0.2 Grid infrastructure.
    • This fix needs to be applied on top of the 11.2.0.2 database home before creating a new database. 
    • The fix for 13460353 will be made available as a one-off for 11.2.0.2 installations. When available, the patch needs to be applied on top of the 11.2.0.2 Database Home.
For 11.2.0.3
  • A fix for (unpublished) bug 13460353. This fix is required only for those who will be creating new 11.2.0.3 databases while running a 12.1.0.2 Grid infrastructure.
    • This fix needs to be applied on top of the 11.2.0.3 database home before creating a new database. 
      • Fix is included in Exadata Database Bundle Patch 11.2.0.3 BP11 and higher. 
      • Check My Oracle Support for availability for earlier 11.2.0.3 releases.

Generic requirements

  • If you must update 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 databases or Grid Infrastructure software to meet the patching requirements then install the most recent release indicated in <document 888828.1>.
  • Apply all overlay and additional patches for the installed Bundle Patch when required. The list of required overlay and additional patches can be found in <Document 888828.1> and Exadata Critical Issues <Document 1270094.1>.
  • Verify that one-off patches currently installed on top of 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 are fixed in 12.1.0.2 by reviewing the list of fixes provided in the 12.1.0.2 README. An example of listing the currently installed one-off patches follows this list. 
    • If you are unable to determine if a one-off patch is still required on top of 12.1.0.2 then contact Oracle Support.
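
A minimal sketch of listing the patches currently installed in the existing database home (the home path matches the assumptions in this document):

(oracle)$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0/dbhome_1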

 

NOTE: Grid Infrastructure upgrades on nodes with different length hostnames in the same cluster require fix for bug 19453778 - CTSSD FAILED TO START WHILE RUNNING ROOTUPGRADE.SH. Contact Oracle Support to obtain the patch.

 

Exadata Storage Server software recommended and minimum release 

  • For full Exadata functionality Exadata Storage Server version 12.1.1.1.0 is required (12.1.1.1.1 recommended) for Oracle Databases Release 12.1.0.2. 
    • Those not able to upgrade to Exadata 12.1.1.1.1 require a minimum version of 11.2.3.3.1 or later on Exadata Storage Servers and Database Servers.
  • If your database servers currently run Oracle Linux 5.3 (kernel release 2.6.18-128), then in order to maintain the recommended practice that the OFED software release on database servers and Exadata Storage Servers is the same, your database servers must first be updated to run Oracle Linux 5.5 or later (kernel release 2.6.18-194 or later). Follow the steps in <Document 1284070.1> to perform this update.  

Do not place the new ORACLE_HOME under /opt/oracle.

  • If this is done then see <Document 1281913.1> for additional steps required after software is installed.

Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:

    • The standby database is running in real-time apply mode, as determined by querying v$archive_dest_status and verifying recovery_mode='MANAGED REAL TIME APPLY' for the local archive destination on the standby database. If there is a delay or real-time apply is not enabled, then see the Data Guard Concepts and Administration guide on how to configure these settings and remove the delay. An example query follows.
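
A minimal sketch of the check, to be run on the standby database (the local archive destination id may differ per configuration):

SQL> select dest_id, recovery_mode from v$archive_dest_status where type = 'LOCAL';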

Download Required Files

Based on the requirements determined earlier, download the following software into a staging area of your choice on one of the database servers in your cluster. As an example this document uses /u01/app/oracle/patchdepot, but you can specify your own.
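
A minimal sketch of creating the staging area used in the examples:

(oracle)$ mkdir -p /u01/app/oracle/patchdepot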

Data Guard - If there is a standby database then stage the files on one of the database servers from standby site also.
Files to be staged on first database server only:
  • Oracle Database 12c, Release 1 (12.1.0.2) <patch 17694377>:
    • "Oracle Database", typically the installation media for the database comes with two zip files.
    • "Oracle Grid Infrastructure" comes with two zip files.
  • For 11.2.0.2 (also see patch matrix later in this document)
    • Fix for 'Synchronization problem in the IPC state' unpublished bug 12539000 
    • Fix for (unpublished) bug 14639430. This fix is recommended and enables downgrades from 12.1.0.2 to 11.2.0.2 when required.
    • Fix for (unpublished) bug 13460353. This fix is optional and only required for those who will be creating new 11.2.0.2 databases while running a 12.1.0.2 Grid infrastructure
  • For 11.2.0.3 (also see patch matrix later in this document)
    • Fix for (unpublished) bug 14639430. This fix is recommended and enables downgrades from 12.1.0.2 to 11.2.0.3 when required.
    • Fix for (unpublished) bug 13460353. This fix is optional and only required for those who will be creating new 11.2.0.3 databases (BP10 or earlier) while running a 12.1.0.2 Grid infrastructure
  • Exadata Storage Server Software 12.1.1.1.1 
  • When available use latest GI PSU 12.1.0.2 patch 
    • To be applied on the 12.1.0.2 Grid Infrastructure home before running rootupgrade.sh, or
    • To be applied on the new database home before upgrading the database.
  • <Patch 6880880> - OPatch latest update for 11.2 and 12.1
    • Obtain the most recent OPatch 11.2 version for 11.2 Oracle Homes
    • Obtain the most recent OPatch 12.1 version for 12.1 Oracle Homes

Patch matrix for 12539000 - Required Patches when upgrading from 11.2.0.2 

Release**             Linux
11.2.0.2 BP7          <patch 13374453>
11.2.0.2 BP8          <patch 13538371>
11.2.0.2 BP9          <patch 12914750>
11.2.0.2 BP10         <patch 13404001>
11.2.0.2 BP11         <patch 13404018>
11.2.0.2 BP12         included
11.2.0.2 BP13         included
11.2.0.2 BP14         included
11.2.0.2 BP15         included


**Installations not on one of the mentioned Bundle Patch releases in the list are recommended to either upgrade to a listed Bundle Patch release or request a fix for their release.
 

Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 12.1.0.2 to 11.2.0.2 

11.2.0.2 release*                Linux patch on top of BP     Via
11.2.0.2 BP16, BP17, BP18, BP19  Request via Oracle Support   PSE 16985348 on top of BLR 16971708 for GIPSU 6
11.2.0.2 BP20, BP21, BP22        Request via Oracle Support   PSE 16984061 on top of BLR 16857104 for GIPSU 10

* fixes required for bundle patches not listed need to be filed separately

Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 12.1.0.2 to 11.2.0.3 

11.2.0.3 release*          Linux patch on top of BP                              Via
11.2.0.3 BP8, BP9, BP10    <patch 14639430> - p14639430_112033_Linux-x86-64.zip  PSE 16973196 on top of BLR 16973093 for GIPSU 3
11.2.0.3 BP11, BP12, BP13  <patch 14639430> - p14639430_112034_Linux-x86-64.zip  PSE 16985475 on top of BLR 16984137 for GIPSU 4
11.2.0.3 BP14, BP15, BP16  <patch 14639430> - p14639430_112035_Linux-x86-64.zip  PSE 16985466 on top of BLR 16984125 for GIPSU 5
11.2.0.3 BP17, BP18, BP19  <patch 14639430> - p14639430_112036_Linux-x86-64.zip  PSE 16984077 on top of BLR 16836125 for GIPSU 6
11.2.0.3 BP20 onwards      Included in BP                                        Included via GIPSU 7
* fixes required for bundle patches not listed need to be filed separately

Patch matrix for 13460353 - Optional patch for those who will be creating 11.2.0.2 databases while running a 12.1.0.2 Grid infrastructure

11.2.0.2 release*                Linux patch on top of BP     Via
11.2.0.2 BP16, BP17, BP18, BP19  <patch 13460353>             PSE 14691763 on top of BLR 14618174 for GIPSU 6
11.2.0.2 BP20, BP21              Request via Oracle Support   PSE 16984340 on top of BLR 16984295 for GIPSU 10

* fixes required for bundle patches not listed need to be filed separately

Patch matrix for 13460353 - Optional patch for those who will be creating 11.2.0.3 databases while running a 12.1.0.2 Grid infrastructure

11.2.0.3 release*      Linux patch on top of BP   Via
11.2.0.3 BP11 onwards  Included in BP             Included via GIPSU 4
* fixes required for bundle patches not listed need to be filed separately

Apply patches / updates where required before upgrading proceeds

Review <Document 1908556.1> for Oracle Solaris-specific software requirements and recommendations.

Update OPatch in existing 11.2 and 12.1 Grid Home and Database Homes on All Database Servers 

If the latest OPatch release is not in place and (bundle) patches need to be applied to existing 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 Grid Infrastructure and Database homes before upgrading, then first update OPatch to the latest version for that database release. Execute the following command from one database server to distribute OPatch to a staging area on all database servers, then unzip it into the Oracle Homes.


(oracle)$ dcli -l oracle -g ~/dbs_group -f p6880880_112000_Linux-x86-64.zip -d /u01/app/oracle/patchdepot
Note: OPatch for 12.1 can also be distributed, but not yet copied into the 12.1.0.2 Oracle homes at this stage
Data Guard - If there is a standby database, then run these commands on the standby database servers also, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/11.2.0/grid \
          /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip

(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/oracle/product/11.2.0/dbhome_1 \
          /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip

For Exadata Storage Servers and Database Servers on releases earlier than Exadata 11.2.3.3.1

  • For full Exadata functionality (including Smart Scan offloaded filtering, storage indexes, and I/O Resource Management (IORM)), Exadata Storage Server version 12.1.1.1.0 or 12.1.1.1.1 is required for running Oracle Database Release 12.1.0.2 (12.1.1.1.1 is recommended). 
  • Those not able to upgrade to Exadata 12.1.1.1.1 require a minimum version of 11.2.3.3.1 on Exadata Storage Servers and Database Servers. 

For 11.2.0.2 Grid Infrastructure and Database:

  • Apply Patch for bug 12539000 ('Synchronization problem in the IPC state' ) to both Grid and Database home
  • If applicable, apply patch for (unpublished) bug 14639430. This patch needs to be applied on the source (11.2.0.2 Grid Infrastructure)
  • If applicable, apply patch for (unpublished) bug 13460353 - Only for those who will be creating 11.2.0.2 databases while running a 12.1.0.2 Grid infrastructure. This patch needs to be applied on the 11.2.0.2 database home

For 11.2.0.3 Grid Infrastructure and Database:

  • If applicable, apply Exadata Bundle Patch or patch for 11.2.0.3 including fix for (unpublished) bug 14639430. This patch needs to be applied on the source (11.2.0.3 Grid Infrastructure)
  • If applicable, apply Exadata Bundle Patch or patch for 11.2.0.3 including fix for (unpublished) bug 13460353 - Only for those who will be creating 11.2.0.3 databases while running a 12.1.0.2 Grid Infrastructure. This patch needs to be applied on the 11.2.0.3 database home

 

NOTE: For 11.2.0.2 (all bundle patches) and 11.2.0.3 bundle patches BP11 and earlier, a workaround is available for creating new 11.2.0.2 and 11.2.0.3 databases while running 12.1.0.2 Grid Infrastructure. This workaround eliminates the need to apply the fix for (unpublished) bug 13460353. The following commands need to be executed as root from the 12.1.0.2 Grid Infrastructure home before creating 11.2.0.2 or 11.2.0.3 (on releases earlier than BP11) databases:

(root)# crsctl modify type ora.database.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=3.2" -unsupported
(root)# crsctl modify type ora.service.type -attr  "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=2.2" -unsupported
  
  • Data Guard: If there is a standby database, then run these commands on the standby database servers also.
  • Follow the patch README's for patching instructions.

Run Exachk or HealthCheck

For Exadata Database Machines V2 or later: run the latest release of Exachk to validate software, hardware, firmware, and configuration best practices. Resolve any issues identified by Exachk before proceeding with the upgrade.  Since Exachk is not certified on V1 (HP hardware), HealthCheck needs to be used to collect data regarding key software, hardware, and firmware releases. Review <Document 1070954.1> for details.

 

NOTE: It is recommended to run Exachk before and after the upgrade. When doing this, Exachk may find recommendations for the compatible settings for database, ASM, and diskgroups. At some point it is recommended to change compatible settings, but a conservative approach is advised, because raising compatible settings can make a later downgrade/rollback impossible. It is therefore recommended to revisit compatible parameters some time after the upgrade has finished, when there is no chance of a downgrade and the system has been running stably for a longer period.

Validate Readiness for Oracle Clusterware upgrade using CVU and Exachk

Use the cluster verification utility (CVU) to validate readiness for the Oracle Clusterware upgrade. Review the Oracle Grid Infrastructure Installation Guide, 'How to Upgrade to Oracle Grid Infrastructure 12c Release 1', section 'Using CVU to Validate Readiness for Oracle Clusterware Upgrades'. Unzip the Clusterware installation zip file to the staging area. Before executing CVU as the owner of the Grid Infrastructure, unset ORACLE_HOME, ORACLE_BASE and ORACLE_SID.

An example of running the pre-upgrade check follows:

(oracle)$   ./runcluvfy.sh stage -pre crsinst -upgrade  \
         -rolling -src_crshome  /u01/app/11.2.0.3/grid  \
                   -dest_crshome /u01/app/12.1.0.2/grid \
                -dest_version 12.1.0.2.0 -fixup -verbose
Note:
  • OS Kernel parameter checks may fail (unpublished bug 16777952) with errors like CRS-10051 and/or PRVG-1201. The same message may also be shown by OUI during Grid Infrastructure installation
    • If the checks fail, a possible cause can be the lack of read permission on /etc/sysctl.conf for others. This can be solved by running 'chmod o+r /etc/sysctl.conf' as root on all compute nodes. Remember to undo this change when the upgrade is finished.
  • Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific guidelines for GI software installation prerequisite check failure.
  • When hugepages are configured on the system, verify the value for memlock (in /etc/security/limits.conf) is set to at least 90% of physical memory. See the Oracle Database Installation Guide 12c Release 1 (12.1) for Linux for more details. A quick check is shown after this list.
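
A minimal sketch of checking the memlock settings on all database servers:

(root)# dcli -g ~/dbs_group -l root grep memlock /etc/security/limits.conf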

Early stage pre-upgrade check: analyze your databases to be upgraded with the Pre-Upgrade Information Tool

At this stage it is recommended to do a first run of the pre-upgrade information tool, so there is time to anticipate possible required steps before proceeding with the upgrade. The pre-upgrade tool is provided with the 12.1.0.2 software, but since that is not installed at this point, the tool can also be downloaded via <Document 884522.1> - How to Download and Run Oracle's Database Pre-Upgrade Utility. Run this tool to analyze the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 databases prior to the upgrade.

During the pre-upgrade steps, the pre-upgrade tool (preupgrd.sql) will warn you to set the CLUSTER_DATABASE parameter to FALSE. However, when using DBUA this is done automatically, so the warning can be ignored.

Data Guard - If there is a standby database, run the command on one of the nodes of the standby database cluster also.
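
A minimal sketch of running the downloaded tool against a database (assuming preupgrd.sql and its companion utluppkg.sql from <Document 884522.1> were staged in the patch depot; the path is an example):

(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ sqlplus / as sysdba
SQL> @preupgrd.sql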

 

Install and Upgrade Grid Infrastructure to 12.1.0.2

The instructions in this section perform the Grid Infrastructure software installation and upgrade to 12.1.0.2. The Grid Infrastructure upgrade is performed in a RAC rolling fashion; this procedure does not require downtime.

Data Guard - If there is a standby database, then run these commands on the standby system separately to upgrade the standby system Grid Infrastructure. The standby Grid Infrastructure upgrade can be performed in parallel with the primary if desired. However, the Grid Infrastructure home always needs to be at the same or a later level than the Database home. Therefore, upgrading the Grid Infrastructure home needs to be done before a database upgrade can be performed.


Here are the steps performed in this section.

  • Install MGMTDB or not
  • Validate hugepage configuration for new ASM SGA requirements
  • Create a snapshot based backup of the database server partitions
  • Create the new GI_HOME directory where 12.1.0.2 will be installed
  • Prepare installation software
  • Perform 12.1.0.2 Grid Infrastructure software installation and upgrade using OUI
  • Apply latest Bundle Patch contents to Grid Infrastructure Home using 'opatch napply' (when available)
  • Change Custom Scripts and environment variables to Reference the 12.1.0.2 Grid Home

Install MGMTDB or not

Upgrades to 12.1.0.2 will by default see a 'management database' (MGMTDB) added to the Grid Infrastructure installation.

MGMTDB is a container database with one pluggable database in it, running out of the Grid Infrastructure home. The MGMTDB is configured to run like a RAC One Node database, which means only one instance will be started on one of the nodes in the cluster. When the Grid Infrastructure of the node running MGMTDB is stopped or the node goes offline, the MGMTDB instance is started on one of the remaining nodes. When installed, MGMTDB is configured to use 750 MB of SGA and 325 MB of PGA. The SGA will not use hugepages, because its init.ora setting 'use_large_pages' is set to false. The chosen database name and SID will never collide with any database an operator would create, because it has the special name '-MGMTDB'. Datafiles for this database are stored in the same diskgroup as OCR and VOTE, but may be relocated into another diskgroup post install. The initial space requirement for the datafiles is ~5 GB. MGMTDB is caged by default to 4 CPUs, but this can also be changed post upgrade.

MGMTDB stores a subset of Operating System (OS) performance data for the longer term, to provide diagnostic information and support intelligent workload management. Performance data (OS metrics similar to, but a subset of, ExaWatcher) collected by the Cluster Health Monitor (CHM) is also stored on local disk, so when not using MGMTDB, CHM data can still be obtained from local disk, but intelligent workload management (QoS) will be disabled. MGMTDB is designed as a non-critical component of the Grid Infrastructure: if MGMTDB fails or becomes unavailable, Grid Infrastructure and its critical components remain running. Longer term, MGMTDB will become a key component of the Grid Infrastructure and provide services for important components; because of this, MGMTDB will eventually become a mandatory component in future upgrades to releases on Exadata. 

In order for MGMTDB to run in combination with existing ASM and database instances, operators need to review memory allocations to make sure the upgrade will be successful. Critical Exadata systems with strict memory, CPU or disk allocations that do not have MGMTDB installed already and do not want to change allocations of these resources as part of the upgrade to 12.1.0.2 are allowed, and recommended, to skip installation and configuration of MGMTDB. Steps on how to accomplish this are explained later in this document. Not installing MGMTDB is only allowed for upgrades to 12.1.0.2. New 12.1.0.2 installations or upgrades to future releases (later than 12.1.0.2) will require MGMTDB to be installed. See <Document 1568402.1> for more details.
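
After the upgrade, whether MGMTDB was configured can be verified with srvctl from the new Grid home, shown here as a sketch:

(oracle)$ /u01/app/12.1.0.2/grid/bin/srvctl status mgmtdb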

Validate hugepage configuration for new ASM SGA requirements

As part of the Grid Infrastructure upgrade, the ASM SGA will be increased to a value of 2G. The new setting requires additional hugepages from the operating system. Make sure at least 1300 hugepages are configured, so that ASM can start with the new value during the upgrade process. If fewer than 1300 hugepages are configured, the upgrade will fail. The extra hugepages are in addition to the number of hugepages required for the existing databases to run. If not enough hugepages are configured to hold both ASM and the databases (for databases configured to use hugepages only), the rootupgrade.sh script may not finish successfully. See <Document 361468.1> and <Document 401749.1> for more details on hugepages.

NOTE: when choosing to install and configure MGMTDB (which is the OUI default), be aware of the extra hugepages required. See 'Install MGMTDB or not'

   

NOTE: Existing 11.2 ASM instances report the number of hugepages allocated in the alert.log. Subtract this value from 1300 to find out how many additional hugepages need to be added to the existing operating system configuration. A quick check of the current configuration is shown below. 
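
A minimal sketch of checking the current hugepage configuration on all database servers:

(root)# dcli -g ~/dbs_group -l root grep -i hugepages /proc/meminfo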

Create a snapshot based backup of the database server partitions

Even though the Grid Infrastructure is being upgraded out of place, it is recommended to create a filesystem backup of the database server before proceeding. For database servers running Oracle Linux, steps for creating a snapshot based backup of the database server partitions are documented in the Oracle Database Machine Owner's Guide, "Recovering a Linux-Based Database Server Using the Most-Recent Backup". Existing custom backup procedures can also be used as an alternative.

Create the new Grid Infrastructure (GI_HOME) directory where 12.1.0.2 will be installed

In this document the new Grid Infrastructure home /u01/app/12.1.0.2/grid is used in all examples. It is recommended that the new Grid Infrastructure home NOT be located under /opt/oracle. If it is, then review <Document 1281913.1>. To create the new Grid Infrastructure home, run the following commands from the first database server. Substitute your Grid Infrastructure owner username and Oracle inventory group name in place of oracle and oinstall, respectively.
(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/12.1.0.2/grid/
(root)# dcli -g ~/dbs_group -l root chown oracle /u01/app/12.1.0.2/grid
(root)# dcli -g ~/dbs_group -l root chgrp -R oinstall /u01/app/12.1.0.2/grid

Prepare installation software

Unzip all 12.1.0.2 software. Run the following command on the database server where the software is staged. An example for the Grid Infrastructure follows, but the same needs to be done for the database software.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_grid_1of2.zip \
          -d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_grid_2of2.zip \
          -d /u01/app/oracle/patchdepot

 

Change css misscount setting back to default before upgrading

Before proceeding with the upgrade, the css misscount setting should be set back to the default (of 30 seconds). The following command needs to be executed as oracle from the 11.2 Grid Infrastructure home: 

(oracle)$ crsctl unset css misscount  
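
The current value can be verified before and after unsetting it, for example (run from the 11.2 Grid Infrastructure home):

(oracle)$ crsctl get css misscount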

Perform the 12.1.0.2 Grid Infrastructure software installation and upgrade using OUI

Perform these instructions as the Grid Infrastructure software owner (which is oracle in this document) to install the 12.1.0.2 Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion. The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 12.1.0.2 Grid Infrastructure Home the active Grid Infrastructure Home. For systems with a standby database in place, this step can be performed before, at the same time as, or after installation of Grid Infrastructure on the primary system.


To downgrade Oracle Clusterware back to the previous release, see "Downgrading Oracle Clusterware After an Upgrade" in the Oracle Grid Infrastructure Installation Guide.

The OUI installation log is located at /u01/app/oraInventory/logs.
For OUI installations or execution of critical scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.
 
Set the environment, then run the installer as follows (for the runInstaller command below, only add the extra flag '-J-Doracle.install.mgmtDB=false' when choosing not to install MGMTDB):

(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export SRVM_USE_RACTRANS=true # note for csh run: setenv SRVM_USE_RACTRANS true
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/grid
(oracle)$ ./runInstaller -J-Doracle.install.mgmtDB=false
Starting Oracle Universal Installer...

  
NOTE: In the above steps "SRVM_USE_RACTRANS=true" was set because of the following Alert: On Linux, When Installing or Upgrading GI to 12.1.0.2, the Root Script on Remote Nodes May Fail with "error while loading shared libraries" Messages <document 2032832.1>
 
Perform the exact steps as described below on the installer screens:
      1. On the "Select Installation Options" screen, select "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management", and then click Next.
      2. On the "Select Product Languages" screen, select the desired languages, and then click Next.
      3. On the "Grid Infrastructure Node Selection" screen, verify all database nodes are shown and selected, and then click Next.
      4. On the "Specify Management Options" screen, specify Enterprise Manager details when choosing Cloud Control registration.
      5. On the "Privileged Operating System Groups" screen, verify group names and change if desired, and then click Next.
      6. On the "Specify Installation Location" screen, choose "Oracle Base" and change the software location. Recommended software location: /u01/app/12.1.0.2/grid
      7. On the "Root script execution" screen, do not check the box; keep root script execution under your own control.
      8. On the "Prerequisite Checks" screen, resolve any failed checks or warnings before continuing.
        1. Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific guidelines for GI software installation prerequisite check failure.
      9. On the "Summary" screen, verify the plan and click 'Install' to start the installation (it is recommended to save a response file for future use).

Before executing the last steps of the installation process, two additional steps are required:

  1. Update OPatch and when available apply the latest Bundle Patch on top of the 12.1.0.2 Grid Infrastructure Installation
  2. When required relink the 12.1.0.2 Grid Infrastructure oracle binary with RDS
If necessary: Install Latest OPatch 12.1
Now the 12.1.0.2 Grid Home directories are available. If recommended patches or bundles will be installed in the 12.1.0.2 Grid Home in the next step, then first update OPatch to the latest 12.1 version:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/12.1.0.2/grid \
           /u01/app/oracle/patchdepot/p6880880_121010_Linux-x86-64.zip
 

If available: Apply recommended patches to the Grid Infrastructure before running rootupgrade.sh

Review <Document 888828.1> to identify and apply patches that must be installed on top of the Grid Infrastructure home just installed. 

For this step automated patching ('opatch auto') cannot be used; the individual patches contained in the main patch zip file need to be applied individually using napply.

For instructions: see  the section 'Patching a Software Only GI Home Installation or Before the GI Home Is Configured'  in <Document 1591616.1> (Supplemental Readme - Patch Installation and Deinstallation For 12.1.0.1.x GI PSU)

For example, applying the individual patches of 19404326 should be done as follows on each node in the cluster (note: either 'opatch apply' or 'opatch napply' can be used):

(oracle)$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /u01/19404326/19189240/
(oracle)$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /u01/19404326/19392590/
(oracle)$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /u01/19404326/19392604/

If available: Apply 12.1.0.2 Bundle Patch Overlay Patches to the Grid Infrastructure Home as Specified in Document 888828.1

Review <Document 888828.1> to identify and apply patches that must be installed on top of the Bundle Patch just installed.

Apply Customer-specific 12.1.0.2 One-Off Patches to the Grid Infrastructure Home

If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now.

When required relink oracle binary with RDS

Verify the oracle binary is linked with the rds option (this is the default starting with 11.2.0.4, but may not be effective on your system). The following command should return 'rds':

(oracle)$ dcli -l oracle -g ~/dbs_group /u01/app/12.1.0.2/grid/bin/skgxpinfo

If the command does not return 'rds' relink as follows:

For Linux: as owner of the Grid Infrastructure Home on all nodes execute the steps as follows before running rootupgrade.sh:

(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/12.1.0.2/grid \
make -C /u01/app/12.1.0.2/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle

Change SGA memory settings for ASM

As SYSASM, adjust sga_max_size and sga_target to a (minimum) value of 2G if not already done. The values will become active on the next restart of the ASM instances.

SYS@+ASM1> alter system set sga_max_size = 2G scope=spfile sid='*';
SYS@+ASM1> alter system set sga_target = 2G scope=spfile sid='*';

Verify values for memory_target, memory_max_target and use_large_pages

Values should be as follows:

SYS@+ASM1> col sid format a5
SYS@+ASM1> col name format a30
SYS@+ASM1> col value format a30
SYS@+ASM1> set linesize 200

SYS@+ASM1> select sid, name, value from v$spparameter
where name in
('memory_target','memory_max_target','use_large_pages');

SID    NAME                           VALUE
------ ------------------------------ ------------------------------
*      use_large_pages                TRUE
*      memory_target                  0
*      memory_max_target

When the values are not as expected, change them as follows:

SYS@+ASM1> alter system set memory_target=0 sid='*' scope=spfile;
SYS@+ASM1> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;
SYS@+ASM1> alter system reset memory_max_target sid='*' scope=spfile;
SYS@+ASM1> alter system set use_large_pages=true sid='*' scope=spfile /* 11.2.0.2 and later (Linux only) */;

 

NOTE: Increasing the SGA size will cause more hugepages to be used by ASM on the next instance startup. At this point it is assumed at least 1300 hugepages are configured for ASM to start properly during the upgrade process. Hugepages required for databases remain the same and need to be added to the value of 1300. Note that additional hugepages are also required when choosing to install MGMTDB.

Actions to take before executing rootupgrade.sh on each database server

1. Verify no active rebalance is running

Query gv$asm_operation to verify no active rebalance is running. A rebalance is running when the result of the following query is not equal to zero:


SYS@+ASM1> select count(*) from gv$asm_operation;

COUNT(*)
----------
0

2. Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific workaround required before running rootupgrade.sh

3. Nodes with different length hostnames in the same cluster require fix for bug 19453778 - CTSSD FAILED TO START WHILE RUNNING ROOTUPGRADE.SH. Contact Oracle Support to obtain the patch and place the files before running rootupgrade.sh

4. Only for installations having an IB listener (see the note box below on how to tell whether you have an IB listener) configured on port 1521 on the IB network:

When, at the start of the 'Grid Infrastructure Upgrade' process, the decision was made to install MGMTDB and MGMTDB was not installed previously, then temporarily change your current IB listener to a (free) port other than the default of 1521. This workaround for (unpublished) bug 19764388 allows the installation of MGMTDB (and its allocation of port 1521) to succeed for installations already running a listener on port 1521 of the InfiniBand (IB) network. After running rootupgrade, change the MGMTDB listener to any other (free) port and restore the original IB listener port configuration. Example as follows:

Before running rootupgrade: temporarily change the default IB listener port: 

(oracle)$ srvctl modify listener -listener youriblistener -endpoints "TCP:1523"

 

NOTE: An IB listener is defined if an additional network on the IB interface is defined in the Grid Infrastructure. This can be detected by running the command 'srvctl config network'. That same network configuration would then be used to run an additional VIP, which can be detected by running the command 'crsctl stat res'. The VIP would then be used to run an additional listener, which can be detected by running the command 'srvctl config listener'. If such a listener is configured, it is an IB listener. For more information see <Document 1580584.1>

 

            

NOTE: if the above command fails, run instead: srvctl modify listener -l youriblistener -p TCP:1523

   

After running rootupgrade on all nodes, change the MGMTDB listener to use port 1522 instead of 1521:

(oracle)$ srvctl modify mgmtlsnr -endpoints "TCP:1522"

After running rootupgrade on all nodes, reconfigure the IB listener port to the original value:

(oracle)$ srvctl modify listener -listener youriblistener -endpoints "TCP:1521"

Execute rootupgrade.sh on each database server

Execute rootupgrade.sh on each database server. As indicated on the "Execute Configuration scripts" screen, the script must be executed on the local node first. The rootupgrade script shuts down the earlier release Grid Infrastructure installation, updates configuration details, and starts the new Grid Infrastructure installation. 

When rootupgrade fails, it is recommended to check the following output first to get more details:

  • output of rootupgrade script itself
  • ASM alert.log
  • /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/rootcrs_<node_name>.log

 

After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on the other nodes, except for the last node. When the script has completed successfully on all nodes except the last node, run the script on the last node. Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
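
A sketch of the invocation, run as root on each node in the order described above:

(root)# /u01/app/12.1.0.2/grid/rootupgrade.sh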

 

  • On the first node, rootupgrade.sh will complete with output similar to this example.
  • On the last node, rootupgrade.sh will complete with output similar to this example.

 

Continue with 12.1.0.2 GI installation in OUI

10. On the "Execute configuration scripts" screen, press "OK" when done.

11. On the "Finish" screen, click "Close".

Verify cluster status

Perform an extra check on the status of the Grid Infrastructure post upgrade by executing the following command from one of the compute nodes:
(root)# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all


The above command should show an online status for Cluster Ready Services, Cluster Synchronization Services and Event Manager on all nodes in the cluster. Example output follows:

**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

When any of these components does not show an online status on any of the nodes, the issue needs to be researched before continuing. For troubleshooting, see the MOS notes in the reference section of this note.

Change Custom Scripts and Environment Variables to Reference the 12.1.0.2 Grid Home

Customized administration scripts, login scripts, static instance registrations in listener.ora files, and CRS resources that reference the previous Grid Infrastructure home should be updated to refer to the new Grid Infrastructure home '/u01/app/12.1.0.2/grid'.
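
A quick way to find stale references is to grep for the previous home path (old path shown is an example; substitute your own):

(oracle)$ grep -r '/u01/app/11.2.0/grid' ~/.bash_profile ~/.bashrc /etc/oratab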

For DBFS configurations it is recommended to review the chapter "Steps to Perform If Grid Home or Database Home Changes" in <Document 1054431.1> - "Configuring DBFS on Oracle Database Machine", as the shell script used to mount the DBFS filesystem may be located in the original Grid Infrastructure home and needs to be relocated. The following steps are performed to update the location of the CRS resource script to mount DBFS:

Modify the dbfs_mount cluster resource

Update the mount-dbfs.sh script and the ACTION_SCRIPT attribute of the dbfs_mount cluster resource to refer to the new location of mount-dbfs.sh. See 'Post-Upgrade Steps'.

 

Install Database 12.1.0.2 Software

The steps in this section install the 12.1.0.2 Database software into a new directory. This does not affect currently running databases, hence all the steps below can be done without downtime.

Data Guard - If there is a separate system running a standby database and that system already has Grid Infrastructure upgraded to 12.1.0.2, then run these steps on the standby system separately to install the Database 12.1.0.2 software.  The steps in this section can be performed in any of the following ways:
  • Install Database 12.1.0.2 software on the primary system first then the standby system.
  • Install Database 12.1.0.2 software on the standby system first then the primary system.
  • Install Database 12.1.0.2 software on both the primary and standby systems simultaneously.

Here are the steps performed in this section.

  • Prepare Installation Software
  • Perform 12.1.0.2 Database Software Installation with OUI
  • Update OPatch in the new Grid Infrastructure home and New Database Home on All Database Servers
  • When available: Install Latest 12.1.0.2 Bundle Patch available for your operating system - Do Not Perform Post-Installation Steps
  • When available: Apply 12.1.0.2 Bundle Patch Overlay Patches as Specified in Document 888828.1
  • When available: Apply Customer-specific 12.1.0.2 One-Off Patches

Prepare Installation Software

Unzip the 12.1.0.2 database software.  Run the following command on the database server where the software is staged.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_database_1of2.zip -d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_database_2of2.zip -d /u01/app/oracle/patchdepot

Create the new Oracle DB Home directory on all database server nodes

(oracle)$ dcli -l oracle -g ~/dbs_group mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1  

Perform 12.1.0.2 Database Software Installation with the Oracle Universal Installer (OUI)

The OUI installation log is located at /u01/app/oraInventory/logs.

Set the environment then run the installer, as follows:
  
Note: For OUI installations or execution of important scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.
 

(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/database
(oracle)$ ./runInstaller
Perform the exact actions as described below on the installer screens:
  1. On "Configure Security Updates" screen, fill in required fields, and then click Next.
  2. On "Select Installation Option" screen, select 'Install database software only', and then click Next.
  3. On "Grid Installation Option", select "Oracle Real Application Clusters database installation" and click Next
  4. On "Node Selection" screen, verify all database servers in your cluster are present in the list and are selected, and then click Next.
  5. On "Select Product Languages" screen, select 'Languages', and then click Next.
  6. On "Select Database Edition", select 'Enterprise Edition', click Select Options to choose components to install, and then click Next.
  7. On "Installation Location", enter /u01/app/oracle as Oracle base and /u01/app/oracle/product/12.1.0.2/dbhome_1 as the Software Location for the Database home, and then click Next.
  8. On "Operating System Groups" screen, verify group names, and then click Next.
  9. On "Prerequisite Checks" screen, verify there are no failed checks or warnings
    1. Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific guidelines for GI software installation prerequisite check failure.
  10. On "Summary" screen, verify information presented about installation, and then click Install.
  11. On "Execute Configuration scripts screen, execute root.sh on each database server as instructed, and then click OK
  12. On "Finish screen", click Close.

When required relink Oracle Executable in Database Home with RDS

Verify the oracle binary is linked with the rds option (this is the default starting with 11.2.0.4 but may not be effective on your system). The following command should return 'rds':

(oracle)$ dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/skgxpinfo

If the command does not return rds, relink as follows:

(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1 \
          make -C /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle

If necessary: Install Latest OPatch 12.1 in the Database Home on All Database Servers

If recommended patches or bundles will be installed in the 12.1.0.2 Database Home in the next step, then first update OPatch to the latest 12.1 version:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq \
          -d /u01/app/oracle/product/12.1.0.2/dbhome_1 \
          /u01/app/oracle/patchdepot/p6880880_121010_Linux-x86-64.zip

When available: Install the latest 12.1.0.2 GI PSU (which includes the DB PSU) to the Database Home when available - Do Not Perform Post-Installation Steps

At the time of writing this note, 12.1.0.2 PSUs were not available. Review <Document 888828.1> for the latest release information and most recent patches, and apply them when available. Applying the latest PSU always requires the latest OPatch to be installed. Below is an example of the high-level steps; always consult the specific patch README for current instructions.

Stage the patch

Unzip the patch on all database servers, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/oracle/patchdepot \
          /u01/app/oracle/patchdepot/p17694377_121020_Linux-x86-64_XofY.zip 

Create OCM response file if required

If you do not have the OCM response file, then run the following command on each database server.
(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/ocm/bin/emocmrsp

Patch 12.1.0.2 database home

Run the following command as the root user, on the local node only. Note there are no databases running out of this home yet. It is recommended to run this command from a new session to make sure no settings from previous steps remain. Example as follows:

(root)# export PATH=$PATH:/u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch
(root)# opatchauto apply <PATH_TO_PATCH_DIRECTORY> -oh <Comma separated Oracle home paths> -ocmrf <ocm response file>
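
A filled-in example, assuming the staged patch unzipped into a directory named after its patch number and that emocmrsp (run above from the patchdepot directory) created ocm.rsp there; both names are assumptions:

(root)# opatchauto apply /u01/app/oracle/patchdepot/17694377 \
        -oh /u01/app/oracle/product/12.1.0.2/dbhome_1 \
        -ocmrf /u01/app/oracle/patchdepot/ocm.rsp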

Skip patch post-installation steps

Do not perform patch post-installation.  Patch post-installation steps will be run after the database is upgraded.

When available: Apply 12.1.0.2 Patch Overlay Patches to the Database Home as Specified in Document 888828.1

Review <Document 888828.1> to identify and apply patches that must be installed on top of the new Grid Infrastructure with the current Bundle Patch. If there are SQL commands that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.

Apply Customer-specific 12.1.0.2 One-Off Patches

If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now.  If there are SQL statements that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.


 

Upgrade Database to 12.1.0.2

The commands in this section will perform the database upgrade to 12.1.0.2.

For Data Guard configurations, unless otherwise indicated, run these steps only on the primary database.

Here are the steps performed in this section.

    • Backing up the database and creating a Guaranteed Restore Point
    • Analyze the Database to Upgrade with the Pre-Upgrade Information Tool (if not done earlier)
    • Data Guard only - Synchronize Standby and Switch to 12.1.0.2
    • Data Guard only - Disable Fast-Start Failover and Data Guard Broker
    • Before starting the Database Upgrade Assistant (DBUA) stop and disable all services with PRECONNECT as option for 'TAF Policy specification'
    • Upgrade the Database with Database Upgrade Assistant
    • Review and perform steps in Oracle Upgrade Guide, 'Post-Upgrade Tasks for Oracle Database'
    • Change Custom Scripts and Environment Variables to Reference the 12.1.0.2 Database Home
    • Add Underscore Initialization Parameters back that were removed earlier
    • When available: run 12.1.0.2 Bundle Patch Post-Installation Steps
    • Data Guard only - Enable Fast-Start Failover and Data Guard Broker
    • Delete Guaranteed Restore Point created before the upgrade

The database will be inaccessible to users and applications during the upgrade (DBUA) steps. A coarse estimate of actual application downtime is 30-90 minutes, but the required downtime may depend on factors such as the amount of PL/SQL that needs recompilation. Note that it is not a requirement that all databases are upgraded to the latest release. It is possible to have multiple releases of Oracle Database Homes running on the same system. The benefit of having multiple Oracle Homes is that multiple releases of different databases can run. The disadvantage is that more planned maintenance is required in terms of patching. Older database releases may lapse out of the regular patching lifecycle policy in time. Having multiple Oracle homes on the same node also requires more disk space.

Backing up the database and creating a Guaranteed Restore Point

If not done already, before proceeding with the upgrade a full backup of the database should be made. In addition to this full backup it is recommended to create a Guaranteed Restore Point (GRP). As long as the database COMPATIBLE parameter is not changed after creating the GRP, the database can be flashed back after a failed upgrade. In order to create a GRP the database must be in ARCHIVELOG mode. The GRP can be created while the database is in OPEN mode, as follows:

SYS@PRIM1> CREATE RESTORE POINT grpt_bf_upgr GUARANTEE FLASHBACK DATABASE; 

After creating the GRP, verify status as follows:

SYS@PRIM1> SELECT * FROM V$RESTORE_POINT where name = 'GRPT_BF_UPGR';
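
If in doubt about the ARCHIVELOG prerequisite, the log mode can be checked beforehand with a standard query. Example as follows:

SYS@PRIM1> select log_mode, flashback_on from v$database;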

    

NOTE: After a successful upgrade the GRP should be deleted.

Analyze the Database to Upgrade with the Pre-Upgrade Information Tool

The pre-upgrade information tool is provided with the 12.1.0.2 software.  Run this tool to analyze the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 databases prior to upgrade.

Run Pre-Upgrade Information Tool

At this point the database is still running with 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 software. Connect to the database with your environment set to 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 and run the pre-upgrade information tool that is located in the 12.1.0.2 database home, as follows:
SYS@PRIM1> spool preupgrade_info.log
SYS@PRIM1> @/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/preupgrd.sql

During the pre-upgrade steps, the pre-upgrade tool (preupgrd.sql) will warn to set the CLUSTER_DATABASE parameter to FALSE. However when using DBUA this is done automatically so the warning can be ignored.

Handle obsolete and underscore parameters

Obsolete and underscore parameters will be identified by the pre-upgrade information tool.  During the upgrade, DBUA will remove the obsolete and underscore parameters from the primary database initialization parameter file.  Some underscore parameters that DBUA removes will be added back in later after DBUA completes the upgrade.

Data Guard only - DBUA will not affect parameters set on the standby, hence obsolete parameters and some underscore parameters must be removed manually if set. Typical values that need to be unset before starting the upgrade are as follows:
SYS@STBY1> alter system reset "_backup_ksfq_bufsz" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufcnt" scope=spfile;
SYS@STBY1> alter system reset "_lm_rcvr_hang_allow_time" scope=spfile;
SYS@STBY1> alter system reset "_kill_diagnostics_timeout" scope=spfile;
SYS@STBY1> alter system reset "_arch_comp_dbg_scan" scope=spfile;

Review pre-upgrade information tool output

Review the remaining output of the pre-upgrade information tool.  Take action on areas identified in the output.

Data Guard only - Synchronize Standby and Change the Standby Database to use the new 12.1.0.2 Database Home

Perform these steps only if there is a physical standby database associated with the database being upgraded.

As indicated in the prerequisites section above, the following must be true:
  • The standby database is running in real-time apply mode.
  • The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.

Flush all redo generated on the primary and disable the broker

To ensure all redo generated by the primary database running 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 is applied to the standby database running 11.2.0.2, 11.2.0.3, 11.2.0.4, or 12.1.0.1 all redo must be flushed from the primary to the standby.
  
NOTE: If there are cascaded standbys in your configuration then those cascaded standbys must follow the same rules as any other standby but should be shut down last and restarted in the new home first.
  
First, verify the standby database is running recovery in real-time apply.  Run the following query connected to the standby database.  If this query returns no rows, then real-time apply is not running. Example as follows:
SYS@STBY1> select dest_name from v$archive_dest_status
  where recovery_mode = 'MANAGED REAL TIME APPLY';

DEST_NAME
------------------------------
LOG_ARCHIVE_DEST_2
Shut down the primary database and restart just one instance in mount mode, as follows:
(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start instance -d PRIM -n dm01db01 -o mount
Verify the primary database has specified db_unique_name of the standby database in the log_archive_dest_n parameter setting, as follows:
SYS@PRIM1> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
-------------------------------------------------------------------------------
service="gih_stby" LGWR SYNC AFFIRM delay=0 optional compression=disable max_fa
ilure=0 max_connections=1 reopen=300 db_unique_name="STBY" net_timeout=30 valid
_for=(all_logfiles,primary_role)

Data Guard only - Disable Fast-Start Failover and Data Guard Broker

Disable Data Guard broker if it is configured, because the broker is incompatible with the primary and standby running different releases. If fast-start failover is configured, it must be disabled before the broker configuration is disabled. Example as follows:

DGMGRL> disable fast_start failover;
DGMGRL> disable configuration;

Also, disable the init.ora setting dg_broker_start on both the primary and the standby, as follows:

SYS@PRIM1> alter system set dg_broker_start = false;
SYS@STBY1> alter system set dg_broker_start = false;

Flush all redo to the standby database using the following command. The standby database db_unique_name in this example is 'STBY'. Monitor the alert.log of the standby database to verify the 'End-of-Redo' message appears. Example as follows:
SYS@PRIM1> alter system flush redo to 'STBY';

Wait until the 'End-of-Redo' message is confirmed on the standby, as in the following standby alert.log example:


End-Of-REDO archived log file has not been recovered
Incomplete recovery SCN:0:1371457 archive SCN:0:1391461
Physical Standby did not apply all the redo from the primary.
Tue May 20 14:01:49 2013
Media Recovery Log +RECO/prim/archivelog/2011_11_22/thread_2_seq_39.1090.767883831
Identified End-Of-Redo (move redo) for thread 2 sequence 39 at SCN 0x0.153b65
Resetting standby activation ID 338172592 (0x14281ab0)
Media Recovery Waiting for thread 2 sequence 40
Tue May 20 14:01:50 2013
Standby switchover readiness check: Checking whether recovery applied all redo..
Physical Standby applied all the redo from the primary.
Standby switchover readiness check: Checking whether recovery applied all redo..
Physical Standby applied all the redo from the primary.


Then shut down the primary database, as follows:

(oracle)$ srvctl stop database -d PRIM -o immediate

Shutdown the standby database and restart it in the 12.1.0.2 database home

Perform the following steps on the standby database server:

Shutdown the standby database, as follows:
(oracle)$ srvctl stop database -d stby
Copy required files from the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 database home to the 12.1.0.2 database home.
The following example shows the copying of the password file, but other files, such as init.ora files, may also need to be copied:
(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwstby* \
          /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs'
Edit standby environment files
  • Edit the standby database entry in /etc/oratab to point to the new 12.1.0.2 home.
  • On both the primary and standby database servers, ensure the tnsnames.ora entries are available to the database after it has been upgraded.  If using the default location for tnsnames.ora, $ORACLE_HOME/network/admin, then copy tnsnames.ora from the old home to the new home (see the example after this list).
  • If using Data Guard Broker to manage the configuration, then modify the broker-required SID_LIST entry in listener.ora on all nodes to point to the new ORACLE_HOME.  For example:
SID_LIST_LISTENER =
(SID_LIST =
  (SID_DESC =
  (GLOBAL_DBNAME=PRIM_dgmgrl)
  (SID_NAME = PRIM1)
  (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
  )
)
  • After this, reload the listener on all nodes, as follows:
(oracle)$ lsnrctl reload listener
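
For example, the tnsnames.ora copy mentioned in the list above can be performed with dcli (default network/admin locations assumed):

(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora \
          /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/'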
Update the OCR configuration data for the standby database by running the 'srvctl upgrade' command from the new database home as follows.
(oracle)$ srvctl upgrade database -d stby -o /u01/app/oracle/product/12.1.0.2/dbhome_1
Start the standby, as follows (add "-o mount" option for databases running Active Data Guard):
(oracle)$ srvctl start database -d stby

Start all primary instances in restricted mode

DBUA requires all RAC instances to be running from the current database home before starting the upgrade. To prevent an application from accidentally connecting to the primary database and performing work, causing the standby to fall behind, start the primary database in restricted mode, as follows:
(oracle)$ srvctl start database -d PRIM -o restrict
 

Upgrade the Database with Database Upgrade Assistant (DBUA)

NOTE: Before starting the Database Upgrade Assistant it is required to change the preference for 'concurrent statistics gathering' on the current release if the current setting is not 'FALSE'.

First, while still on the 11.2 release, obtain the current setting:

SQL> SELECT dbms_stats.get_prefs('CONCURRENT') from dual;

When 'concurrent statistics gathering' is not set to 'FALSE', change the value to 'FALSE' before the upgrade: 

BEGIN
DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');
END;
/


In both 11.2 and 12.1, concurrency is disabled by default for both manual and automatic statistics gathering. If the database requires changing this value back to the original setting, do this after the upgrade. 


Reference bug: 18406728 (unpublished)


NOTE: Before starting the Database Upgrade Assistant, all databases to be upgraded that have services configured with PRECONNECT as option for 'TAF Policy specification' should have these services stopped and disabled. Once a database upgrade is completed, services can be enabled and brought online again. Not disabling services having the PRECONNECT option for 'TAF Policy specification' will cause an upgrade to fail.

For each database being upgraded use the srvctl command to determine if a 'TAF policy specification' with 'PRECONNECT' is defined. Example as follows:

(oracle)$ srvctl config service -d <db_unique_name> | grep -i preconnect | wc -l
 

For each database being upgraded the output of the above command should be 0. When the output is not equal to 0, find the specific service(s) for which PRECONNECT is defined. Example as follows:
(oracle)$ srvctl config service -d <db_unique_name> -s <service_name>
 
The services found need to be stopped and disabled before proceeding with the upgrade. Example as follows:

(oracle)$ srvctl stop service -d <db_unique_name> -s "<service_name_list>"
(oracle)$ srvctl disable service -d <db_unique_name> -s "<service_name_list>"
 

Reference bug: 16539215
 
  
NOTE: Prior to running DBUA and upgrading the primary database, either remove DIAG_ADR_ENABLED=off from sqlnet.ora or set the sqlnet.ora parameter "DIAG_ADR_ENABLED" to ON (the default value). This is further described in <Document 1969684.1> "DBCA & DBUA Does not Identify Disk Groups When DIAG_ADR_ENABLED=OFF In sqlnet.ora".

 

  
Run DBUA to upgrade the primary database.  All database instances of the database you are upgrading must be brought up or DBUA may hang. If there is a standby database, the primary database should be left running in restricted mode, as performed in the previous step.
Oracle recommends removing the value for the init.ora parameter 'listener_networks' before starting DBUA. The value will be restored after running DBUA. Be sure to obtain the original value before removing it, as follows:

SYS@PRIM1> set lines 200
SYS@PRIM1> select name, value from v$parameter where name='listener_networks';

If the value for parameter listener_networks was set, then the value needs to be removed as follows:

SYS@PRIM1> alter system set listener_networks='' sid='*' scope=both;
Run DBUA from the new 12.1.0.2 ORACLE_HOME as follows:
(oracle)$ /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/dbua
Perform these mandatory actions on the DBUA screens:
  1. On "Select Operation" screen, select "Upgrade Oracle Database" and then click Next
  2. On "Select Database" screen, select the source Oracle home and then select the database to be upgraded, and then click Next.
  3. On "Prerequisite Checks" screen, be sure all validation checks are passed. If required make appropriate changes and re-run validation, then click Next
  4. On "Upgrade Options" screen
    • Set Upgrade Parallelism  to 4
    • When not already done earlier :
      • Select "recompile invalid objects during post upgrade" with the suggested value for parallelism 
      • Select "Upgrade Timezone Data" - if it applies
      • Select "Gather Statistics Before Upgrade"
    • Use suggested file locations for "Diagnostic Destination" and "Audit File Destination"
  5. On "Management Options" screen,  select the Enterprise Manager option applicable to your environment and fill in the details when required, then click Next. 
  6. On "Recovery Options" screen, select an option to recover the database in case of upgrade problems
    • When backups were created earlier skip this step by selecting "I have my own backup and restore strategy"
    • When a Guaranteed Restore Point was created earlier, select it at 'Use Available Guaranteed Restore Point'
  7. On "Summary" screen, verify information presented about the database upgrade, and then click Finish.
  8. On "Progress" screen, when the upgrade is complete, click OK.
    • Choose 'Continue' / 'Ignore error' when you see the following error message: "ORA-01917: user or role 'ANONYMOUS' does not exist" (unpublished bug 18482096)
    • SuperCluster only: This step may fail with "ORA-00205 - error in identifying control file".  Do not exit DBUA.  Review <Document 1908556.1>, section Oracle Solaris-specific workaround required for DBUA on SuperCluster for workaround.
  9. On "Upgrade Results" screen, review the upgrade result and investigate logfiles and any failures, and then click Close.
The database upgrade to 12.1.0.2 is complete.  There are additional actions to perform to complete configuration. 
   

Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'

The Oracle Upgrade Guide documents required and recommended tasks to perform after upgrading to 12.1.0.2.  Since the database was upgraded from 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1, some tasks do not apply.  The following list is the minimum set of tasks that should be reviewed for your environment.
  • Update environment variables
  • Upgrade the Recovery Catalog
  • Upgrade the Time Zone File Version when not already done earlier by DBUA.
  • For upgrades done by DBUA, tnsnames.ora entries for that particular database will be updated in the tnsnames.ora in the new home. However, entries not related to the database upgrade or entries related to a standby database will not be updated as part of any DBUA action. The synchronization of these entries needs to be done manually. ifile directives used in tnsnames.ora, for example in the grid home, need to be updated to point to the new database home.

Change Custom Scripts and Environment Variables to Reference the 12.1.0.2 Database Home

The primary database is upgraded and is now running from the 12.1.0.2 database home. Customized administration and login scripts that reference the database home ORACLE_HOME should be updated to refer to /u01/app/oracle/product/12.1.0.2/dbhome_1.

Underscore Initialization Parameters

On 8-socket machines such as X2-8/X3-8 only: verify NUMA settings:
SYS@PRIM1> select distinct(value) from gv$parameter where name = '_enable_NUMA_support';

When not set to TRUE for 8 socket machines such as X2-8 and X3-8, then set the value:

SYS@PRIM1> alter system set "_enable_NUMA_support"=TRUE sid='*' scope=spfile; 

Values for "_kill_diagnostics_timeout" and "_lm_rcvr_hang_allow_time" should not exist after the upgrade, run the following command to verify this:

SYS@PRIM1> select distinct(name), value
  2  from gv$parameter
  3  where name in ('_kill_diagnostics_timeout','_lm_rcvr_hang_allow_time');

NAME                        VALUE
--------------------------------------------------------------------------------
_lm_rcvr_hang_allow_time    140
_kill_diagnostics_timeout   140
 
The values need to be removed if they still exist. Example as follows:
SYS@PRIM1> alter system reset "_kill_diagnostics_timeout" sid='*' scope=spfile;
SYS@PRIM1> alter system reset "_lm_rcvr_hang_allow_time" sid='*' scope=spfile;
 

The value for the init.ora parameter 'listener_networks' removed before the upgrade needs to be restored as follows:

SYS@PRIM1> alter system set listener_networks='<original value>' sid='*' scope=both;


Data Guard only - DBUA will not affect parameters set on the standby, hence previously set underscore parameters will remain in place.

Solaris only: Review <Document 1908556.1>, section Oracle Solaris-specific post-upgrade steps required on Exadata and SuperCluster.

For any parameter set up in the spfile only, be sure to restart the databases to make the settings effective.
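
For example, to restart a database named PRIM as used in the examples above:

(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start database -d PRIM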

When required: run PSU Post-Installation Steps

If a PSU installation was performed before the database was upgraded, then post-installation steps may be required. See the PSU README for instructions (if any).
NOTE: Be sure to check that all objects are valid after running the post-installation steps. If invalid objects are found, run utlrp.sql until no rows are returned.
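
Example as follows:

SYS@PRIM1> @?/rdbms/admin/utlrp.sql
SYS@PRIM1> select owner, object_name, object_type from dba_objects where status = 'INVALID';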

Data Guard only - Enable Fast-Start Failover and Data Guard Broker


Update the static listener entry in the listener.ora file on all nodes where a standby instance can run so that it reflects the new ORACLE_HOME used, as follows:

SID_LIST_LISTENER =
   (SID_LIST =
      (SID_DESC =
         (GLOBAL_DBNAME = STBY_DGMGRL)
         (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
         (SID_NAME = STBY1)
      )
   )
If Data Guard broker and fast-start failover were disabled in a previous step, then re-enable them in SQL*Plus and DGMGRL, as follows:

SYS@PRIM1> alter system set dg_broker_start = true sid='*';
SYS@STBY1> alter system set dg_broker_start = true sid='*';
DGMGRL> enable configuration
DGMGRL> enable fast_start failover

 

Post-upgrade Steps

Here are the steps performed in this section.

  • Remove Guaranteed Restore Point if it still exists
  • DBFS only - Perform DBFS Required Updates
  • Run Exachk or HealthCheck
  • Verify nproc setting (CRS_LIMIT_NPROC) for Grid Infrastructure
  • Deinstall the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 Database and Grid Homes
  • Re-configure RMAN Media Management Library
  • Restore settings for concurrent statistics gathering

Remove Guaranteed Restore Point 

If the upgrade has been successful and a Guaranteed Restore Point (GRP) was created, it should be removed now as follows:

SYS@PRIM1> DROP RESTORE POINT GRPT_BF_UPGR;

DBFS only - Perform DBFS Required Updates

When the DBFS database is upgraded to 12.1.0.2 the following additional actions are required:

Obtain latest mount-dbfs.sh script from Document 1054431.1

Download the latest mount-dbfs.sh script attached to <Document 1054431.1>, place it in a (new) directory, and update the CRS resource:
(oracle)$ dcli -l oracle -g ~/dbs_group mkdir -p /home/oracle/dbfs/scripts
(oracle)$ dcli -l oracle -g ~/dbs_group -f /u01/app/oracle/patchdepot/mount-dbfs.sh -d /home/oracle/dbfs/scripts
(oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=/home/oracle/dbfs/scripts/mount-dbfs.sh" -unsupported

Edit mount-dbfs.sh script and Oracle Net files for the new 12.1.0.2 environment

Using the variable settings from the original mount-dbfs.sh script, edit the variable settings in the new mount-dbfs.sh script to match your environment.  The setting for variable ORACLE_HOME must be changed to match the 12.1.0.2 ORACLE_HOME (/u01/app/oracle/product/12.1.0.2/dbhome_1). 

Edit tnsnames.ora used for DBFS to change the directory referenced for the parameters PROGRAM and ORACLE_HOME to the new 12.1.0.2 database home.
fsdb.local =
(DESCRIPTION =
   (ADDRESS =
     (PROTOCOL=BEQ)
     (PROGRAM=/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle)
     (ARGV0=oraclefsdb1)
     (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
     (ENVS='ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1,ORACLE_SID=fsdb1')
   )
   (CONNECT_DATA=(SID=fsdb1))
)

If the location of Oracle Net files changed as a result of the upgrade, then change the setting of TNS_ADMIN in shell scripts and login files.
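For example, when scripts previously pointed TNS_ADMIN at the old home (the path shown is the new home's default location):

(oracle)$ export TNS_ADMIN=/u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin
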
If you are using wallet-based authentication (the Oracle Wallet stores the DBFS password), then recreate the symbolic link to /sbin/mount.dbfs and the library links by running the following commands:
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/dbfs_client /sbin/mount.dbfs
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libnnz12.so /usr/local/lib/libnnz12.so
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libclntsh.so.12.1 /usr/local/lib/libclntsh.so.12.1
(root)# dcli -l root -g ~/dbs_group ldconfig

Run Exachk or HealthCheck

For V2 or later: run Exachk again after the upgrade to validate software, hardware, firmware, and configuration best practices. Since Exachk is not certified on V1 hardware, HealthCheck needs to be used there to collect data regarding key software, hardware and firmware releases. Review <Document 1070954.1> for details. 
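
For example, from the directory where Exachk is staged (the -a option runs all checks; consult the Exachk documentation for current options):

(root)# ./exachk -a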

Verify nproc setting (CRS_LIMIT_NPROC) for Grid Infrastructure

Make sure the value of CRS_LIMIT_NPROC in the configuration file $NEW_GI_HOME/crs/install/s_crsconfig_<hostname>_env.txt matches the number of processes (nproc) your system requires. Especially for 8-socket machines (X*-8) running many databases, nproc often needs to be set higher than the file's default of 16384. Use the (hard) value of nproc configured in /etc/security/limits.conf as a guideline for configuring the value. See the Oracle Grid Infrastructure Installation Guide for more details. After updating the value on each node, restart the Grid Infrastructure for the changes to take effect.
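
For example, to compare the current values on a node (using the node's short hostname in the file name is an assumption; adjust as needed):

(root)# grep CRS_LIMIT_NPROC /u01/app/12.1.0.2/grid/crs/install/s_crsconfig_$(hostname -s)_env.txt
(root)# grep -i nproc /etc/security/limits.conf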

Optional: Deinstall the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 Database and Grid Homes

After the upgrade is complete and the database and application have been validated and in use for some time, the 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 database and Grid homes can be removed using the deinstall tool. The deinstall tool will perform the deinstallation on all database servers.  Refer to the Oracle Database Installation Guide for 11g or 12c for additional details of the deinstall tool. As described in the Oracle Database Installation Guide, the deinstall command removes standalone Oracle Database installations, Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) from your server, as well as Oracle Real Application Clusters (Oracle RAC) and Oracle Database client installations. The deinstallation tool (deinstall) is available in the installation media before installation, and is available in Oracle home directories after installation; it is located in the $ORACLE_HOME/deinstall directory. If you require the deinstallation tool to remove failed or incomplete installations, then it is available as a separate download from the Oracle Technology Network (OTN) website.

Before running the deinstall tool to remove the old database and grid homes, run deinstall with the -checkonly option to verify the actions it will perform.  Ensure the following:
  • There are no databases configured to use the home.
  • The home is not a configured Grid Infrastructure home.
  • ASM is not detected in the Oracle Home.
  • Run these commands on the first database server only
To deinstall Database and Grid infrastructure, the example steps for an 11.2 database are as follows:
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_LINUX_7of7.zip -d /u01/app/oracle/patchdepot

(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/oracle/product/11.2.0/dbhome_1/
(oracle)$ ./deinstall -home /u01/app/oracle/product/11.2.0/dbhome_1/

(root)# dcli -l root -g ~/dbs_group chmod -R 755 /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown -R oracle /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown oracle /u01/app/11.2.0

(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/11.2.0/grid/
(oracle)$ ./deinstall -home /u01/app/11.2.0/grid/

When not immediately deinstalling the previous Grid Infrastructure, rename the old Grid Home directory on all nodes such that operators cannot mistakenly execute crsctl commands from the wrong Grid Infrastructure Home.
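
For example, using the old Grid home path from the deinstall example above (only after verifying no processes still run from it):

(root)# dcli -l root -g ~/dbs_group mv /u01/app/11.2.0/grid /u01/app/11.2.0/grid_old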

Re-configure RMAN Media Management Library



Database installations that use an RMAN Media Management Library (MML) may require re-configuration of the Oracle Database Home after the upgrade. Most often, recreating a symbolic link to the vendor-provided MML is sufficient.
For specific details see the MML documentation.
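
For example, for a hypothetical vendor library location (the standard MML library name in the Oracle home is libobk.so; the vendor path below is an assumption):

(oracle)$ dcli -l oracle -g ~/dbs_group ln -sf /opt/vendor/mml/libobk.so \
          /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libobk.so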

Restore settings for concurrent statistics gathering

When the preference for concurrent statistics gathering was changed to FALSE earlier in the process (before DBUA was started), restore this setting now if required. Note that the 12.1 default is 'OFF'.
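
For example, assuming the original 11.2 setting was 'TRUE' (which roughly corresponds to 'ALL' in 12.1):

SYS@PRIM1> exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','ALL')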

Advance COMPATIBLE.ASM diskgroup attribute 

As a best practice and in order to create new databases with the password file stored in an ASM diskgroup, advance the COMPATIBLE.ASM parameter of your diskgroups to the Oracle ASM software version in use. For example:

ALTER DISKGROUP RECO SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
ALTER DISKGROUP DBFS_DG SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

 


 

Troubleshooting

 

Revision History

 


May 26 2017

  • Change css misscount back to default before proceeding the upgrade

Oct 7 2016

  • Adjustment of deinstall steps

May 18 2016

  • Added -unsupported flag for the crsctl modify command ref. <note 1918102.1>

Feb 1 2016

  • Added instructions for the issue described in <Document 1969684.1> "DBCA & DBUA Does not Identify Disk Groups When DIAG_ADR_ENABLED=OFF In sqlnet.ora"

Oct 5 2015

  • Added note:  If there are cascaded standbys in your configuration then those cascaded standbys must follow the same rules as any other standby but should be shut down last and restarted in the new home first.

Aug 5 2015

  • removed _file_size_increase_increment=2143289344 because it's only needed for <= 11.2.0.3 BP11

July 27 2015

  • Added instructions for ALERT: On Linux, When Installing or Upgrading GI to 12.1.0.2, the Root Script on Remote Nodes May Fail with "error while loading shared libraries" Messages (Doc ID 2032832.1)

June 12 2015

  • Added details on how to detect an IB listener

April 1 2015

  • Added additional instructions to validate rds is linked in and how to enable when not.

March 19 2015

  • Added steps to advance COMPATIBLE.ASM attribute

Nov 5 2014

  • Added example on how to apply patch before running rootupgrade

Oct 27 2014

  • Added note for unpublished bug 18482096

Oct 13 2014

  • Added workaround for unpublished bug 19764388 

Sept 17 2014

  • Verify nproc setting (CRS_LIMIT_NPROC) for Grid Infrastructure

Sept 2 2014

  • Added reference to <document 1568402.1>

Aug 29 2014

  • Grid Infrastructure upgrades on nodes with different length hostnames in the same cluster require fix for bug 19453778

  • Added check for memlock minimum value

Jul 29 2014

  • Added patch number and ref to 1683799.1

Jul 22 2014

  • Publish

Jul 2 2014

  • Initial release completed

 

 

 

References

<NOTE:1580584.1> - Setup Listener on Infiniband Network using both SDP and TCP Protocol
<NOTE:2032832.1> - ALERT: On Linux, When Installing or Upgrading GI to 12.1.0.2, the Root Script on Remote Nodes May Fail with "error while loading shared libraries" Messages
<NOTE:1991928.1> - Grid Infrastructure root script (root.sh etc) fails as remote node missing binaries
<NOTE:1683799.1> - 12.1.0.2 Patch Set - Availability and Known Issues
<NOTE:1568402.1> - FAQ: 12c Grid Infrastructure Management Repository (GIMR)
<NOTE:1589394.1> - How to Move/Recreate GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc)
<NOTE:2012106.1> - 12.1 Grid Infrastructure Upgrade From 11.2 Fails when running rootupgrade.sh on Node#1 With error CLSRSC-180
<NOTE:1910022.1> - CLSRSC-214: Failed To Start 'ohasd' Running rootupgrade.sh On the Last Node due to Existing Checkpoint File
<NOTE:1908556.1> - 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Solaris and Solaris Supercluster
<NOTE:1360798.1> - How to Complete Grid Infrastructure Configuration Assistant(Plug-in) if OUI is not Available
<NOTE:1918428.1> - TFA not Moved to New GI_HOME While Upgrading to 12.1.0.2

Attachments
This solution has no attachment