Asset ID: 1-79-1555059.1
Update Date: 2017-05-25
Solution Type: Predictive Self-Healing Sure Solution
1555059.1: 11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1 Grid Infrastructure and Database Upgrade on Exadata Database Machine
Related Items:
- Exadata X4-2 Hardware
- Exadata X3-2 Hardware
- Oracle Exadata Hardware
- Exadata X3-8 Hardware
- Oracle Exadata Storage Server Software
- Exadata Database Machine X2-2 Hardware
- Exadata Database Machine V2
Related Categories:
- PLA-Support>Eng Systems>Exadata/ODA/SSC>Oracle Exadata>DB: Exadata_EST
Applies to:
Exadata Database Machine X2-2 Hardware - Version All Versions and later
Exadata X4-2 Hardware - Version All Versions and later
Exadata X3-8 Hardware - Version All Versions and later
Oracle Exadata Storage Server Software - Version 11.2.1.2.0 and later
Exadata Database Machine V2 - Version All Versions and later
Information in this document applies to any platform.
Purpose
This document provides step-by-step instructions for upgrading Oracle Database and Oracle Grid Infrastructure from release 11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1 on Oracle Exadata Database Machine database servers running Oracle Linux or Oracle Solaris x86-64. This document does not cover upgrading Oracle Database and Oracle Grid Infrastructure to 12.1.0.1 on SPARC SuperCluster (SSC).
Details
Oracle Exadata Database Machine Maintenance
11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1 Upgrade
Overview
This document provides step-by-step instructions for upgrading Oracle Database and Oracle Grid Infrastructure from release 11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1 on Oracle Exadata Database Machine database servers running Oracle Linux or Oracle Solaris x86-64.
Updates and additional patches may be required for your existing installation before upgrading to Oracle Database 12c and Oracle Grid Infrastructure 12c. The note box below provides a summary of the software requirements to upgrade.
Summary of software requirements to upgrade to Oracle Database 12c and Oracle Grid Infrastructure 12c
- Current Oracle Database and Grid Infrastructure version must be 11.2.0.2, 11.2.0.3 or 11.2.0.4. Upgrades from 11.2.0.1 directly to 12.1.0.1 are not supported.
- Exadata Storage Server version 11.2.3.2.1 or later is required on Exadata Storage Servers and Database Servers
- Patch 16547261 is required on Exadata Storage Servers running 11.2.3.2.1
- Database servers running Oracle Linux Unbreakable Enterprise Kernel (UEK) 2.6.32-400.11.1.el5uek or 2.6.32-400.21.1.el5uek must be updated to 2.6.32-400.29.1.el5uek prior to 12.1.0.1 Grid Infrastructure installation
- Database servers running Oracle Solaris x86-64 must be at a minimum of 11.2.3.2.1 with SRU 8.5
- Fix for bug 12539000 is required to successfully upgrade. 11.2.0.2 BP12 and later, 11.2.0.3 and 11.2.0.4 already contain this fix. An interim patch must be installed for 11.2.0.2 Grid Infrastructure and Database installations running BP11 or earlier.
- Fix for bug 14639430 is required to properly roll back the Grid Infrastructure upgrade, if necessary (not required when Grid Infrastructure is already on 11.2.0.4).
- Fix for bug 13460353 is required to create a new 11g database after Grid Infrastructure is upgraded to 12c (not required when the database is already on 11.2.0.4).
- Database servers running Oracle Solaris x86-64 require <patch 17065496> to be applied to the new Grid Infrastructure before running rootupgrade.sh
- <patch 17272829> - GI PSU 12.1.0.1.1 (which includes DB PSU 12.1.0.1.1) for Oracle Linux and Oracle Solaris x86-64 Database Servers. To be applied:
- during the upgrade process, before running rootupgrade.sh on the Grid Infrastructure home, and
- after installing the new Database home, before upgrading the database.
NOTE: Do not take action yet to meet these requirements. Follow the detailed steps later in this document.
There are six main sections to the upgrade:
- Prepare the Existing Environment: The software releases and patches installed in the current environment must be at certain minimum levels before the upgrade to 12.1.0.1 can begin. Depending on the existing software installed, updates performed in this section may be done in a rolling manner or may require database-wide downtime. This section also covers recommendations for capturing baseline execution plans and for making sure database restores can be performed in case a rollback is needed, and details the required patches and where to download and stage them.
- Install and Upgrade Grid Infrastructure to 12.1.0.1: Grid Infrastructure upgrades from 11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1 are always performed out of place and in a RAC rolling manner.
- Install Database 12.1.0.1 Software: Database 12.1.0.1 software installation is performed into a new ORACLE_HOME directory. The installation is performed with no impact to running applications.
- Upgrade Database to 12.1.0.1: Database upgrades from 11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1 require database-wide downtime. Rolling upgrade with (Transient) Logical Standby or GoldenGate may be used to reduce database downtime, but is not covered in this document. For details on a transient logical rolling upgrade process see <Document 949322.1> Oracle 11g Data Guard: Database Rolling Upgrade Shell Script.
- Post-upgrade steps: Includes both required and optional steps to perform following the upgrade, such as updating DBFS, performing a general health check, re-configuring for Cloud Control, and cleaning up the old, unused home areas.
- Troubleshooting: Links to helpful troubleshooting documents.
Conventions
- The steps documented apply to 11.2.0.2, 11.2.0.3 and 11.2.0.4 upgrades to 12.1.0.1 unless specified differently
- New database home will be /u01/app/oracle/product/12.1.0.1/dbhome_1
- New grid home will be /u01/app/12.1.0.1/grid
- For recommended patches on top of 12.1.0.1, consult <Document 888828.1>.
Assumptions
- The database and grid software owner is oracle.
- The Oracle inventory group is oinstall.
- The files ~oracle/dbs_group and ~root/dbs_group exist and contain the names of all database servers (see the sketch after this list for one way to create them).
- Current database home is /u01/app/oracle/product/11.2.0/dbhome_1; this can be an 11.2.0.2, 11.2.0.3 or 11.2.0.4 database home
- Current grid home can be either an 11.2.0.2, 11.2.0.3 or an 11.2.0.4 Grid Infrastructure home
- The primary database to be upgraded is named PRIM.
- The standby database associated with primary database PRIM is named STBY.
- In addition to the Exadata-specific steps mentioned in this document, the user takes care of site-specific database upgrade steps
- All Exachk-recommended best practices, for example memory management (hugepages) and interconnect settings (not using HAIP), are implemented prior to the beginning of the upgrade.
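As a minimal sketch of creating the dbs_group files on a two-node cluster (the hostnames dbnode1 and dbnode2 are illustrative and must be replaced with the actual database server names):
(oracle)$ echo "dbnode1" > ~/dbs_group
(oracle)$ echo "dbnode2" >> ~/dbs_group
(root)# cp ~oracle/dbs_group ~root/dbs_group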
References
Oracle Documentation
My Oracle Support Documents
- <Document 888828.1> - Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Releases
- <Document 1537407.1> - Requirements and restrictions when using Oracle Database 12c on Exadata Database Machine
- <Document 1270094.1> - Exadata Critical Issues
- <Document 1070954.1> - Oracle Exadata Database Machine exachk or HealthCheck
- <Document 1054431.1> - Configuring DBFS on Oracle Database Machine
- <Document 361468.1> - HugePages on Oracle Linux 64-bit
- <Document 1284070.1> - Updating key software components on database hosts to match those on the cells
- <Document 1281913.1> - Root Script Fails if ORACLE_BASE is set to /opt/oracle
- <Document 1050908.1> - Troubleshoot Grid Infrastructure Startup Issues
- <Document 1410202.1> - How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed
- <Document 1520299.1> - Master Note For Oracle 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment
- <Document 1462240.1> - Oracle 12cR1 Upgrade Companion
- <Document 1515747.1> - Oracle Database 12c Release 1 (12.1) Upgrade New Features
- <Document 1503653.1> - Complete Checklist for Manual Upgrades to 12c R1
- <Document 1509653.1> - Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST
- <Document 1493645.1> - 12c Release1 DBUA : Understanding New Changes With All New 12.1 DBUA
- <Document 556610.1> - Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)
- <Document 1565082.1> - 12.1.0.1 Patch Set - Availability and Known Issues
Prepare the Existing Environment
Here are the steps performed in this section.
- Planning
- Review Database 12.1.0.1 Upgrade Prerequisites
- Download and distribution of the required Files
- Application of required patches
- Run Exachk or HealthCheck (V1)
- Validate Readiness for Oracle Clusterware upgrade
Planning
The following planning items are recommended:
Testing on non-production first
Upgrades or patches should always be applied first on test environments. Testing on non-production environments allows people to become familiar with the patching steps and learn how the patching will impact their system and applications. You need a series of carefully designed tests to validate all stages of the upgrade process. Executed rigorously and completed successfully, these tests ensure that the process of upgrading the production database is well understood, predictable, and successful. Perform as much testing as possible before upgrading the production database. Do not underestimate the importance of a complete and repeatable testing process. The types of tests to perform are the same whether you use Real Application Testing features like Database Replay or SQL Performance Analyzer, or perform testing manually.
There is an estimated downtime required of 30-90 minutes for the database upgrade. Additional downtime may be required for post-upgrade steps. This varies based on factors such as the amount of PL/SQL that requires recompilation.
Resource management plans are expected to be persistent after the upgrade.
SQL Plan Management
SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information. SQL plan management is a preventative mechanism that records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines composed of a set of existing plans known to be efficient. The SQL plan baselines are then used to preserve performance of corresponding SQL statements, regardless of changes occurring in the system. See the Oracle Database Performance Tuning Guide for more information about using SQL Plan Management.
Recoverability
The ultimate success of your upgrade depends greatly on the design and execution of an appropriate backup strategy. Even though the Database Home and Grid Infrastructure Home will be upgraded out of place and therefore make rollback easier, the database and the filesystem should both be backed up before committing the upgrade. See the Oracle Database Backup and Recovery User's Guide for information on database backups. A procedure for creating a snapshot based backup of the database server partitions is documented in chapter 7 of the Oracle Database Machine Owner's Guide, "Recovering a Linux-Based Database Server Using the Most-Recent Backup"; however, existing custom backup procedures can also be used.
NOTE: In addition to having a backup of the database, it is recommended to create a Guaranteed Restore Point (GRP). As long as the database COMPATIBLE parameter is not changed after creating the GRP, the database can easily be flashed back after a (failed) upgrade. The Database Upgrade Assistant (DBUA) also offers an option to create Guaranteed Restore Points or a database backup before proceeding with the upgrade. Flashing back to a Guaranteed Restore Point will back out all changes made in the database after the creation of the Guaranteed Restore Point. If transactions are made after this point, then alternative methods must be employed to restore the transactions. Refer to the section 'Performing a Flashback Database Operation' in the 'Database Backup and Recovery User's Guide' for more information on flashing back a database. After a flashback the database needs to be opened from the Oracle home in which the database was running before the upgrade.
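As an illustration, a guaranteed restore point can be created and verified with statements like the following, run as SYSDBA on the database to be upgraded (the restore point name is illustrative; the usual flashback logging prerequisites apply):
SQL> create restore point before_12101_upgrade guarantee flashback database;
SQL> select name, guarantee_flashback_database from v$restore_point;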
Account Access
During the upgrade procedure, access to the database SYS account and the operating system root and oracle users is required. Depending on what other components are upgraded, access to ASMSNMP and DBSNMP is also required. Passwords in the password file are expected to be the same for all instances.
Review 12.1.0.1 Upgrade Prerequisites
The following prerequisites must be in place prior to performing the steps in this document to upgrade Database or Grid Infrastructure to 12.1.0.1 without failures.
Sun Datacenter InfiniBand Switch 36 is running software release 1.3.3-2 or later.
- If you must update InfiniBand switch software to meet this requirement, then install the most recent release indicated in <Document 888828.1>.
- Does not apply to Exadata V1 Voltaire switches
Grid Infrastructure Software
For 11.2.0.2
- A fix for bug 12539000 is required. This patch is included in BP12 onwards. For BP7, BP8, BP9, BP10 and BP11, one-off patches are available to be applied on top of your bundle patch
- <patch 13374453> on top of BP7
- <patch 13538371> on top of BP8
- <patch 12914750> on top of BP9
- <patch 13404001> on top of BP10
- <patch 13404018> on top of BP11
- patch is included in BP12 for Linux (does not apply for Solaris)
- patch is included in BP13 and higher
NOTE: This patch should be applied also to the database homes
- A fix for (unpublished) bug 14639430. This fix is required only in case a Grid Infrastructure needs to be downgraded from 12.1.0.1 to 11.2.0.2.
- This fix needs to exist in the 11.2.0.2 Grid Infrastructure Home that is rolled back to, before the downgrade starts.
- The fix for 14639430 can be made available as a one-off for 11.2.0.2 installations. It is advised to request and apply the fix for your system before performing the upgrade.
For 11.2.0.3
- A fix for (unpublished) bug 14639430. This fix is required only in case a Grid Infrastructure needs to be downgraded from 12.1.0.1 to 11.2.0.3.
- This fix needs to exist in the 11.2.0.3 Grid Infrastructure Home that is rolled back to, before the downgrade starts.
- The fix will be included in Exadata Database Bundle Patches starting with 11.2.0.3 BP20 (via GIPSU 7). The recommended approach is to be on 11.2.0.3 BP20 or later before proceeding with the upgrade of the Grid Infrastructure.
- For earlier releases a one-off will be made available.
Database Software
For 11.2.0.2
- A fix for (unpublished) bug 13460353. This fix is only required for those who will be creating new 11.2.0.2 databases while running a 12.1.0.1 Grid infrastructure.
- This fix needs to be applied on top of the 11.2.0.2 database home before creating a new database.
- The fix for 13460353 will be made available as a one-off for 11.2.0.2 installations. When available, the patch needs to be applied on top of the 11.2.0.2 Database Home.
For 11.2.0.3
- A fix for (unpublished) bug 13460353. This fix is required only for those who will be creating new 11.2.0.3 databases while running a 12.1.0.1 Grid infrastructure.
- This fix needs to be applied on top of the 11.2.0.3 database home before creating a new database.
- Fix will be included in Exadata Database Bundle Patch 11.2.0.3 starting BP11 onwards.
- For earlier releases a one-off will be made available.
Generic requirements
- If you must update 11.2.0.2, 11.2.0.3 or 11.2.0.4 databases or 11.2.0.2, 11.2.0.3 or 11.2.0.4 Grid Infrastructure software to meet the patching requirements then install the most recent release indicated in <document 888828.1>.
- Apply all overlay and additional patches for the installed Bundle Patch when required. The list of required overlay and additional patches can be found in <Document 888828.1> and Exadata Critical Issues <Document 1270094.1>.
- Verify that one-off patches currently installed on top of 11.2.0.2, 11.2.0.3 or 11.2.0.4 are fixed in 12.1.0.1. Review the list of fixes provided with 12.1.0.1. For a list of provided fixes on top of 12.1.0.1 review the README.
- If you are unable to determine if a one-off patch is still required on top of 12.1.0.1 then contact Oracle Support.
Exadata Storage Server software release 11.2.3.2.1 on Exadata database servers and Exadata Storage Servers, with <patch 16547261> applied on the Exadata Storage Servers
- If you must update Exadata Storage Server software to meet this requirement then install the most recent release indicated in <Document 888828.1>. When using Exadata Storage Server release 11.2.3.2.1, be sure to also apply <patch 16547261> on the Exadata Storage Servers
- If your database servers currently run Oracle Linux 5.3 (kernel release 2.6.18-128), then in order to maintain the recommended practice that the OFED software release is the same on database servers and Exadata Storage Servers, your database servers must first be updated to run Oracle Linux 5.5 or later (kernel release 2.6.18-194 or later). Follow the steps in <Document 1284070.1> to perform this update. Note that updating Oracle Linux to 5.5 is not required but is highly recommended.
- Database servers running Oracle Solaris must be at SRU 8.5 or later
Exadata database servers running 11.2.3.2.1 on Linux must be running one of the following kernels prior to 12c Grid Infrastructure installation:
Currently installed Oracle Linux kernel on database servers | Action required before 12.1.0.1 Grid Infrastructure upgrade
2.6.32-400.11.1.el5uek (not supported for upgrades) | Update to 2.6.32-400.29.1.el5uek prior to 12.1.0.1 Grid Infrastructure installation
2.6.32-400.21.1.el5uek (not supported for upgrades) | Update to 2.6.32-400.29.1.el5uek prior to 12.1.0.1 Grid Infrastructure installation
2.6.18-308.24.1.0.1.el5 | No action needed
Do not place the new ORACLE_HOME under /opt/oracle.
- If this is done then see <Document 1281913.1> for additional steps required after software is installed.
Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:
- The standby database is running in real-time apply mode, as determined by querying v$archive_dest_status and verifying recovery_mode='MANAGED REAL TIME APPLY' for the local archive destination on the standby database, as sketched below. If there is a delay or real-time apply is not enabled, then see the Data Guard Concepts and Administration guide on how to configure these settings and remove the delay.
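A query sketch for checking this on the standby database (the destination id of the local archive destination may differ per configuration):
SQL> select dest_id, status, recovery_mode from v$archive_dest_status where status = 'VALID';
The local destination should report recovery_mode = 'MANAGED REAL TIME APPLY'.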
Download Required Files
Based on the requirements determined earlier, download the following software into a staging area you prefer on one of the database servers in your cluster. As an example we use /u01/app/oracle/patchdepot but you can specify your own.
Data Guard - If there is a standby database then stage the files on one of the database servers from standby site also.
In the examples that follow the Linux installation files are used; for Solaris installations, replace the mentioned files with their Solaris equivalents.
Files to be staged on first database server only:
- Oracle Database 12c Release 1 (12.1.0.1):
- "Oracle Database", typically the installation media for the database comes with two zip files.
- "Oracle Grid Infrastructure" comes with two zip files.
- For 11.2.0.2 (also see patch matrix later in this document)
- Fix for 'Synchronization problem in the IPC state' unpublished bug 12539000
- Fix for (unpublished) bug 14639430. This fix is recommended and enables downgrades from 12.1.0.1 to 11.2.0.2 when required.
- Fix for (unpublished) bug 13460353. This fix is optional and only required for those who will be creating new 11.2.0.2 databases while running a 12.1.0.1 Grid infrastructure
- For 11.2.0.3 (also see patch matrix later in this document)
- Fix for (unpublished) bug 14639430. This fix is recommended and enables downgrades from 12.1.0.1 to 11.2.0.3 when required.
- Fix for (unpublished) bug 13460353. This fix is optional and only required for those who will be creating new 11.2.0.3 databases (BP10 or earlier) while running a 12.1.0.1 Grid infrastructure
- Exadata Storage Server Software 11.2.3.2.1 (<patch 14522699>) + <patch 16547261>
- <patch 17065496>
- To be applied on the 12.1.0.1 Grid Infrastructure home for Solaris x86-64 database servers before running rootupgrade.sh
- <patch 17272829> - GI PSU 12.1.0.1.1 which includes DB PSU 12.1.0.1.1
- To be applied on the 12.1.0.1 Grid Infrastructure home for database servers running Oracle Linux and Oracle Solaris x86-64 before running rootupgrade.sh, and
- To be applied on the new database home before upgrading the database.
- Database servers running Oracle Linux Unbreakable Enterprise Kernel (UEK) 2.6.32-400.11.1.el5uek or 2.6.32-400.21.1.el5uek must be updated to 2.6.32-400.29.1.el5uek prior to 12.1.0.1 Grid Infrastructure installation due to <bug 16463033>. In order to perform the kernel update to 2.6.32-400.29.1.el5uek, obtain the 2.6.32-400.29.1.el5uek kernel and ofa RPMs (the full file names appear in the rpm commands later in this document) and place them in the '/u01/app/oracle/patchdepot' directory.
- When available: download the latest Bundle Patch for 12.1.0.1
- <Patch 6880880> - OPatch latest update for 11.2 and 12.1.0.1
- p6880880_112000_Linux-x86-64.zip (for 11.2 Oracle Homes)
- p6880880_121010_Linux-x86-64.zip (for 12.1 Oracle Homes)
Patch matrix for 12539000 - Required Patches when upgrading from 11.2.0.2 Linux and Solaris x86-64
Release** | Linux | Solaris
11.2.0.2 BP7 | <patch 13374453> | N/A
11.2.0.2 BP8 | <patch 13538371> | N/A
11.2.0.2 BP9 | <patch 12914750> | <patch 12914750>
11.2.0.2 BP10 | <patch 13404001> | <patch 13404001>
11.2.0.2 BP11 | <patch 13404018> | <patch 13404018>
11.2.0.2 BP12 | Included | Does not apply*
11.2.0.2 BP13 | Included | Included
11.2.0.2 BP14 | Included | Included
11.2.0.2 BP15** | Included | Included
* BP13 is the recommended minimum for Solaris
** Installations not on one of the listed Bundle Patch releases are recommended to either upgrade to a listed Bundle Patch release or request a fix for their release.
Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 12.1.0.1 to 11.2.0.2 Linux
11.2.0.2* | Linux Patch on top of BP | Via
11.2.0.2 BP16, BP17, BP18, BP19 | Request via Oracle Support | PSE 16985348 on top of BLR 16971708 for GIPSU 6
11.2.0.2 BP20, BP21, BP22 | Request via Oracle Support | PSE 16984061 on top of BLR 16857104 for GIPSU 10
* fixes required for bundle patches not listed need to be filed separately
Patch matrix for 14639430 - patch requirements for rolling-back upgrades from 12.1.0.1 to 11.2.0.3 Linux
11.2.0.3* | Linux Patch on top of BP | Via
11.2.0.3 BP8, BP9, BP10 | <patch 14639430> - p14639430_112033_Linux-x86-64.zip | PSE 16973196 on top of BLR 16973093 for GIPSU 3
11.2.0.3 BP11, BP12, BP13 | <patch 14639430> - p14639430_112034_Linux-x86-64.zip | PSE 16985475 on top of BLR 16984137 for GIPSU 4
11.2.0.3 BP14, BP15, BP16 | <patch 14639430> - p14639430_112035_Linux-x86-64.zip | PSE 16985466 on top of BLR 16984125 for GIPSU 5
11.2.0.3 BP17, BP18, BP19 | <patch 14639430> - p14639430_112036_Linux-x86-64.zip | PSE 16984077 on top of BLR 16836125 for GIPSU 6
11.2.0.3 BP20 onwards | Included in BP | Included via GIPSU 7
* fixes required for bundle patches not listed need to be filed separately
Patch matrix for 13460353 - Optional patch for those who will be creating 11.2.0.2 databases while running a 12.1.0.1 Grid infrastructure
11.2.0.2* | Linux Patch on top of BP | Via
11.2.0.2 BP16, BP17, BP18, BP19 | <patch 13460353> | PSE 14691763 on top of BLR 14618174 for GIPSU 6
11.2.0.2 BP20, BP21 | Request via Oracle Support | PSE 16984340 on top of BLR 16984295 for GIPSU 10
* fixes required for bundle patches not listed need to be filed separately
Patch matrix for 13460353 - Optional patch for those who will be creating 11.2.0.3 databases while running a 12.1.0.1 Grid infrastructure
11.2.0.3* | Linux Patch on top of BP | Via
11.2.0.3 BP11 onwards | Included in BP | Included via GIPSU 4
* fixes required for bundle patches not listed need to be filed separately
Apply patches and updates where required before the upgrade proceeds
Update OPatch in existing 11.2 Grid Home and existing 11.2 Database Homes on All Database Servers
If the latest OPatch release is not in place and (bundle) patches need to be applied on existing 11.2.0.2, 11.2.0.3 or 11.2.0.4 Grid Infrastructure and Database homes before upgrading, then first update OPatch to the latest release. Execute the following command from one database server to distribute OPatch to a staging area on all database servers and then to the Oracle Homes.
(oracle)$ dcli -l oracle -g ~/dbs_group -f p6880880_112000_Linux-x86-64.zip -d /u01/app/oracle/patchdepot
Note: OPatch 12.1 can also be distributed but not yet copied to the Oracle homes at this stage
Data Guard - If there is a standby database, then run these commands on the standby database servers also, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/11.2.0/grid \
/u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/oracle/product/11.2.0/dbhome_1 \
/u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip
For Exadata Storage Servers and Database Servers on releases earlier than Exadata 11.2.3.2.1:
- Upgrade to Exadata 11.2.3.2.1, if required. See <patch 14522699> README.
- For Exadata Storage Servers: apply <patch 16547261>. See the patch README for instructions.
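To confirm the currently installed Exadata software release before applying updates, the imageinfo utility can be run on the database servers and storage servers, for example (this sketch assumes a cell_group file listing all storage servers exists alongside dbs_group):
(root)# dcli -g ~/dbs_group -l root imageinfo -ver
(root)# dcli -g ~/cell_group -l root imageinfo -ver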
For database servers running Exadata release 11.2.3.2.1 with the Unbreakable Enterprise Kernel (UEK): when on kernel 2.6.32-400.11.1.el5uek or 2.6.32-400.21.1.el5uek, install the following kernel and packages:
(root)# cd /u01/app/oracle/patchdepot
(root)# rpm -ivh kernel-uek-firmware-2.6.32-400.29.1.el5uek.noarch.rpm kernel-uek-2.6.32-400.29.1.el5uek.x86_64.rpm kernel-uek-devel-2.6.32-400.29.1.el5uek.x86_64.rpm ofa-2.6.32-400.29.1.el5uek-1.5.1-4.0.58.x86_64.rpm
Preparing... ########################################### [100%]
1:kernel-uek-firmware ########################################### [ 25%]
2:kernel-uek ########################################### [ 50%]
3:kernel-uek-devel ########################################### [ 75%]
4:ofa-2.6.32-400.29.1.el5########################################### [100%]
(root)# rpm -Uvh --force --nodeps kernel-uek-doc-2.6.32-400.29.1.el5uek.noarch.rpm kernel-uek-debuginfo-common-2.6.32-400.29.1.el5uek.x86_64.rpm kernel-uek-debuginfo-2.6.32-400.29.1.el5uek.x86_64.rpm
Preparing... ########################################### [100%]
1:kernel-uek-debuginfo-co########################################### [ 33%]
2:kernel-uek-doc ########################################### [ 67%]
3:kernel-uek-debuginfo ########################################### [100%]
After installation of kernel and packages, reboot the node to make the new kernel the active kernel.
NOTE: the packages can be installed either from local disk or via http
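After the reboot, the running kernel can be verified on all database servers, for example:
(root)# dcli -g ~/dbs_group -l root uname -r
Each node should report 2.6.32-400.29.1.el5uek.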
For 11.2.0.2 Grid Infrastructure and Database:
- Apply Patch for bug 12539000 ('Synchronization problem in the IPC state' ) to both Grid and Database home
- If applicable, apply patch for (unpublished) bug 14639430. This patch needs to be applied on the source (11.2.0.2 Grid Infrastructure)
- If applicable, apply patch for (unpublished) bug 13460353 - Only for those who will be creating 11.2.0.2 databases while running a 12.1.0.1 Grid infrastructure. This patch needs to be applied on the 11.2.0.2 database home
For 11.2.0.3 Grid Infrastructure and Database:
- If applicable, apply Exadata Bundle Patch or patch for 11.2.0.3 including fix for (unpublished) bug 14639430. This patch needs to be applied on the source (11.2.0.3 Grid Infrastructure)
- If applicable, apply Exadata Bundle Patch or patch for 11.2.0.3 including fix for (unpublished) bug 13460353 - Only for those who will be creating 11.2.0.3 databases while running a 12.1.0.1 Grid infrastructure. This patch needs to be applied on the 11.2.0.3 database home
NOTE: In order to create new 11.2.0.2 and 11.2.0.3 (bundle patches BP11 and earlier) databases while running 12.1.0.1 Grid Infrastructure, a workaround is available. This workaround eliminates the need to apply the fix for (unpublished) bug 13460353. The following commands need to be executed as root from the 12.1.0.1 Grid Infrastructure home before creating 11.2.0.2 or 11.2.0.3 (on releases earlier than BP11) databases:
(root)# crsctl modify type ora.database.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=3.2"
(root)# crsctl modify type ora.service.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=2.2"
- Data Guard - If there is a standby database, then run these commands on the standby database servers also.
- Follow the patch READMEs for patching instructions.
- For Solaris installations change to /tmp as working directory before applying the patch
Run Exachk or HealthCheck
For Exadata Database Machines V2 or later: run the latest release of Exachk to validate software, hardware, firmware, and configuration best practices. Resolve any issues identified by Exachk before proceeding with the upgrade. Since Exachk is not certified on V1, HealthCheck needs to be used to collect data regarding key software, hardware, and firmware releases. Review <Document 1070954.1> for details.
NOTE: It is recommended to run Exachk before and after the upgrade. When doing this, Exachk may report recommendations for the compatible settings for database, ASM, and disk group. At some point it is recommended to change the compatible settings, but a conservative approach is advised, because changing compatible settings can make it impossible to downgrade or roll back later. It is therefore recommended to revisit the compatible parameters some time after the upgrade has finished, when there is no chance of a downgrade and the system has been running stably for a longer period.
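As an illustration, Exachk is typically run as the oracle user from the directory where it was unpacked; the staging path below is illustrative and the exact invocation and options are described in <Document 1070954.1>:
(oracle)$ cd /u01/app/oracle/patchdepot/exachk
(oracle)$ ./exachk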
Validate Readiness for Oracle Clusterware upgrade using CVU and Exachk
Use the cluster verification utility (CVU) to validate readiness for the Oracle Clusterware upgrade. Review the Oracle Grid Infrastructure Installation Guide, appendix B 'How to Upgrade to Oracle Grid Infrastructure 12c Release 1', section 'Using CVU to Validate Readiness for Oracle Clusterware Upgrades'. Unzip the Clusterware installation zip file to the staging area. Before executing CVU as the owner of the Grid Infrastructure home, unset ORACLE_HOME, ORACLE_BASE and ORACLE_SID.
An example of running the pre-upgrade check follows:
(oracle)$ ./runcluvfy.sh stage -pre crsinst -upgrade \
-n node1,node2 \
-rolling -src_crshome /u01/app/11.2.0.3/grid \
-dest_crshome /u01/app/12.1.0.1/grid \
-dest_version 12.1.0.1.0 -fixup -verbose
For Linux:
- OS Kernel parameter checks may fail (unpublished bug 16777952) with errors like CRS-10051 and/or PRVG-1201. The same message may also be shown by OUI during Grid Infrastructure installation
- If the checks fail a possible cause can be the lack of read permissions on /etc/sysctl.conf for others. This can be solved by running 'chmod o+r /etc/sysctl.conf' as root on all compute nodes. Remember to undo this change when the upgrade is finished.
NOTE: On Solaris x86-64 systems the cluster verification utility (CVU) may fail and report error messages PRVG-1538, PRVG-1522 and PRVG-1521 due to unpublished bug 17346500. If this happens and Exachk didn't flag the same alert, then these messages can be ignored.
Also use Exachk's 'upgrade module' to check for additional upgrade best practices and last-minute patch requirements. See the Exachk documentation via <Document 1070954.1> for more information.
Early stage pre-upgrade check: analyze your databases to be upgraded with the Pre-Upgrade Information Tool
At this stage it is recommended to do a first run of the pre-upgrade information tool so there is time to anticipate possible required steps before upgrading. The pre-upgrade tool is provided with the 12.1.0.1 software, but since that is not installed at this moment the tool can also be downloaded via <document 884522.1> - How to Download and Run Oracle's Database Pre-Upgrade Utility. Run this tool to analyze the 11.2.0.2, 11.2.0.3 or 11.2.0.4 databases prior to the upgrade.
During the pre-upgrade steps, the pre-upgrade tool (preupgrd.sql) will warn to set the CLUSTER_DATABASE parameter to FALSE. However when using DBUA this is done automatically so the warning can be ignored.
Data Guard - If there is a standby database, then run the tool on one of the nodes of the standby database cluster also.
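A sketch of a first run of the pre-upgrade tool, assuming preupgrd.sql and its companion utluppkg.sql have been staged in /u01/app/oracle/patchdepot/preupgrade as described in <document 884522.1>; set the environment for the database to be analyzed first:
(oracle)$ cd /u01/app/oracle/patchdepot/preupgrade
(oracle)$ sqlplus / as sysdba
SQL> @preupgrd.sql
Review the generated pre-upgrade log and fixup scripts before continuing.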
Install and Upgrade Grid Infrastructure to 12.1.0.1
The instructions in this section will perform the Grid Infrastructure software installation and upgrade to 12.1.0.1. The Grid Infrastructure upgrade is performed in a RAC rolling fashion; this procedure does not require downtime.
Data Guard - If there is a standby database, then run these commands on the standby system separately to upgrade the standby system Grid Infrastructure. The standby Grid Infrastructure upgrade can be performed in parallel with the primary if desired. However, the Grid Infrastructure home always needs to be at the same or a later level than the Database home. Therefore the Grid Infrastructure home must be upgraded before a database upgrade can be performed.
Here are the steps performed in this section.
- Validate hugepage configuration for new ASM SGA requirements
- Create a snapshot based backup of the database server partitions
- Create the new GI_HOME directory where 12.1.0.1 will be installed
- Prepare installation software
- Perform 12.1.0.1 Grid Infrastructure software installation and upgrade using OUI
- Apply latest Bundle Patch contents to Grid Infrastructure Home using 'opatch napply' (when available)
- Change Custom Scripts and Environment Variables to Reference the 12.1.0.1 Grid Home
Validate hugepage configuration for new ASM SGA requirements
As part of the Grid Infrastructure upgrade, the ASM SGA will be increased to a value of 2G. The new setting will require additional hugepages from the operating system. Make sure at least 1300 hugepages are configured for ASM to start during the upgrade process with the new value. If less than 1300 hugepages are configured the upgrade will fail. The extra hugepages should be added to the number of hugepages required for the existing databases to run. If not enough hugepages are configured to hold both ASM and databases (database configured to use hugepages only) the rootupgrade.sh script may not finish successfully. See <document 361468.1> and <document 401749.1> for more details on hugepages.
NOTE: Existing 11.2 ASM instances report the number of hugepages allocated in the alert.log. Subtract this value from 1300 to find out how many additional hugepages need to be added to the existing operating system configuration.
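The currently configured hugepages can be checked on each database server, for example:
(root)# dcli -g ~/dbs_group -l root grep -i HugePages /proc/meminfo
Compare HugePages_Total against the sum of 1300 (for ASM) and the hugepages required by the databases, and adjust vm.nr_hugepages as described in <document 361468.1> if needed.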
Create a snapshot based backup of the database server partitions
Even though the Grid Infrastructure is upgraded out of place, it is recommended to create a filesystem backup of the database server before proceeding.
Steps for creating a snapshot based backup of the database server partitions are documented in chapter 7 of the Oracle Database Machine Owner's Guide, "Recovering a Linux-Based Database Server Using the Most-Recent Backup". Existing custom backup procedures can also be used as an alternative.
Create the new Grid Infrastructure (GI_HOME) directory where 12.1.0.1 will be installed
In this document the new Grid Infrastructure home /u01/app/12.1.0.1/grid is used in all examples. It is recommended that the new Grid Infrastructure home NOT be located under /opt/oracle. If it is, then review <Document 1281913.1>. To create the new Grid Infrastructure home, run the following commands from the first database server. You will need to substitute your Grid Infrastructure owner username and Oracle inventory group name in place of oracle and oinstall, respectively.
(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/12.1.0.1/grid/
(root)# dcli -g ~/dbs_group -l root chown oracle /u01/app/12.1.0.1/grid
(root)# dcli -g ~/dbs_group -l root chgrp -R oinstall /u01/app/12.1.0.1
Prepare installation software
Unzip all 12.1.0.1 software. Run the following command on the database server where the software is staged. An example for the Grid Infrastructure follows, but the same needs to be done for the database software.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_grid_1of2.zip \
-d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_grid_2of2.zip \
-d /u01/app/oracle/patchdepot
Change css misscount setting back to default before upgrading
Before proceeding with the upgrade, the css misscount setting should be set back to the default (of 30 seconds). The following command needs to be executed as oracle from the 11.2 Grid Infrastructure home before proceeding with the upgrade:
(oracle)$ crsctl unset css misscount
Perform the 12.1.0.1 Grid Infrastructure software installation and upgrade using OUI
Perform these instructions as the Grid Infrastructure software owner (which is oracle in this document) to install the 12.1.0.1 Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 11.2.0.2, 11.2.0.3 or 11.2.0.4 to 12.1.0.1. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion. The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 12.1.0.1 Grid Infrastructure Home the active Grid Infrastructure Home. For systems with a standby database in place, this step can be performed before, at the same time as, or after installation of Grid Infrastructure on the primary system.
To downgrade Oracle Clusterware back to the previous release: See "B.14 Downgrading Oracle Clusterware After an Upgrade" in the Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
The OUI installation log is located at /u01/app/oraInventory/logs.
For OUI installations or execution of critical scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.
Set the environment then run the installer, as follows:
(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/grid
(oracle)$ ./runInstaller
Starting Oracle Universal Installer...
Perform the exact steps as described below on the installer screens***:
- Step 1 of 10: On "Software Updates" screen select "Skip software updates", and then click Next.
- Step 2 of 10: On "Installation Options" screen, select "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management", and then click Next.
- Step 3 of 10: On "Select Product Languages" screen, select "Languages", and then click Next.
- Step 4 of 16: On "Node Selection" screen, verify all database nodes are shown and selected, and then click Next.
- Use the installer to configure ssh equiv when required but be sure to exit the installer after ssh equiv is setup.
- Restart the installer and skip ssh equiv setup in the next try. This is due to unpublished bug 16657039, which is fixed in 12.1.0.2
- There is no "Step 5 of 16"
- Step 6 of 16: On "Grid Infrastructure Management Repository Option" screen, choose "No" and do not configure the Grid Infrastructure Management Repository
- Step 7 of 13: On "Privileged Operating System Groups" screen, verify group names and change if desired, and then click Next.
- Step 8 of 13: On "Specify Installation Location" screen, choose "Oracle Base" and change the software location. Recommended software location: /u01/app/12.1.0.1/grid
- Step 9 of 13: On "Root script execution" screen, do not check the box. Keep root execution in your own control because a relink of the oracle binary is required first
- Step 10 of 13: On "Prerequisite Checks" screen, resolve any failed checks or warnings before continuing.
- For Solaris x86-64: Error messages PRVG-1538, PRVG-1522 and PRVG-1521 may be seen due to unpublished bug 17346500. If this happens and Exachk didn't flag the same alert, then these messages can be ignored
- Step 11 of 13: On "Summary" screen, verify the plan and click 'Install' to start the installation (recommended to save a response file for the next time)
Before executing the last two steps in the installation wizard additional steps are required:
- Updating OPatch and applying a Bundle Patch on top of the 12.1.0.1 Grid Infrastructure installation
- Relinking the 12.1.0.1 Grid Infrastructure oracle binary with RDS
*** For upgrades on Solaris x86_64 additional patches and workarounds need to be applied before and after running rootupgrade.sh:
- Before rootupgrade.sh: Apply patch for bug 17065496 (HAIP STARTUP) in the new GI HOME
- Before rootupgrade.sh: Apply latest PSU using opatch napply. Note - if application of the PSU fails due to conflicts, apply the PSU after the upgrade
- Before rootupgrade.sh: Backup shrept.lst in the new <oracle_grid>/network/admin directory. Secure the file in /home/oracle
- Run rootupgrade.sh
- After rootupgrade.sh: Restore $HOME/shrept.lst in the new <oracle_grid>/network/admin directory
- After rootupgrade.sh: After successfully running rootupgrade, de-install patch for bug 17065496 (HAIP STARTUP) from the new GI HOME
- After rootupgrade.sh: If application of the PSU failed earlier, then apply the PSU using opatch napply at this moment.
- After rootupgrade.sh: Relink the new GI HOME with rds option (see below)
Install OPatch 12.1
Now that the 12.1.0.1 Grid Home directories are available, the 12.1 release of OPatch can also be installed/updated:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/12.1.0.1/grid \
/u01/app/oracle/patchdepot/p6880880_121010_Linux-x86-64.zip
Apply recommended patches to the Grid Infrastructure before running rootupgrade.sh
The following sections describe which patches must be applied before running rootupgrade.sh.
Patch 17065496
For Solaris x86-64: follow "Case 5" from the README: "Patching a Software Only GI Home Installation or Before the GI Home Is Configured" to apply the patch for bug 17065496.
The following warnings can be ignored:
ld: warning: output object option (-o, --output) appears more than once, first setting taken
ld: warning: symbol '_init' not found, but .init section exists - possible link-edit without using the compiler driver
ld: warning: symbol '_fini' not found, but .fini section exists - possible link-edit without using the compiler driver
NOTE: Because of unpublished bug 17381303 - 121011GIPSU: OPATCH NAPPLY PATCH 17027533 FAILED WITH ACTIVE FILES - it's currently advised to apply the GIPSU after the upgrade process. This is after the Oracle Universal Installer exits. Follow the GIPSU README instructions to apply the patch.
Apply the latest Oracle Grid Infrastructure System Patch (GIPSU) to the Grid Infrastructure home
For Oracle Linux and Solaris x86-64 database servers, apply the latest GIPSU before running rootupgrade.sh. In this specific example we use <patch 17272829>, which was available at the time of writing. It is always recommended to use the latest GIPSU.
Follow "Case 5: Patching a Software Only GI Home Installation or Before the GI Home Is Configured" in <document 1591616.1> for the folllowing step to be executed on each node:
<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <PATH_TO_PATCH_DIRECTORY>/17272829/17077442
<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <PATH_TO_PATCH_DIRECTORY>/17272829/17303297
<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <PATH_TO_PATCH_DIRECTORY>/17272829/17027533
NOTE: No other steps other than updating OPatch to the latest release and running 'opatch napply' as specified by the README should be done at this stage. Don't execute the commands rootcrs.pl -unlock and rootcrs.pl -patch for example.
NOTE: See <Document 1410202.1> for more information on how to apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is executed. Note that <Document 1410202.1> talks about Patch Set Updates (PSU) while Exadata only has bundle patches (BP)
Relink oracle binary with RDS before running the rootupgrade script
For Linux: as owner of the Grid Infrastructure Home on all nodes execute the steps as follows before running rootupgrade.sh:
(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/12.1.0.1/grid \
make -C /u01/app/12.1.0.1/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
For Solaris: as owner of the Grid Infrastructure Home on all nodes execute the steps as follows before running rootupgrade.sh:
(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/12.1.0.1/grid \
make /u01/app/12.1.0.1/grid/rdbms/lib -f \
/u01/app/12.1.0.1/grid/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
Verify the oracle binary is relinked with the proper option and the following command returns 'rds':
(oracle)$ dcli -l oracle -g ~/dbs_group /u01/app/12.1.0.1/grid/bin/skgxpinfo
Verify the checksum of the oracle binary is the same across all the compute nodes on the cluster as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group md5sum /u01/app/12.1.0.1/grid/bin/oracle
If the md5sums of the binaries do not match, determine whether the relink has failed. It is possible for the oracle binaries to be correctly relinked with RDS and still have different md5sums; this is not a problem.
If available apply 12.1.0.1 Bundle Patch Overlay Patches to the Grid Infrastructure Home as Specified in Document 888828.1
Review <Document 888828.1> to identify and apply patches that must be installed on top of the Bundle Patch just installed.
Apply Customer-specific 12.1.0.1 One-Off Patches to the Grid Infrastructure Home
If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now.
Change SGA memory settings for ASM
As sysasm, adjust sga_max_size and sga_target to a value of 2G. The values will become active at the next restart of the ASM instances.
SYS@+ASM1> alter system set sga_max_size = 2G scope=spfile sid='*';
SYS@+ASM1> alter system set sga_target = 2G scope=spfile sid='*';
Verify values for memory_target, memory_max_target and use_large_pages
Values should be as follows:
SYS@+ASM1> select sid, name, value from v$spparameter
where name in
('memory_target','memory_max_target','use_large_pages');
SID NAME VALUE
------ ------------------------- -----------------------------------
* use_large_pages TRUE
* memory_target 0
* memory_max_target
When the values are not as expected, change them as follows:
SYS@+ASM1> alter system set memory_target=0 sid='*' scope=spfile;
SYS@+ASM1> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;
SYS@+ASM1> alter system reset memory_max_target sid='*' scope=spfile;
SYS@+ASM1> alter system set use_large_pages=true sid='*' scope=spfile /* 11.2.0.2 and later (Linux only) */;
NOTE: Increasing the SGA size will cause more hugepages to be used by ASM at the next instance startup. At this point it is assumed at least 1300 hugepages are configured for ASM to start properly during the upgrade process. Hugepages required for databases remain the same and need to be added to the value of 1300.
Stop and disable OC4J in the 11.2 Grid Infrastructure home before running rootupgrade.sh
(oracle)$ srvctl stop oc4j
(oracle)$ srvctl disable oc4j
Stop non-default VIPs (for example VIPs on a backup LAN)
In order to prevent the issue in (unpublished) bug 18095460, it is required to stop all VIPs other than the standard VIPs before running rootupgrade. Typically these are VIPs configured on backup or InfiniBand networks (Exalogic). Perform the action as root from the existing Grid Infrastructure home on the first node for all custom-added VIPs, for example:
(root)# crsctl stop res ora.<vipname>.vip -f
Checks to do before executing rootupgrade.sh on each database server
Before running rootupgrade.sh verify no active rebalance is running
Query gv$asm_operation to verify no active rebalance is running. A rebalance is running when the result of the following query is not equal to zero:
SYS@+ASM1> select count(*) from gv$asm_operation;
COUNT(*)
----------
0
Execute rootupgrade.sh on each database server. As indicated on the "Execute Configuration scripts" screen, the script must be executed on the local node first. For Solaris installations change to /tmp as working directory before executing the scripts. The rootupgrade script shuts down the earlier release Grid Infrastructure installation, updates configuration details, and starts the new Grid Infrastructure installation.
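As a reference, the script is run as root from the new Grid Infrastructure home on each node, for example:
(root)# /u01/app/12.1.0.1/grid/rootupgrade.sh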
When rootupgrade fails it is recommended to check the following output first to get more details:
- output of rootupgrade script itself
- ASM alert.log
- /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/rootcrs_<node_name>.log
After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node. Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
- First node rootupgrade.sh will complete with this output
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2013/06/20 02:23:03 CLSRSC-363: User ignored prerequisites during installation
ASM upgrade has started on first node.
OLR initialization - successful
2013/06/20 02:25:32 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2013/06/20 02:28:39 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2013/06/20 02:29:50 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
- Last node rootupgrade.sh will complete with this output
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2013/06/20 03:37:09 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2013/06/20 03:39:34 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2013/06/20 03:42:38 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 12.1.0.1.0
2013/06/20 04:02:47 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Continuation of the 12.1.0.1 installation wizard:
- Step 12 of 13: On "Execute configuration scripts" screen, when done press "OK"
- Step 13 of 13: On "Finish" screen, click "Close"
Set _asm_resyncCkpt to 0
Immediately after running rootupgrade.sh, each ASM instance (also on the standby site) requires setting _asm_resyncCkpt to 0 due to (unpublished) bug 17273253. Run the following command as sysasm:
SYS@+ASM1> alter system set "_asm_resyncCkpt"=0 sid='*' scope=both;
Perform an extra check on the status of the Grid Infrastructure post upgrade by executing the following command from one of the compute nodes:
(root)# /u01/app/12.1.0.1/grid/bin/crsctl check cluster -all
The above command should show an online status for Cluster Ready Services, Cluster Synchronization Services and Event Manager on all nodes in the cluster, for example:
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
If the cluster is not showing an online status for any of the components on any of the nodes, the issue needs to be investigated before continuing. For troubleshooting, see the MOS notes mentioned in the reference section of this note.
Change Custom Scripts and Environment Variables to Reference the 12.1.0.1 Grid Home
Customized administration and login scripts, static instance registrations in listener.ora files, and CRS resources that reference the previous Grid Infrastructure Home should be updated to refer to the new Grid Infrastructure home '/u01/app/12.1.0.1/grid'.
For DBFS configurations it is recommended to review the chapter "Steps to Perform If Grid Home or Database Home Changes" in <Document 1054431.1> - "Configuring DBFS on Oracle Database Machine", as the shell script used to mount the DBFS filesystem may be located in the original Grid Infrastructure home and needs to be relocated. The following steps are performed to update the location of the CRS resource script that mounts DBFS:
Modify the dbfs_mount cluster resource
Update the mount-dbfs.sh script and the ACTION_SCRIPT attribute of the dbfs_mount cluster resource to refer to the new location of mount-dbfs.sh, as sketched below. See 'Post-Upgrade Steps'.
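A hedged sketch of updating the resource attribute after mount-dbfs.sh has been copied to a location outside the old Grid Infrastructure home (the resource name dbfs_mount and the new script path are illustrative; follow <Document 1054431.1> for the exact procedure):
(oracle)$ /u01/app/12.1.0.1/grid/bin/crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=/u01/app/oracle/admin/dbfs/mount-dbfs.sh"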
Install Database 12.1.0.1 Software
The steps in this section will perform the Database software installation of 12.1.0.1 into a new directory.
This section only installs Database 12.1.0.1 software into a new directory. It does not affect running databases hence all the steps below can be done without downtime.
Data Guard - If there is a separate system running a standby database and that system already has Grid Infrastructure upgraded to 12.1.0.1, then run these steps on the standby system separately to install the Database 12.1.0.1 software. The steps in this section can be performed in any of the following ways:
- Install Database 12.1.0.1 software on the primary system first then the standby system.
- Install Database 12.1.0.1 software on the standby system first then the primary system.
- Install Database 12.1.0.1 software on both the primary and standby systems simultaneously.
Here are the steps performed in this section.
- Prepare Installation Software
- Perform 12.1.0.1 Database Software Installation with OUI
- Relink Oracle Executable in Database Home with RDS
- Update OPatch in New Grid Home and New Database Home on All Database Servers
- When available: Install Latest 12.1.0.1 Bundle Patch available for your operating system - Do Not Perform Post-Installation Steps
- When available: Apply 12.1.0.1 Bundle Patch Overlay Patches as Specified in Document 888828.1
- When available: Apply Customer-specific 12.1.0.1 One-Off Patches
Prepare Installation Software
Unzip the 12.1.0.1 database software. Run the following command on the database server where the software is staged.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_database_1of2.zip -d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_database_2of2.zip -d /u01/app/oracle/patchdepot
Create the new Oracle DB Home directory on all database server nodes
(oracle)$ dcli -l oracle -g ~/dbs_group mkdir -p /u01/app/oracle/product/12.1.0.1/dbhome_1
Perform 12.1.0.1 Database Software Installation with the Oracle Universal Installer (OUI)
The OUI installation log is located at /u01/app/oraInventory/logs.
Set the environment then run the installer, as follows:
Note: For OUI installations or execution of important scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.
(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/database
(oracle)$ ./runInstaller
Perform the exact actions as described below on the installer screens:
- On "Configure Security Updates" screen, fill in required fields, and then click Next.
- On "Software Updates" screen, select 'Skip software updates', and then click Next.
- On "Select Installation Option" screen, select 'Install database software only', and then click Next.
- On "Grid Installation Option", select "Oracle Real Application Clusters database installation" and click Next
- On "Node Selection" screen, verify all database servers in your cluster are present in the list and are selected, and then click Next.
- On "Select Product Languages" screen, select 'Languages', and then click Next.
- On "Select Database Edition", select 'Enterprise Edition', click Select Options to choose components to install, and then click Next.
- On "Installation Location", enter /u01/app/oracle as Oracle base and /u01/app/oracle/product/12.1.0.1/dbhome_1 as the Software Location for the Database home, and then click Next.
- On "Operating System Groups" screen, verify group names, and then click Next.
- On "Prerequisite Checks" screen, verify there are no failed checks or warnings
- On "Summary" screen, verify information presented about installation, and then click Install.
- On "Execute Configuration scripts screen, execute root.sh on each database server as instructed, and then click OK
- for Solaris installations change to /tmp as working directory before executing the script
- On "Finish screen", click Close.
Relink Oracle Executable in Database Home with RDS
Run the following command as the oracle home software owner from the first database server. This command will perform the relink on all database servers.
Relink for Linux, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1 \
make -C /u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
Relink for Solaris, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1 \
make -C /u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/lib -f \
/u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
Verify the oracle binary is relinked with the proper option and the following command returns 'rds':
(oracle)$ dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/12.1.0.1/dbhome_1/bin/skgxpinfo
Verify the checksum of the oracle binary across all the compute nodes on the cluster as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group md5sum /u01/app/oracle/product/12.1.0.1/dbhome_1/bin/oracle
If the md5sums of the binaries do not match, determine whether the relink failed on any node. Note that binaries correctly relinked with RDS can still have different md5sums across nodes; by itself this is not a problem.
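For example, to summarize how many nodes report each checksum, the dcli output can be post-processed as follows (a minimal sketch that assumes the standard dcli output format of 'hostname: checksum path'):
(oracle)$ dcli -l oracle -g ~/dbs_group md5sum /u01/app/oracle/product/12.1.0.1/dbhome_1/bin/oracle | \
          awk '{print $2}' | sort | uniq -c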
Update/Install OPatch in New Database Home on All Database Servers
Run the following command from one database server to update OPatch in the Database home on all database servers:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq \
-d /u01/app/oracle/product/12.1.0.1/dbhome_1 \
/u01/app/oracle/patchdepot/p6880880_121010_Linux-x86-64.zip
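To confirm OPatch was updated on every node, the version can be checked, for example, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/12.1.0.1/dbhome_1/OPatch/opatch version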
Install the latest 12.1.0.1 GI PSU (which includes the DB PSU) to the Database Home when available - Do Not Perform Post-Installation Steps
The example commands describe how to apply a PSU. At the time of writing, 12.1.0.1 PSU <patch 17272829> was used as an example. Review <Document 888828.1> for the latest release information and most recent patches. Applying the latest PSU requires the latest OPatch to be installed. Below is an example; always consult the specific patch README for current instructions.
Stage the patch
Unzip the patch on all database servers, as follows:
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/oracle/patchdepot \
/u01/app/oracle/patchdepot/p17272829_121010_Linux-x86-64.zip
Create OCM response file if required
If you do not have the OCM response file, then run the following command on each database server.
(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ /u01/app/oracle/product/12.1.0.1/dbhome_1/OPatch/ocm/bin/emocmrsp
Patch 12.1.0.1 database home
Run the following command as the root user only on the local node. Note that no databases are running out of this home yet. For Solaris installations, change to /tmp as the working directory before applying the patch. It is recommended to run this command from a new session to make sure no settings from previous steps remain. Example as follows:
(root)# export PATH=$PATH:/u01/app/oracle/product/12.1.0.1/dbhome_1/OPatch
(root)# opatchauto apply <PATH_TO_PATCH_DIRECTORY> -oh <Comma separated Oracle home paths> -ocmrf <ocm response file>
Skip patch post-installation steps
Do not perform patch post-installation. Patch post-installation steps will be run after the database is upgraded.
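Optionally, confirm that the binary patch is now registered in the Database home inventory on every node before continuing. This is a minimal sketch using the example patch number 17272829 from above:
(oracle)$ dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.1.0.1/dbhome_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/12.1.0.1/dbhome_1 | grep -i 17272829"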
When available: Apply 12.1.0.1 Patch Overlay Patches to the Database Home as Specified in Document 888828.1
Review <Document 888828.1> to identify and apply patches that must be installed on top of the current Bundle Patch in the new Database home. If there are SQL commands that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.
Apply Customer-specific 12.1.0.1 One-Off Patches
If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now. If there are SQL statements that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.
Upgrade Database to 12.1.0.1
The commands in this section will perform the database upgrade to 12.1.0.1.
Data Guard - Unless otherwise indicated, run these steps only on the primary database.
Here are the steps performed in this section.
- Backing up the database and creating a Guaranteed Restore Point
- Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
- Data Guard only - Synchronize Standby and Switch to 12.1.0.1
- Data Guard only - Disable Fast-Start Failover and Data Guard Broker
- Before starting the Database Upgrade Assistant, stop and disable all services that use PRECONNECT as the 'TAF Policy specification' option
- Upgrade the Database with Database Upgrade Assistant (DBUA)
- Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'
- Change Custom Scripts and Environment Variables to Reference the 12.1.0.1 Database Home
- Add Underscore Initialization Parameters Back
- When available: run 12.1.0.1 Bundle Patch Post-Installation Steps
- Data Guard only - Enable Fast-Start Failover and Data Guard Broker
The database will be inaccessible to users and applications during the DBUA upgrade steps. A rough estimate of the actual application downtime is 30-90 minutes, but the required downtime depends on factors such as the amount of PL/SQL that needs recompilation. Note that it is not a requirement that all databases are upgraded to the latest release; it is possible to have multiple releases of Oracle Database homes running on the same system. The benefit of having multiple Oracle homes is that multiple database releases can run side by side. The disadvantages are that more planned maintenance is required in terms of patching, that older database releases may in time fall outside the regular patching lifecycle policy, and that multiple Oracle homes on the same node require more disk space.
Backing up the database and creating a Guaranteed Restore Point
If not done already, take a full backup of the database before proceeding with the upgrade. In addition to this full backup, it is recommended to create a Guaranteed Restore Point (GRP). As long as the database COMPATIBLE parameter is not changed after creating the GRP, the database can be flashed back after a failed upgrade. In order to create a GRP the database must be running in archivelog mode. The GRP can be created while the database is in OPEN mode, as follows:
SYS@PRIM1> CREATE RESTORE POINT grpt_bf_upgr GUARANTEE FLASHBACK DATABASE;
After creating the GRP, verify status as follows:
SYS@PRIM1> SELECT * FROM V$RESTORE_POINT where name = 'GRPT_BF_UPGR';
NOTE: After a successful upgrade the GRP should be deleted.
Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
The pre-upgrade information tool is provided with the 12.1.0.1 software. Run this tool to analyze the 11.2.0.2, 11.2.0.3 or 11.2.0.4 databases prior to upgrade.
Run Pre-Upgrade Information Tool
At this point the database is still running with 11.2.0.2, 11.2.0.3 or 11.2.0.4 software. Connect to the database with your environment set to 11.2.0.2, 11.2.0.3 or 11.2.0.4 and run the pre-upgrade information tool that is located in the 12.1.0.1 database home, as follows:
SYS@PRIM1> spool preupgrade_info.log
SYS@PRIM1> @/u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/admin/preupgrd.sql
During the pre-upgrade steps, the pre-upgrade tool (preupgrd.sql) will warn to set the CLUSTER_DATABASE parameter to FALSE. However, when using DBUA this is done automatically, so the warning can be ignored.
Handle obsolete and underscore parameters
Obsolete and underscore parameters will be identified by the pre-upgrade information tool. During the upgrade, DBUA will remove the obsolete and underscore parameters from the primary database initialization parameter file. Some underscore parameters that DBUA removes will be added back after DBUA completes the upgrade.
Data Guard only - DBUA will not affect parameters set on the standby, hence obsolete parameters and some underscore parameters must be removed manually if set. Typical values that need to be unset before starting the upgrade are as follows:
SYS@STBY1> alter system reset cell_partition_large_extents scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufsz" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufcnt" scope=spfile;
SYS@STBY1> alter system reset "_lm_rcvr_hang_allow_time" scope=spfile;
SYS@STBY1> alter system reset "_kill_diagnostics_timeout" scope=spfile;
SYS@STBY1> alter system reset "_arch_comp_dbg_scan" scope=spfile;
Review pre-upgrade information tool output
Review the remaining output of the pre-upgrade information tool. Take action on areas identified in the output.
Data Guard only - Synchronize Standby and Change the Standby Database to use the new 12.1.0.1 Database Home
Perform these steps only if there is a physical standby database associated with the database being upgraded.
As indicated in the prerequisites section above, the following must be true:
- The standby database is running in real-time apply mode.
- The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.
Flush all redo generated on the primary and disable the broker
To ensure all redo generated by the primary database running 11.2.0.2, 11.2.0.3 or 11.2.0.4 is applied to the standby database running 11.2.0.2, 11.2.0.3 or 11.2.0.4, all redo must be flushed from the primary to the standby.
First, verify the standby database is running recovery in real-time apply. Run the following query connected to the standby database. If this query returns no rows, then real-time apply is not running. Example as follows:
SYS@STBY1> select dest_name from v$archive_dest_status
where recovery_mode = 'MANAGED REAL TIME APPLY';
DEST_NAME
------------------------------
LOG_ARCHIVE_DEST_2
Shutdown the primary database and restart just one instance in mount mode, as follows:
(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start instance -d PRIM -n dm01db01 -o mount
Verify the primary database has specified db_unique_name of the standby database in the log_archive_dest_n parameter setting, as follows:
SYS@PRIM1> select value from v$parameter where name = 'log_archive_dest_2';
VALUE
-------------------------------------------------------------------------------
service="gih_stby" LGWR SYNC AFFIRM delay=0 optional compression=disable max_fa
ilure=0 max_connections=1 reopen=300 db_unique_name="STBY" net_timeout=30 valid
_for=(all_logfiles,primary_role)
Data Guard only - Disable Fast-Start Failover and Data Guard Broker
Disable the Data Guard broker if it is configured, because the broker is incompatible with the primary and standby running different releases. If fast-start failover is configured, it must be disabled before the broker configuration is disabled, as follows:
DGMGRL> disable fast_start failover;
DGMGRL> disable configuration;
Also, disable the init.ora setting dg_broker_start in both primary and standby as follows:
SYS@PRIM1> alter system set dg_broker_start = false;
SYS@STBY1> alter system set dg_broker_start = false;
Flush all redo to the standby database using the following command. The standby database db_unique_name in this example is 'STBY'. Monitor the alert log of the standby database for the 'End-of-Redo' message. Example as follows:
SYS@PRIM1> alter system flush redo to 'STBY';
Wait until 'End-of-Redo' is confirmed in the standby alert log, as follows:
End-Of-REDO archived log file has not been recovered
Incomplete recovery SCN:0:1371457 archive SCN:0:1391461
Physical Standby did not apply all the redo from the primary.
Tue May 20 14:01:49 2013
Media Recovery Log +RECO/prim/archivelog/2011_11_22/thread_2_seq_39.1090.767883831
Identified End-Of-Redo (move redo) for thread 2 sequence 39 at SCN 0x0.153b65
Resetting standby activation ID 338172592 (0x14281ab0)
Media Recovery Waiting for thread 2 sequence 40
Tue May 20 14:01:50 2013
Standby switchover readiness check: Checking whether recovery applied all redo..
Physical Standby applied all the redo from the primary.
Standby switchover readiness check: Checking whether recovery applied all redo..
Physical Standby applied all the redo from the primary.
Then shutdown the primary database, as follows:
(oracle)$ srvctl stop database -d PRIM -o immediate
Shutdown the standby database and restart it in the 12.1.0.1 database home
Perform the following steps on the standby database server:
Shutdown the standby database, as follows:
(oracle)$ srvctl stop database -d stby
Copy required files from the 11.2.0.2, 11.2.0.3 or 11.2.0.4 home to the 12.1.0.1 database home.
The following example shows copying of the password file, but other files, such as init.ora files, may also need to be copied:
(oracle)$ dcli -l oracle -g ~/dbs_group \
'cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwstby* \
/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs'
Edit standby environment files
- Edit the standby database entry in /etc/oratab (Linux) or /var/opt/oracle/oratab (Solaris) to point to the new 12.1.0.1 home.
- On both the primary and standby database servers, ensure the tnsnames.ora entries are available to the database after it has been upgraded. If using the default location for tnsnames.ora, $ORACLE_HOME/network/admin, then copy tnsnames.ora from the old home to the new home (an example command is shown after this list).
- If using Data Guard Broker to manage the configuration, then modify the SID_LIST listener.ora entry required by the broker on all nodes to point to the new ORACLE_HOME. For example:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME=PRIM_dgmgrl)
(SID_NAME = PRIM1)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/dbhome_1)
)
)
- After this, reload the listener on all nodes, as follows:
(oracle)$ lsnrctl reload listener
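As referenced above, copying tnsnames.ora from the old home to the new home on all nodes could, for example, be done as follows. This is a minimal sketch, assuming the old database home is /u01/app/oracle/product/11.2.0/dbhome_1 (as used elsewhere in this note) and the default $ORACLE_HOME/network/admin location:
(oracle)$ dcli -l oracle -g ~/dbs_group 'cp -p /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora /u01/app/oracle/product/12.1.0.1/dbhome_1/network/admin/'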
Update the OCR configuration data for the standby database by running the 'srvctl upgrade' command from the new database home as follows.
(oracle)$ srvctl upgrade database -d stby -o /u01/app/oracle/product/12.1.0.1/dbhome_1
Start the standby, as follows:
(oracle)$ srvctl start database -d stby
Start all primary instances in restricted mode
DBUA requires all RAC instances to be running from the current database home before starting the upgrade. To prevent an application from accidentally connecting to the primary database and performing work causing the standby to fall behind, startup the primary database in restricted mode, as follows:
(oracle)$ srvctl start database -d PRIM -o restrict
Upgrade the Database with Database Upgrade Assistant (DBUA)
NOTE: Before starting the Database Upgrade Assistant it is required to change the preference for 'concurrent statistics gathering' on the current release if the current setting is not 'FALSE'.
First, while still on the 11.2 release, obtain the current setting:
SQL> SELECT dbms_stats.get_prefs('CONCURRENT') from dual;
When 'concurrent statistics gathering' is not set to 'FALSE', change the value to 'FALSE' before the upgrade ('FALSE' is mapped to 'OFF' in 12.1):
BEGIN
DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');
END;
/
In both 11.2 and 12.1, concurrency is disabled by default for both manual and automatic statistics gathering. If the database requires changing this value back to the original setting, do so after the upgrade.
Reference bug: 18406728 (unpublished)
NOTE:
Before starting the Database Upgrade Assistant, all databases that will be upgraded and have services configured with PRECONNECT as the 'TAF Policy specification' option must have those services stopped and disabled. Once a database upgrade is completed, the services can be enabled and brought online again. Not disabling services that use the PRECONNECT option for 'TAF Policy specification' will cause the upgrade to fail.
For each database being upgraded, use the srvctl command to determine whether a 'TAF Policy specification' with 'PRECONNECT' is defined. Example as follows:
(oracle)$ srvctl config service -d <db_unique_name> | grep -i preconnect | wc -l
For each database being upgraded the output of the above command should be 0. When the output is not equal to 0, find the specific service(s) for which PRECONNECT is defined. Example as follows:
(oracle)$ srvctl config service -d <db_unique_name> -s <service_name>
The services found need to be stopped and disabled before proceeding with the upgrade. Example as follows:
(oracle)$ srvctl stop service -d <db_unique_name> -s "<service_name_list>"
(oracle)$ srvctl disable service -d <db_unique_name> -s "<service_name_list>"
Reference bug: 16539215
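To check every database registered in the cluster in one pass, a small loop can be used. This is a minimal sketch, run as the database software owner with the current 11.2 environment set so that srvctl is in the PATH:
(oracle)$ for db in $(srvctl config database); do echo "$db: $(srvctl config service -d $db | grep -ic preconnect) PRECONNECT service(s)"; done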
Run DBUA to upgrade the primary database. All database instances of the database you are upgrading must be brought up or DBUA may hang. If there is a standby database, the primary database should be left running in restricted mode, as performed in the previous step.
Oracle recommends removing the value for the init.ora parameter 'listener_networks' before starting DBUA. The value will be restored after running DBUA. Be sure to obtain the original value before removing, as follows:
SYS@PRIM1> set lines 200
SYS@PRIM1> select name, value from v$parameter where name='listener_networks';
If the value for parameter listener_networks was set, then the value needs to be removed as follows:
SYS@PRIM1> alter system set listener_networks='' sid='*' scope=both;
Run DBUA from the new 12.1.0.1 ORACLE_HOME as follows:
(oracle)$ /u01/app/oracle/product/12.1.0.1/dbhome_1/bin/dbua
Perform these mandatory actions on the DBUA screens:
- On "Select Operation" screen, select "Upgrade Oracle Database" and then click Next
- On "Select Database" screen, select the source Oracle home and then select the database to be upgraded, and then click Next.
- On "Prerequisite Checks" screen, be sure all validation checks are passed. If required make appropriate changes and re-run validation, then click Next
- On "Upgrade Options" screen
- Set Upgrade Parallelism to 4
- When not already done earlier:
- Select "recompile invalid objects during post upgrade" with the suggested value for parallelism
- Select "Upgrade Timezone Data"
- Select "Gather Statistics Before Upgrade"
- Select "Set User Tablespaces Read Only During Upgrade", then click Next.
- Use suggested file locations for "Diagnostic Destination" and "Audit File Destination"
- On "Management Options" screen, select the Enterprise Manager option applicable to your environment and fill in the details when required, then click Next.
- On "Recovery Options" screen, select an option to recover the database in case of upgrade problems
- when backups or Guaranteed Restore Points were created earlier skip this step by selecting "I have my own backup and restore strategy"
- On Summary screen, verify information presented about the database upgrade, and then click Finish.
- On Progress screen, when the upgrade is complete, click OK.
- You may see an XDB component upgrade error "ORA-01917: user or role ANONYMOUS does not exist" due to unpublished bug 17036501. This message can be ignored.
- On Upgrade Results screen, review the upgrade result and investigate logfiles and any failures, and then click Close.
The database upgrade to 12.1.0.1 is complete. There are additional actions to perform to complete configuration.
Review and perform steps in Oracle Upgrade Guide, Chapter 4 'Post-Upgrade Tasks for Oracle Database'
The Oracle Upgrade Guide documents required and recommended tasks to perform after upgrading to 12.1.0.1. Since the database was upgraded from 11.2.0.2, 11.2.0.3 or 11.2.0.4, some tasks do not apply. The following list is the minimum set of tasks that should be reviewed for your environment.
- Update Environment Variables
- Upgrade the Recovery Catalog
- Upgrade the Time Zone File Version when not already done earlier by DBUA.
- For upgrades done by DBUA, tnsnames.ora entries for that particular database will be updated in the tnsnames.ora in the new home. However, entries not related to the upgraded database, or entries related to a standby database, will not be updated by DBUA; these entries need to be synchronized manually. IFILE directives used in tnsnames.ora, for example in the grid home, need to be updated to point to the new database home.
Change Custom Scripts and Environment Variables to Reference the 12.1.0.1 Database Home
The primary database is upgraded and is now running from the 12.1.0.1 database home. Customized administration and login scripts that reference database home ORACLE_HOME should be updated to refer to /u01/app/oracle/product/12.1.0.1/dbhome_1.
Underscore Initialization Parameters
During the upgrade, DBUA removes obsolete and underscore initialization parameters. One new underscore parameter needs to be checked and added if it is not set.
Run the following command to verify this parameter:
SYS@PRIM1> select distinct(value) from gv$parameter where name = '_file_size_increase_increment';
VALUE
--------------------------------------------------------------------------------
2143289344
If the value for "_file_size_increase_increment" is missing or not set to the expected value of 2143289344, set it to the right value. Example as follows:
SYS@PRIM1> alter system set "_file_size_increase_increment"=2143289344 sid='*' scope=both;
For X2-8/X3-8 only: verify NUMA settings:
SYS@PRIM1> select distinct(value) from gv$parameter where name = '_enable_NUMA_support';
When not set to TRUE on X2-8 and X3-8 systems, set the value:
SYS@PRIM1> alter system set "_enable_NUMA_support"=TRUE sid='*' scope=spfile;
Values for "_kill_diagnostics_timeout" and "_lm_rcvr_hang_allow_time" should not exist after the upgrade, run the following command to verify this:
SYS@PRIM1> select distinct(name), value
SYS@PRIM1> from gv$parameter
SYS@PRIM1> where name in ('_kill_diagnostics_timeout','_lm_rcvr_hang_allow_time');
NAME VALUE
--------------------------------------------------------------------------------
_lm_rcvr_hang_allow_time 140
_kill_diagnostics_timeout 140
The values need to be removed if they still exist. Example as follows:
SYS@PRIM1> alter system reset "_kill_diagnostics_timeout" sid='*' scope=spfile;
SYS@PRIM1> alter system reset "_lm_rcvr_hang_allow_time" sid='*' scope=spfile;
The value for the init.ora parameter 'listener_networks' that was removed before the upgrade needs to be restored, as follows:
SYS@PRIM1> alter system set listener_networks='<original value>' sid='*' scope=both;
Data Guard only - DBUA does not affect parameters set on the standby, so previously set underscore parameters remain in place there. However, the values that were reset in a previous step need to be restored now. For standard installations, only the following underscore parameter needs to be added back on a standby database:
SYS@STBY1> alter system set "_file_size_increase_increment"=2143289344 sid='*' scope=both;
For any parameter set in the spfile only, be sure to restart the database(s) to make the settings effective.
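For example, a full (non-rolling) restart of the primary database could look like this. This is a minimal sketch reusing the PRIM db_unique_name from the examples above; plan it within a maintenance window:
(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start database -d PRIM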
When required: run PSU Post-Installation Steps
If a PSU installation was performed before the database was upgraded then post-installation steps may be required. See the PSU README for instructions (if any).
NOTE: Be sure to check that all objects are valid after running the post-installation steps. If invalid objects are found, run utlrp.sql and re-check until no invalid objects remain.
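The exact post-installation procedure is defined by the PSU README. For 12c PSUs this typically means loading the SQL changes with datapatch and then recompiling invalid objects; the following is a minimal sketch under that assumption, run once from one node with the environment set to the upgraded 12.1.0.1 database home:
(oracle)$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1
(oracle)$ export ORACLE_SID=PRIM1                  # local instance name used in the examples above
(oracle)$ $ORACLE_HOME/OPatch/datapatch -verbose   # only if the PSU README calls for datapatch
(oracle)$ $ORACLE_HOME/bin/sqlplus / as sysdba @$ORACLE_HOME/rdbms/admin/utlrp.sql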
Data Guard only - Enable Fast-Start Failover and Data Guard Broker
Update the static listener entry in the listener.ora file on all nodes where a standby instance can run so that it reflects the new ORACLE_HOME used, as follows:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = STBY_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/dbhome_1)
(SID_NAME = STBY1)
)
)
If Data Guard broker and fast-start failover were disabled in a previous step, then re-enable them using SQL*Plus and DGMGRL, as follows:
SYS@PRIM1> alter system set dg_broker_start = true sid='*';
SYS@STBY1> alter system set dg_broker_start = true sid='*';
DGMGRL> enable configuration
DGMGRL> enable fast_start failover
Post-upgrade Steps
Here are the steps performed in this section.
- Remove the Guaranteed Restore Point if it still exists
- DBFS only - Perform DBFS Required Updates
- Run Exachk or HealthCheck
- Re-configure the Enterprise Manager Cloud Control 12c targets in the EM Console to use the new Oracle Homes
- Deinstall the 11.2.0.2, 11.2.0.3 or 11.2.0.4 Database and Grid Homes
- Re-configure RMAN Media Management Library
- Restore settings for concurrent statistics gathering
Remove Guaranteed Restore Point
If the upgrade has been successful and a Guaranteed Restore Point (GRP) was created, it should be removed now as follows:
SYS@PRIM1> DROP RESTORE POINT GRPT_BF_UPGR;
DBFS only - Perform DBFS Required Updates
When the DBFS database is upgraded to 12.1.0.1, the following additional actions are required:
Obtain the latest mount-dbfs.sh script from Document 1054431.1
The latest mount-dbfs.sh script is attached to <Document 1054431.1>. Download it, place it in a (new) directory, and update the CRS resource:
(oracle)$ dcli -l oracle -g ~/dbs_group mkdir -p /home/oracle/dbfs/scripts
(oracle)$ dcli -l oracle -g ~/dbs_group -f /u01/app/oracle/patchdepot/mount-dbfs.sh -d /home/oracle/dbfs/scripts
(oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=/home/oracle/dbfs/scripts/mount-dbfs.sh"
Edit mount-dbfs.sh script and Oracle Net files for the new 12.1.0.1 environment
Using the variable settings from the original mount-dbfs.sh script, edit the variable settings in the new mount-dbfs.sh script to match your environment. The setting for variable ORACLE_HOME must be changed to match the 12.1.0.1 ORACLE_HOME (/u01/app/oracle/product/12.1.0.1/dbhome_1).
Edit tnsnames.ora used for DBFS to change the directory referenced for the parameters PROGRAM and ORACLE_HOME to the new 12.1.0.1 database home.
fsdb.local =
(DESCRIPTION =
(ADDRESS =
(PROTOCOL=BEQ)
(PROGRAM=/u01/app/oracle/product/12.1.0.1/dbhome_1/bin/oracle)
(ARGV0=oraclefsdb1)
(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
(ENVS='ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1,ORACLE_SID=fsdb1')
)
(CONNECT_DATA=(SID=fsdb1))
)
If the location of Oracle Net files changed as a result of the upgrade, then change the setting of TNS_ADMIN in shell scripts and login files.
If you are using the Oracle Wallet to store the DBFS password (wallet-based authentication), recreate the symbolic link to /sbin/mount.dbfs and the required library links by running the following commands:
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/12.1.0.1/dbhome_1/bin/dbfs_client /sbin/mount.dbfs
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/12.1.0.1/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
(root)# dcli -l root -g ~/dbs_group ln -sf \
/u01/app/oracle/product/12.1.0.1/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
(root)# dcli -l root -g ~/dbs_group ldconfig
Run Exachk or HealthCheck
For V2 or later: run Exachk again to validate software, hardware, firmware, and configuration best practices after the upgrade.
Since Exachk is not certified on V1 hardware, HealthCheck needs to be used on V1 to collect data regarding key software, hardware and firmware releases. Review <Document 1070954.1> for details.
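For example, a typical Exachk run could look like the following. This is a minimal sketch; /opt/oracle.SupportTools/exachk is only an assumed staging directory, and the currently supported command-line options are documented with the exachk download in <Document 1070954.1>:
(oracle)$ cd /opt/oracle.SupportTools/exachk   # assumed staging location for the exachk kit
(oracle)$ ./exachk -a                          # -a runs the full set of checks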
Optional: Deinstall the 11.2.0.2, 11.2.0.3 or 11.2.0.4 Database and Grid Homes
After the upgrade is complete and the database and application have been validated and in use for some time, the 11.2.0.2, 11.2.0.3 or 11.2.0.4 database and grid homes can be removed using the deinstall tool. Run these commands on the first database server; the deinstall tool will perform the deinstallation on all database servers. Refer to the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux for additional details on the deinstall tool.
Before running the deinstall tool to remove the old database and grid homes, run deinstall with the -checkonly option to verify the actions it will perform. Ensure the following:
- There are no databases configured to use the home.
- The home is not a configured Grid Infrastructure home.
- ASM is not detected in the Oracle Home.
To deinstall Database and Grid infrastructure, the example steps are as follows:
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_LINUX_7of7.zip -d /u01/app/oracle/patchdepot
(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/oracle/product/11.2.0/dbhome_1/
(oracle)$ ./deinstall -home /u01/app/oracle/product/11.2.0/dbhome_1/
(root)# dcli -l root -g ~/dbs_group chmod -R 755 /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown -R oracle /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown oracle /u01/app/11.2.0
(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/11.2.0/grid/
(oracle)$ ./deinstall -home /u01/app/11.2.0/grid/
When not immediately deinstalling the previous Grid Infrastructure, rename the old Grid home directory on all nodes so that operators cannot mistakenly execute crsctl commands from the wrong Grid Infrastructure home.
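For example, the rename could be done on all nodes with dcli. This is a minimal sketch, assuming the old Grid home is /u01/app/11.2.0/grid and that nothing (including TFA, see below) is still running from it; the .old suffix is arbitrary:
(root)# dcli -l root -g ~/dbs_group mv /u01/app/11.2.0/grid /u01/app/11.2.0/grid.old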
Grid Infrastructure upgrades from 11.2.0.4 should first stop and relocate TFA to a location outside of the old Grid Infrastructure home. See <Document 1513912.1> for more details.
Re-configure RMAN Media Management Library
Database installations that use an RMAN Media Management Library (MML) may require re-configuration of the Oracle Database home after the upgrade. Most often, recreating a symbolic link to the vendor-provided MML is sufficient.
For specific details see the MML vendor documentation.
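For illustration only, a typical MML re-configuration recreates the libobk.so link in the new database home. In the sketch below /opt/mml_vendor/lib/libobk.so is a hypothetical vendor library path; substitute the path and procedure from your vendor documentation:
(oracle)$ dcli -l oracle -g ~/dbs_group ln -sf /opt/mml_vendor/lib/libobk.so \
          /u01/app/oracle/product/12.1.0.1/dbhome_1/lib/libobk.so   # vendor library path is a placeholder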
Restore settings for concurrent statistics gathering
If the preference for concurrent statistics gathering was changed to FALSE earlier in the process (before DBUA was started), restore the original setting now if required. Note that the 12.1 default is 'OFF'.
Troubleshooting
Installing the new software release out of place (in a new home) helps protect against failed installations: installation problems should not impact availability, and failed installations can easily be rolled back and restarted. The rootupgrade.sh script that runs after installing the new Grid Infrastructure is the critical part of the upgrade. When it fails, normal problem solving applies and the notes mentioned below may be helpful:
- <Document 1050908.1> - Troubleshoot Grid Infrastructure Startup Issues
Revision History
Date | Change
May 25 2017 |
Jun 23 2014 |
Apr 1 2014 |
Mar 17 2014 |
Jan 23 2014 |
Jan 17 2014 |
Dec 18 2013 |
Oct 29 2013 | Grid Infrastructure Patch Set Update 12.1.0.1.1 (17272829), which includes DB PSU 12.1.0.1.1, has become available. This document now has instructions to apply this patch to both Grid Infrastructure (GI) and Database (DB) homes. To GI homes: before running rootupgrade.sh. To DB homes: after installation of the new home, before upgrading the database. When later PSUs become available, the same approach will apply.
Oct 23 2013 | Added Solaris x86-64 specific steps including patch 17065496
Oct 14 2013 | Added instruction to restore _enable_NUMA_support=TRUE for X2-8/X3-8 nodes
Oct 9 2013 | Re-configure the Enterprise Manager Cloud Control 12c targets in the EM Console to use the new Oracle Homes instead of using DBCA
Sep 3 2013 | Kernel 2.6.32-400.11.1.el5uek no longer supported for upgrades
Sep 1 2013 | Setting "_asm_resyncCkpt"=0 right after running rootupgrade.sh in the ASM instance due to (unpublished) bug 17273253
Aug 7 2013 | Recompile invalid objects during post upgrade with suggested parallelism - not 23 (this value changes between V2/X2/X3)
Aug 6 2013 | Added note on how to calculate additional hugepages to be added
Aug 5 2013 | Added instruction to OUI steps
Jul 11 2013 | Added workaround for 16777952 - chmod o+r /etc/sysctl.conf
July 8 2013 | Document released externally
July 2 2013 | Valid kernels from now on are: kernel-uek-2.6.32-400.11.1.el5uek, kernel-uek-2.6.32-400.29.1.el5uek or non-uek 2.6.18-308.24.1.0.1.el5
June 27 2013 | Document released internally
June 24 2013 |
References
<NOTE:1360798.1> - How to Complete Grid Infrastructure Configuration Assistant(Plug-in) if OUI is not Available