| # | Applies to | Issue | Fix or Workaround | Date Updated |
|---|---|---|---|---|
| DB_29 | GIPSU 12.2.0.1.171003/171017, part of QFSDP Oct 2017 | Node eviction stemming from pfiles being run against GI processes, causing their threads to be delayed | SuperCluster : Node eviction after applying GIPSU 12.2.0.1.171003/171017 part of QFSDP Oct 2017 <Document 2176610.1> | 12/19/2017 |
| DB_28 | SuperCluster Grid Infrastructure provided with the JULY 2017 QFSDP (12.1.0.2.170718) | RAC nodes failing to start due to voting disk corruption following patching | Apply OCW <Patch 26512962> to the Grid Infrastructure home. | 09/13/2017 |
| DB_27 | SuperCluster systems running 12.1.0.2.161018 or 12.1.0.2.160719 OCWPSU | Generic bug impacting SuperCluster systems: frequent RAC node evictions that appear to be network-heartbeat related. If detected in time, a pstack of the ocssd.bin pid will show several threads in the function clsdadr_bucket_syncb (see the detection sketch after this table).<br>This patch should be considered mandatory.<br>If scheduled to patch to the OCT 2016 QFSDP, it is strongly advised to go to JAN 2017 instead to save time in your patching window, as item SOL_11_20 is also mandatory for both the JUL 2016 and OCT 2016 QFSDP patch levels. | Permanent fix in the JAN 2017 QFSDP.<br><Note 2227319.1> SuperCluster Critical Issue: DB_27: Mandatory patch: Bug 25233268 Leading to Frequent Node Evictions with JUL and OCT 2016 QFSDP | |
| DB_26 | All database versions, regardless of storage used or whether deployed in DB or application domains | Use of the database parameter use_large_pages=false is completely unsupported on SuperCluster systems. Using it can cause unnecessary performance problems at the DB or OS kernel level, especially for larger SGA sizes. | Set use_large_pages=true or unset it completely (true is the default), and restart your databases; the Solaris operating system is optimized by default to take advantage of large pages (see the sketch after this table). | 10/24/2016 |
| DB_25 | 11.2.0.4 through 12.1.0.2 | RAC node evicting and not rejoining the cluster | SuperCluster : RAC : CRS not able to rejoin the cluster following node eviction or reboot due to CSSD. <Document 2166436.1> | 7/28/2016 |
| DB_24 | 12.1.0.2 | ASM XDMG process exiting in a way that can hang zones and/or logical domains | SuperCluster: 12.1.0.2 ASM XDMG process causes hang in PR_P_LOCK on ORA-15311: process terminated due to fenced I/O <Document 2166445.1> | 7/28/2016 |
| DB_23 | CRS, all versions | Storage network improperly listed as a public interface in OIFCFG | SuperCluster: storage network IB interface listed as public in oifcfg getif could result in improper nodeapp VIP failover. <Document 2150668.1> | 6/16/2016 |
| DB_22 | DB and ASM 11.2.0.3 BP 25 and below | Bug 20116094: ASM/kfod does not discover the griddisks on SPARC systems running 11.2.0.3 BP 25 and below against 12.1.2.1.0 and above storage cells. This issue is typically found during patching if the cells are patched prior to the DB and GI homes. | Fixed in 11.2.0.3 BP 26 onwards. Apply 11.2.0.3 BP 26 or the one-off patch for Bug 20116094. | 09/01/2015 |
| DB_21 | ASM 12.1.0.2 | Bug 21281532 - ASM rebalance interrupted with errors ORA-600 [kfdAtbUpdate_11_02] and ORA-600 [kfdAtUnlock00]. | See <Document 2031709.1> for additional details. | 08/15/2015 |
| DB_20 | ASM 12.1.0.2 | Bug 20904530 - During disk resync, ORA-600 [kfdsBlk_verCb] is reported due to corruption in the ASM staleness registry. | See <Document 2028222.1> for additional details. | 08/15/2015 |
| DB_19 | 11.2.0.4.x running against 12.1.2.x Exadata storage cells | After restoring an RMAN backup in this combination, data block corruption is detected by a subsequent backup or by DBV; the trace file may show "Bad header found during validation". | Apply <Patch 20952966> to the 11.2.0.4 DB home(s), or redo the initial restore with the workaround of setting _cell_fast_file_restore=FALSE in the database spfile. The patch is the preferred approach and should be considered mandatory for all 11.2.0.4 databases accessing Exadata storage cell version 12.1.2.1.x. | 5/9/2014 |
| DB_18 | 11.2.0.4 and 12.1.0.2 | 11.2.0.4: Bug 10194190 - Solaris: Process spin and/or ASM and DB crash if RAC instance up for > 248 days <Document 10194190.8><br>12.1.0.2: Bug 22901797 - LMHB (OSPID: 29160): TERMINATING THE INSTANCE DUE TO ERROR 29770 | Fixed in 11.2.0.4.9 and above. Other documentation makes it appear this is fixed in 12.1.0.2.5, but it is not; it is fixed in the April 2016 12.1.0.2.DBBP:160419. | 6/7/2016 |
| DB_17 | 12.1.0.2.4 (JAN 2015 level) | Bug 20591915 introduced a regression in DBBP 12.1.0.2.4 (Jan 2015) for Solaris SPARC Exadata machines. Because of this regression, the XDMG process may crash on SuperCluster, producing ASM core files under the GI home.<br>It can also be encountered as Bug 20591915 - Grid disk asmmodestatus query hangs when a grid disk is inactive: the CellCLI command "list griddisk attributes asmmodestatus" hangs, which subsequently causes rolling cell patching to hang when upgrading from Exadata 12.1.1.1.1 or earlier to any later Exadata version while Grid Infrastructure is at version 12.1.0.2.4 (DBBP4) or 12.1.0.2.5 (DBBP5). | Apply <Patch 20591915> to 12.1.0.2.4. Please note this needs to be applied in the ASM (GI) home. This is also fixed in the 12.1.0.2.6 PSU and above. | |
| DB_16 | 11.2.0.3 through 12.1.0.2 | Critical performance enhancements for the database on SPARC:<br>Bug 19308965 - RAW HAZARDS SEEN WITH RDBMS CODE ON SOLARIS T5<br>Bug 13846337 - QESASIMPLEMULTICOLKEYCOMPARE NOT OPTIMIZED FOR SOLARIS SPARC64<br>Bug 12660972 - CHECKSUM CODE NEEDS REVISITING IN LIGHT OF NEW PROCESSORS | 11.2.0.3.21 or later plus <Patch 20097385> and <Patch 12660972><br>11.2.0.4.15 and BPs below plus <Patch 19839616> and <Patch 12660972><br>11.2.0.4.16 and above plus <Patch 12660972><br>12.1.0.2.6 and below plus <Patch | 6/23/2015 |
| DB_15 | 11.2.0.3 and 11.2.0.4 ASM instances | <Bug 17997507> - XDMG process exits without closing the SKGXP context when ORA-15311 is seen. This can show itself as EXAVM database zones getting stuck in the shutdown state, in conjunction with a command like ps -ef hanging in the global zone. | Fixed in 11.2.0.3.24 and 11.2.0.4.7. If at an earlier BP, search MOS for patch 17997507 for SPARC; if one does not exist for your BP level, contact support. | |
| DB_14 | 11.2.0.3 Grid Infrastructure | <Bug 13798847> - adding multiple ports to the SCAN listener fails | Apply the latest SuperCluster 11.2.0.3.9 GI PSU merge, which will be documented in the Supported Versions note for your hardware type. The latest is MLR <Bug 19459715>. | |
| DB_13 | 11.2.0.3 to 12.1.0.1 Grid Infrastructure | <Bug 17722664> - clsa crash during client connection cleanup for a large number of changing connections. Fixed in 12.1.0.2. | 11.2.0.3: apply the latest SuperCluster 11.2.0.3.9 GI PSU merge, which will be documented in the Supported Versions note for your hardware type; the latest is MLR <Bug 19459715>.<br>11.2.0.4: contact support for a merge with <Bug 16429265> and your current level of GI PSU.<br>12.1.0.1: contact support for a merge with your current level of GI PSU.<br>Fixed in 12.1.0.2. | |
| DB_12 | Systems with one of the following Grid Infrastructure home versions: 11.2.0.4 BP1-BP5; 11.2.0.3 BP22 | Same as item DB_24 in the Exadata critical issues note | Fixed in BP 23 and above; however, you should get the fix for DB_11 through DB_14 via the latest SuperCluster 11.2.0.3.9 GI PSU merge, which will be documented in the Supported Versions note for your hardware type. The latest is MLR <Bug 19459715>. | |
| DB_11 | 11.2.0.3 and 11.2.0.4 Grid Infrastructure | <Bug 17443419> - CHM (ora.crf) cannot be online in a Solaris local zone (Solaris SPARC64). Fixed in 12.1. | Apply the latest SuperCluster 11.2.0.3.9 GI PSU merge, which will be documented in the Supported Versions note for your hardware type. The latest is MLR <Bug 19459715>. | |
| DB_10 | 11.2.0.3 to 11.2.0.4 GI / ASM upgrade | <Bug 17837626> - HAIP failures in orarootagent_root.log:<br>CRS-2674: Start of 'ora.cluster_interconnect.haip' on 'hostname' failed<br>CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'hostname'<br>CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'hostname' succeeded<br>CRS-4000: Command Start failed, or completed with errors. | Workaround: in the 11.2.0.4 home, edit s_crsconfig_lib.pm under <GRID_HOME>/crs/install. Find the function s_is_sun_ipmp and, at the end of that function (after the comment "# made it all the way out without finding any IPMP private"), change "return FALSE;" to "return TRUE;". | |
| DB_9 | RMAN incremental backups created with one of the following database patch sets: 12.1.0.1 GIPSU1 or earlier; 11.2.0.3 BP21 or earlier; any 11.2.0.2; any 11.2.0.1 | Bug 16057129 - Exadata cell-optimized incremental backup can miss some blocks if a database file grows larger while the file is being backed up. A missed block can lead to stuck recovery and ORA-600 [3020] errors if the incremental backup is used for media recovery. See <Document 16057129.8> for details.<br>Existing RMAN incremental backups taken without the bug fix in place should be considered invalid and not usable for database recovery, incrementally updating level 0 backups, or standby database creation.<br>RMAN full backups, level 0 backups that are not part of an incrementally updated backup strategy, and database recovery using archived redo logs are not affected by this issue. | Step 1: Set the following parameter in all databases that use Exadata storage: _disable_cell_optimized_backups=TRUE<br>SQL> alter system set "_disable_cell_optimized_backups"=TRUE scope=both;<br>This parameter may be removed after the fix for bug 16057129 is installed by upgrade or by applying an interim patch; see fix availability below.<br>Step 2: Create new RMAN backups. Minimally, a new RMAN cumulative incremental backup must be taken. In addition, level 0 backups that are part of an incrementally updated backup strategy must be recreated.<br>Fix availability: fixed in 12.1.0.1 GIPSU2 (planned January 2014); fixed in 11.2.0.4.0; fixed in 11.2.0.3 BP22 (planned January 2014); fixed in <Patch 16057129> for 11.2.0.3 BP21; fixed in <Patch 17599908> for 11.2.0.2 BP22. | |
| DB_8 | All DB versions | For RAC databases in LDoms or zones with more than one IB bond interface, onecommand does not set all interfaces in oifcfg or in the cluster_interconnects parameter of the ASM and DB spfiles. | You can check this in the ASM and DB instances with "show parameter cluster_interconnects", and at the RAC level with "oifcfg getif". If you have multiple interfaces available, add them into oifcfg and into cluster_interconnects in each ASM and DB instance; make sure you assign the right IP addresses in cluster_interconnects to the right SIDs based on the host location of each instance (see the sketch after this table). | |
| DB_6 | 11.2.0.3.x Grid Infrastructure | <Bug 13604285> - ora.net1.network keeps failing over. Key indicator: "Networkagent: check link false" in the orarootagent log, combined with the network resource constantly failing over around the cluster nodes. | All current Exadata bundle patches through BP21 require this fix. If you have existing one-offs on your Grid Infrastructure, you will have to open an SR with Support for a merge. | |
| DB_4 | | <Bug 12865682> - byte swapping causing extra overhead, which can lead to performance degradation in hash join plans on big-endian platforms. | Download and apply <Patch 12865682> for Solaris SPARC to all of your 11.2.0.3.x database homes, even if they are not using Exadata storage. This is now considered a mandatory patch for SPARC SuperCluster. It does not need to be backported to a specific Exadata BP level, as it does not conflict with the bundle patch. This is fixed as part of 11.2.0.4, 12.1.x, and the 11.2.0.3.21 bundle patch, so you do not need this patch at 11.2.0.3.21 or beyond. | 5/9/2015 |
| DB_2 | 11.2.0.3.x Grid Infrastructure and DB | The default thread priority of RT (real time) for LMS can cause blocking of kernel threads to the CPU. Also, LGWR being in TS (timeshare) can lead to excessive log writer write times, causing general database performance issues. | The fix for this is often called the Critical Threads fix or the FX-60 fix. There are multiple ways to correct it (see the verification sketch after this table):<br>- Apply a one-off patch to all Database and Grid Infrastructure homes; the one-off can be downloaded as <Patch 12951619>.<br>- The preferred method for systems without databases in zones is to patch to the OCT 2013 QFSDP and ensure you have installed and are running the ssctuner service from that exafamily version (ssctuner@0.5.11,5.11-1.5.0.5).<br>- For databases running in EXAVM zones, you need to be at the ssctuner version provided in the JAN 2014 QFSDP (ssctuner@0.5.11,5.11-1.5.9.237) or above, and ensure your zones are running with the TS scheduling class; see <Document 1618396.1> for how to verify and rectify the scheduling class and how to update ssctuner out of band with the QFSDP.<br>Please also review and comply with "SuperCluster - ssctuner is not adjusting the scheduling class of all of lms, lgwr and vktm processes to FX-60" <Document 1628298.1>; this additional step has to be done under the supervision of an Oracle badged employee. | |
| DB_1 | All database versions | CR 7172851 - System hang, threads blocked in DISM code | Dynamic Intimate Shared Memory (DISM) is not supported for use in SPARC SuperCluster Solaris environments for instances other than the ASM instance. <Document 1468297.1> | |
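
Detection sketch for DB_27. This is a minimal illustration, assuming the standard Solaris pgrep and pstack utilities are available and the script is run as root on a RAC node; the function name is the one cited in the DB_27 row.

```sh
#!/bin/sh
# DB_27: count ocssd.bin threads currently in clsdadr_bucket_syncb.
# A non-zero count while evictions are occurring matches the DB_27 signature.
pid=$(pgrep -x ocssd.bin)
if [ -n "$pid" ]; then
    pstack "$pid" | grep -c clsdadr_bucket_syncb
else
    echo "ocssd.bin is not running on this node" >&2
fi
```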
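Sketch for DB_26, assuming a SQL*Plus session as SYSDBA. use_large_pages is not dynamically modifiable, so the change takes effect only after the restart the row calls for; how you bounce instances (srvctl, crsctl) is site-specific and shown only as a comment.

```sh
#!/bin/sh
# DB_26: verify use_large_pages and restore the default of TRUE.
sqlplus -s "/ as sysdba" <<'EOF'
show parameter use_large_pages
-- Either set it back to TRUE explicitly ...
alter system set use_large_pages=TRUE scope=spfile sid='*';
-- ... or remove it so the default (TRUE) applies:
-- alter system reset use_large_pages scope=spfile sid='*';
EOF
# Restart the database (e.g. with srvctl) for the change to take effect.
```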
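Sketch for DB_8, covering the checks described in that row. The interface name bondib1, the 192.168.x.x subnets, and the SID orcl1 are placeholders only; substitute the values from your own system.

```sh
#!/bin/sh
# DB_8: confirm every IB bond interface is registered with the cluster
# and reflected in each instance's cluster_interconnects parameter.

# 1. List the interfaces the clusterware currently knows about.
oifcfg getif

# 2. Register an additional private interface (placeholder name/subnet).
oifcfg setif -global bondib1/192.168.11.0:cluster_interconnect

# 3. Check, and if needed set, the parameter in each ASM and DB instance.
sqlplus -s "/ as sysdba" <<'EOF'
show parameter cluster_interconnects
-- Placeholder: the local IPs of this node's IB bonds, colon separated,
-- set per SID so each instance gets its own node's addresses.
-- alter system set cluster_interconnects='192.168.10.1:192.168.11.1'
--   scope=spfile sid='orcl1';
EOF
```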
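Verification sketch for DB_2, using standard Solaris ps output fields. After the Critical Threads fix, lms, lgwr, and vktm should report class FX at priority 60 rather than RT or TS; see the Documents cited in the DB_2 row for the authoritative verification steps.

```sh
#!/bin/sh
# DB_2: show the scheduling class and priority of the critical
# database background processes on this node.
ps -e -o pid,class,pri,args | egrep 'ora_(lms|lgwr|vktm)'
```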