Solution Type: Predictive Self-Healing Sure Solution
2348803.1: Oracle Database Quarterly Full Stack Download Patch for SuperCluster Jan 2018 Known Issues
Applies to:
Oracle SuperCluster M7 Hardware
Oracle SuperCluster Specific Software
Oracle SuperCluster M6-32 Hardware
SPARC SuperCluster T4-4
Oracle SuperCluster T5-8 Full Rack
Information in this document applies to any platform.

Purpose
This document lists the known issues for Oracle Database Quarterly Full Stack Download Patch for SuperCluster January 2018. These known issues are in addition to the issues listed in the README for the Quarterly Full Stack Download Patch for SuperCluster.

Details
Oracle Database Quarterly Full Stack Download Patch for SuperCluster (Jan 2018) Known Issues
Platform: Oracle Solaris on SPARC (64-bit)
My Oracle Support Released: February 8th, 2018

This document lists the known issues and other notes for the Oracle Database Quarterly Full Stack Download Patch for SuperCluster (JAN 2018) patches. This document includes the following sections:

1 Notes and Known Issues

1.1 SuperCluster Pre-Installation Notes

The following are general SuperCluster QFSDP pre-installation notes:

1.1.1 - Installation of patch 22738454 (APR 2016 Quarterly Full Stack Download Patch for Oracle SuperCluster), or a later QFSDP, is a prerequisite to installing this patch.

1.1.2 - Pay particular attention to the installation order prescribed in sub-section 4 "Required Installation Order for QFSDP Components", which is part of "Section 2. Installing Quarterly Full Stack (JAN 2018 - 11.2, 12.2 and 12.1)".
Please refer to the SCMU README.solaris file for instructions on how to add a downloaded Custom Incorporation version to the Solaris IPS repository.
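The SCMU README.solaris is the authoritative reference for this procedure. Purely as an illustrative sketch (the archive and repository paths below are hypothetical), adding a downloaded package archive to a local IPS repository generally follows this pattern:

# pkgrecv -s /path/to/custom-incorporation.p5p -d /export/IPS/repo '*'    (copy all packages from the downloaded archive into the local repository)
# pkgrepo refresh -s /export/IPS/repo                                     (rebuild the repository catalog and search data)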
1.1.6 - Quorum Disk Manager functionality has been introduced with the Jan 2017 QFSDP. For further information, please refer to the SuperCluster documentation on http://docs.oracle.com/cd/E58626_01/index.html and search for "quorum".
1.2 SuperCluster Known Issues for this release

The following are known issues for this QFSDP release:

1.2.1 - If the JAN 2018 QFSDP is being applied to a SuperCluster that uses the SuperCluster Virtual Assistant to provision and manage IO Domains, and the SuperCluster Virtual Assistant is not currently at v2.4 (delivered with the JUL 2017 & OCT 2017 QFSDPs), then it is critical that the following document be reviewed in detail prior to applying the JAN 2018 QFSDP. That document includes instructions on how to verify the current version of the SuperCluster Virtual Assistant on the system.
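The referenced document should be followed for the authoritative verification procedure. As a rough, hedged sketch only (it assumes the SuperCluster Virtual Assistant version can be inferred from the osc-domcreate package installed on the Master Control Domain), the installed package version can be displayed with:

# pkg info osc-domcreate | grep -i version    (the package branch reflects the installed SVA release)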
1.2.2 - If the Oct 2016 QFSDP was ever installed on the system and the system configuration contains IO domains, please follow the steps outlined in MOS <Note 2223954.1> "Remediation steps for 25386898 for SuperClusters with the Oct 2016 QFSDP applied" before installing this QFSDP.
1.2.3 - Install picl in any Database non-global zones using 'pkg install picl' (if not already installed) prior to running opatch to update the Database. opatch now uses jre-8, and installing picl improves its performance.
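As an illustrative example (the zone name 'dbzone1' is a placeholder), picl can be installed from the global zone via zlogin:

# zoneadm list -cv                    (identify the Database non-global zones)
# zlogin dbzone1 pkg info picl        (check whether picl is already installed in the zone)
# zlogin dbzone1 pkg install picl     (install picl if the previous command reports it is not present)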
1.2.4 - Solaris 11.3 SRU11 and later include LDoms 3.4.0.1, which by default sets verified boot_policy=warn. Due to Bug 21463646, warning messages such as the following may be seen in /var/adm/messages on boot:

Aug 3 09:02:47 etc7m7-exa2-n1 unix: [ID 711624 kern.notice] WARNING: Signature verification of module /usr/kernel/drv/sparcv9/oracleoks failed
Aug 3 09:02:47 etc7m7-exa2-n1 unix: [ID 711624 kern.notice] WARNING: Signature verification of module /usr/kernel/drv/sparcv9/oracleadvm failed
Aug 3 09:02:48 etc7m7-exa2-n1 unix: [ID 711624 kern.notice] WARNING: Signature verification of module /usr/kernel/drv/sparcv9/oracleacfs failed

These warnings can be ignored. Do not set verified boot_policy=fail. If desired, the warning messages can be suppressed as follows, but it is not necessary to do so: On the control domain:
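As an illustrative sketch only (it assumes the verified-boot policy is controlled through the LDoms 'boot-policy' domain property; <domain-name> is a placeholder for the domain reporting the warnings):

# ldm set-domain boot-policy=none <domain-name>    (relax the verified-boot policy for the named domain)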
1.2.5 - Solaris 11.3 SRU11 and later include LDoms 3.4.0.1, which by default sets verified boot_policy=warn. On Solaris 10 guest domains, "Unsupported bootblk image" messages may be seen (Bug 24825049). These can be ignored. On the control domain:
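If suppression is desired, presumably the same illustrative boot-policy adjustment sketched under 1.2.4 would be applied to the Solaris 10 guest domain (again an assumption; <solaris10-guest-domain> is a placeholder):

# ldm set-domain boot-policy=none <solaris10-guest-domain>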
1.2.6 - The following messages, which may be seen on the console, can be ignored (bug 24923073):

<date> <system name> ip: net<number>: DL_BIND_REQ failed: DL_BADADDR
<date> <system name> ip: net<number>: DL_UNBIND_REQ failed: DL_OUTSTATE

1.2.7 - Oracle SuperCluster IO Domains may fail to boot after upgrading to Solaris 11.3 SRU 11 or later if the system has already been configured with more than 127 MAC addresses per PF. Please refer to MOS <Note 2235476.1> for details and the solution.
1.2.8 - Some customers temporarily modify passwords during maintenance windows, for example to enable Platinum Patching, and set them back when patching is complete. If this is the case, it is recommended to temporarily disable monitoring of the Exadata storage cells in the Oracle Engineered System Hardware Monitoring (OESHM) utility, or to update the passwords in OESHM. This is necessary because otherwise OESHM's attempts to log in to the storage cells will fail and, by default, the cells will lock out logins for 10 minutes after repeated failed login attempts. To temporarily disable storage cell monitoring in OESHM, log in to https://<IP of master root domain>:8001, go to the Storage Server tab, select each storage cell in turn, click the "Stop Monitoring" button, and confirm. The cell Hardware Health and Configuration status will change to "In Maintenance". After reverting to the original passwords at the end of the maintenance window, re-enable monitoring of the cells by repeating the process and clicking the "Start Monitoring" button.
1.2.9 - When running OEDA, bug 24945782 "STEP 11 : INITIALIZE CLUSTER SOFTWARE HANGING INTERMITTENTLY" may occasionally be encountered. The workaround is simply to run the step again.
1.2.10 - Mitigation issues can occur with ZFSSA after updating ssctuner and osc-domcreate to the Jul 2017 QFSDP. For details and the workaround, please see MOS <Note 2297850.1>.
1.2.11 - In certain cases, issues have been observed updating to IB switch 2.2.x-y firmware. Please refer to MOS <Note 2280595.1> "Infiniband switch running 2.2.x will not boot...", <Note 2301054.1> "How to prevent an Infiniband Switch being rendered un-bootable..." and <Note 2202721.1> "Infiniband Gateway Switch Stays In Pre-boot Environment During Upgrade/Reboot" for details and how to determine whether an IB switch might be vulnerable to potential IB switch upgrade issues.
1.2.12 - If running Solaris Cluster 3.3_u2, then you will need to download and install the "Oracle ZFS Storage Appliance Network File System Plug-in for Oracle Solaris Cluster, v1.0.5" with the JAN 2018 QFSDP. Version 1.0.5 of the ZFS SA NFS Plugin will also require Java 7 Update 151 (or a later Java 7 Update) to be installed. For more information on this issue and details of how to install the required components, please follow the instructions in MOS <Note 2326715.1>.
1.2.13 - Bug 24429841 (Missing pkg:/system/picl as a dependency from runtime/java/jre-* pkgs) has been seen during OEDA deployment with the JAN 2018 QFSDP. If you encounter this issue, the workaround is to manually install picl packages in DB Zones & DB IO Domains.
1.2.14 - Per Bug 27299669 "18.1.2.0.0 & 12.2.1.4.4-[PMGR_FUNCT_CHECK_PING][161] COMMAND NOT FOUND: PING6" the following message may be seen in the patchmgr.log: "[patchmgr_functions_check_ping][161] Command not found: ping6"
These messages are safe to ignore and this issue will be fixed in the 18.2.0.0.0 version of the cell software.
1.2.15 - As a result of Bug 27376820 "BACKPORT FIX FOR 27000656 FROM OPATCH_MAIN TO OPATCH_12.2.0.1.12 CODELINE", patching 12.2.0.1 DB has been seen to take in excess of 5 hours per RAC node.
1.2.16 - Per Bug 27282890 "If ZFSSA cluster heads are out of sync, false dedup messages may occur", ssctuner may issue false deduplication warnings if the ZFS SA heads are out of sync. When applying the JAN 2018 QFSDP, there will be some period of time during which the ZFS SA controller head nodes are out of sync (when one head has been upgraded and the other has not yet been). A message such as the following may then be logged:

Dec 17 14:43:10 sc_nodename1 ssctuner: [ID 702911 local0.error] critical: deduplication is set on ZFSSA head node(s). Please disable.

If this message is seen, check the ssctuner SMF log on the reporting node (sc_nodename1 in this example) to determine whether it is due to the heads being out of sync. If you see the following after the dedup error message, there is no dedup issue; the storage heads are simply out of sync (if the message occurs again in 24 hours, someone should address that issue):

[ Dec 17 14:43:10 critical: deduplication is set on ZFSSA head node(s). Please disable. ]
[ Dec 17 14:43:10 Dedup is enabled This controller is running a different software version from its cluster peer. Configuration changes made to either controller will not be propagated to its peer while in this state, and may be undone when the two software versions are synchronized. Please see the appliance documentation for more information. ]
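As a small illustrative sketch (assuming 'ssctuner' uniquely matches the ssctuner service FMRI on the reporting node), the SMF log can be located and inspected with:

# svcs -L ssctuner                (print the path of the ssctuner SMF service log)
# tail -50 `svcs -L ssctuner`     (show the most recent entries, including any dedup or out-of-sync messages)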
1.2.17 - Bug 27134424 "ip interconnect inaccessible after SP reset" has been seen intermittently while testing the JAN 2018 QFSDP. The symptom is:

# ldm ls-spconfig
The requested operation could not be performed because the communication channel between the LDoms Manager and the system controller is down. The ILOM interconnect may be disabled or down (see ilomconfig(1M)).
#

To resolve this issue, please first try resetting the ILOM interconnect from the control domain(s) exhibiting the issue:

# ilomconfig disable interconnect
# ilomconfig enable interconnect

Failing that, issuing a 'reset /SP' on the affected node's ILOM has resolved the problem.
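As an illustrative sketch (the ILOM address is a placeholder, and the SP reset is issued from the affected node's ILOM CLI, reached for example via ssh to its ILOM address):

-> reset /SP     (reset the service processor of the affected node)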
1.2.18 - Per Bug 27376916 "Seeing drive offline messages and offlined disk in format for unpresent luns", messages similar to the following have been seen on the console of the domain running the SuperCluster Virtual Assistant when "Thawing" IO Domains:

Feb 4 15:04:20 node0101 scsi: WARNING: /scsi_vhci/ssd@g600144f09a945d5600005a76447a0007 (ssd12):
Feb 4 15:04:20 node0101 drive offline

In addition, running the 'format' command in that domain will produce output similar to the following:

root@node0101:~# format
2. c0t600144F0DB7D639400005A77A5B70003d0 <drive not available>

These messages can be ignored and will no longer appear after the next reboot of the domain running the SuperCluster Virtual Assistant.
1.2.19 - Bug 27275834 "iscsi: WARNING: connection/login failed/service or target is not operational" has been seen on domains that are running Solaris Cluster and using a quorum disk from the ZFS SA, when updating the ZFS SA as part of the JAN 2018 QFSDP process:

Dec 15 11:10:32 node0101 iscsi: WARNING: iscsi connection(117) login failed - iSCSI service or target is not currently operational. (0x03/0x01)
Dec 15 11:10:32 node0101 iscsi: WARNING: iscsi connection(117) login failed - iSCSI service or target is not currently operational. (0x03/0x01)

While these messages are safe to ignore, an intermittent Solaris Cluster Global Zone hang has been experienced at the same time. If a domain hangs, the workaround is to reboot the affected domain.
1.2.20 - Bug 27470202 "osc-setcoremem: Resource group commands are not supported warning during runtime". On SuperCluster T4, the following warning may be seen when running osc-setcoremem:

# /opt/oracle.supercluster/bin/osc-setcoremem -type core -res 8/256:8/256:8/256:8/256
Resource group commands are not supported by the software stack running on this system.
Consult the product documentation for more details.

The reason for this is that the underlying "ldm ls-rsrc-group" command, which is not supported on the T4 platform, is being erroneously called:

# ldm ls-rsrc-group
Resource group commands are not supported by the software stack running on this system.
Consult the product documentation for more details.

The workaround is to set the SSC_CORE_NODR variable to '1' when running osc-setcoremem on SuperCluster T4 after applying the JAN 2018 QFSDP:

# SSC_CORE_NODR=1 /opt/oracle.supercluster/bin/osc-setcoremem
1.2.21 - Bug 27470477 "osc-setcoremem appears hung after all inputs and before prompting to confirm". The workaround is to set the SSC_CORE_NODR variable to '1' during runtime:

# SSC_CORE_NODR=1 /opt/oracle.supercluster/bin/osc-setcoremem
1.2.22 - While the ZFS SA controller heads are being rebooted as part of the QFSDP application, it is possible that the SuperCluster Virtual Assistant's (SVA) Health Monitor may be running at the same time.
1.2.23 - ZFS SA version 2013.1.8.7.0 and later include ILOM updates for ZS3-* and ZS5-* hardware. A maintenance problem similar to the following may be reported on the appliance:

controller-h1-storadm:maintenance problem-002> ls
Components:
component-000 100% controller-h1-storadm: hc://:chassis-mfg=Oracle-Corporation:chassis-name=SUN-FIRE-X4170-M3:chassis-part=7078183:chassis-serial=1234567AB:fru-serial=1234567AB:fru-revision=SUN-FIRE-X4170-M3/chassis=0 (degraded)
controller-h1-storadm:maintenance problem-002>

The following excerpt from the release notes for the 8.7.x ZFS Storage Appliance (https://updates.oracle.com/Orion/Services/download?type=readme&aru=21672399#uploading_update_packages_to_your_appliance) explains this in more detail in the "Automatic Firmware Update for Oracle ILOM and BIOS" section:
1.2.24 - When booting the "Master Control Domain" (where the osc-domcreate package is installed) into the new "SCMU_2018.01" boot environment after running install_smu, there may be significant delays in some of the SMF services coming online:

# svcs -xv /site/iodct-upgrade-monitor
svc:/site/iodct-upgrade-monitor:default (IODCT Upgrade Monitor)
 State: offline* transitioning to online since Fri Feb 02 05:22:08 2018
   Reason: Start method is running.
   See: http://support.oracle.com/msg/SMF-8000-C4
   See: /var/svc/log/site-iodct-upgrade-monitor:default.log
Impact: 17 dependent services are not running:
        svc:/milestone/self-assembly-complete:default
        svc:/system/system-log:default
        svc:/milestone/multi-user:default
        svc:/system/ExaWatcher:default
        svc:/system/boot-config:default
        svc:/milestone/multi-user-server:default
        svc:/system/zones:default
        svc:/system/zones-delay:default
        svc:/system/zones-install:default
        svc:/application/iodctq:default
        svc:/system/compmon:default
        svc:/system/sp/management:default
        svc:/network/smtp:sendmail
        svc:/system/auditd:default
        svc:/system/console-login:default
        svc:/network/sendmail-client:default
        svc:/ldoms/vntsd:default
#

While the upgrade service is running, you will see that the iodct-upgrade-monitor SMF service is being brought online and that several dependent services are offline as a result. If you would like to monitor progress, you can 'tail' the logfile for the iodct-upgrade-monitor service:

# tail -f /var/svc/log/site-iodct-upgrade-monitor\:default.log
104 static files copied to '/var/opt/oracle.supercluster/iodine/static', 46 unmodified.
============================
1.2.25 - Systems that were initially installed with SuperCluster v2.1 software may have issues when attempting to start the Management Agents in the SuperCluster Virtual Assistant (SVA) after updating to the JAN 2018 QFSDP. The "Management Agents" provide dynamic updates of an IO Domain's state (i.e. Stopped, Starting, Ready for Use, At OpenBoot PROM, etc.), as well as offering the ability to stop and start an IO Domain from the SVA BUI.
After clicking "Start Agent" from the Management Agents section of the SVA BUI, t the hit the following error may be returned in the dialog box at the top of the screen:
"Could Not Start Agent Server on Master Control Domain: ,ksh:/opt/oracle.supercluster/osc-domcreate/iodine/SVAComms/svaserverctrl: cannot execute [Permission denied]"
The underlying reason for this issue is incorrect file permissions and ownership of the /opt/oracle.supercluster/osc-domcreate/iodine and /opt/oracle.supercluster/osc-domcreate/iodine/iodine directories. Affected systems will see the following output when 'pkg verify osc-domcreate' is run:

# pkg verify osc-domcreate
PACKAGE                                                           STATUS
pkg://exa-family/system/platform/supercluster/osc-domcreate       ERROR
        dir: opt/oracle.supercluster/osc-domcreate/iodine
                ERROR: Group: 'bin (2)' should be 'oscutils (61000)'
                ERROR: Mode: 0700 should be 0750
        dir: opt/oracle.supercluster/osc-domcreate/iodine/iodine
                ERROR: Group: 'bin (2)' should be 'oscutils (61000)'
                ERROR: Mode: 0700 should be 0750
#

This issue can be resolved by running 'pkg fix osc-domcreate':

# pkg fix osc-domcreate
Repairing: pkg://exa-family/system/platform/supercluster/osc-domcreate@0.5.11,5.11-2.5.0.1378:20180117T191204Z
3 Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website.

Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support.

Oracle Database Quarterly Full Stack Download Patch for SuperCluster (Jan 2018) Known Issues
Copyright © 2006, 2018, Oracle and/or its affiliates. All rights reserved.
Attachments
This solution has no attachment.