
Asset ID: 1-79-2348803.1
Update Date: 2018-02-20

Solution Type: Predictive Self-Healing Sure

Solution 2348803.1: Oracle Database Quarterly Full Stack Download Patch for SuperCluster Jan 2018 Known Issues


Related Items
  • Oracle SuperCluster M7 Hardware
  • Oracle SuperCluster M6-32 Hardware
  • SPARC SuperCluster T4-4
  • Oracle SuperCluster T5-8 Hardware
  • Oracle SuperCluster T5-8 Half Rack
  • Oracle SuperCluster Specific Software
  • Oracle SuperCluster T5-8 Full Rack
Related Categories
  • PLA-Support>Eng Systems>Exadata/ODA/SSC>SPARC SuperCluster>DB: SuperCluster_EST




In this Document
Purpose
Details
 Oracle Database Quarterly Full Stack Download Patch for SuperCluster (Jan 2018) Known Issues
 1 Notes and Known Issues
 1.1 SuperCluster Pre-Installation Notes
 1.2 SuperCluster Known Issues for this release
 2 Modification History
 3 Documentation Accessibility
References


Applies to:

Oracle SuperCluster M7 Hardware
Oracle SuperCluster Specific Software
Oracle SuperCluster M6-32 Hardware
SPARC SuperCluster T4-4
Oracle SuperCluster T5-8 Full Rack
Information in this document applies to any platform.

Purpose

 This document lists the known issues for Oracle Database Quarterly Full Stack Download Patch for SuperCluster January 2018. These known issues are in addition to the issues listed in the README for the Quarterly Full Stack Download Patch for SuperCluster.

Details

 

Oracle Database Quarterly Full Stack Download Patch for SuperCluster (Jan 2018) Known Issues

 

Platform: Oracle Solaris on SPARC (64-bit)

Released: February 8th, 2018

This document lists the known issues and other notes for Oracle Database Quarterly Full Stack Download Patch for SuperCluster (JAN 2018) Patches. 

This document includes the following sections:

1 Notes and Known Issues

1.1 SuperCluster Pre-Installation Notes

The following are general SuperCluster QFSDP pre-installation notes:

1.1.1 - Installation of patch 22738454 (APR 2016 Quarterly Full Stack Download Patch for Oracle SuperCluster), or a later QFSDP, is a prerequisite to installing this patch.

1.1.2 - Pay particular attention to the installation order prescribed in sub-section 4, "Required Installation Order for QFSDP Components", of "Section 2. Installing Quarterly Full Stack (JAN 2018 - 11.2, 12.2 and 12.1)".


1.1.3 - Oracle recommends running Exachk before and after performing planned maintenance to review and cross-reference the collected data against supported version levels and recommended Oracle Exadata best practices. See My Oracle Support <Note 1070954.1> "Oracle Exadata Database Machine exachk or HealthCheck".
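For example, a minimal sketch of a pre-maintenance run, assuming Exachk has been unpacked under /opt/oracle.SupportTools/exachk (the path is a placeholder, and options vary by Exachk version; <Note 1070954.1> describes the supported invocation):

# cd /opt/oracle.SupportTools/exachk
# ./exachk -a

Re-run the same command after maintenance completes and compare the two reports.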


1.1.4 - Beginning with the Oct 2016 QFSDP, management of Solaris 11 IDRs is simplified by replacing IDRs with a supercluster-solaris Custom Incorporation (CI). This is a superset of the Solaris 'entire' incorporation, and includes the payloads from the recommended SuperCluster IDRs.

Additional critical fixes may be released after the QFSDP is first published. Please periodically check MOS <Note 2086278.1>, "SuperCluster Recommended Custom Incorporations, IDRs, and CVEs Addressed", for the latest recommended Custom Incorporation (if any) to install with this QFSDP on all Solaris 11 instances. A later Custom Incorporation can be downloaded, added to the SuperCluster's Solaris IPS repository, and used by the QFSDP install_smu script in place of the Custom Incorporation included in the QFSDP itself. This saves having to first install the QFSDP and then update to a later recommended Custom Incorporation afterwards.

Please refer to the SCMU README.solaris file for instructions on how to add a downloaded Custom Incorporation version to the Solaris IPS repository.
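As a hedged sketch (the archive filename and repository path below are placeholders; the README.solaris instructions are authoritative), adding a downloaded Custom Incorporation to a local IPS repository typically uses pkgrecv and pkgrepo:

# pkgrecv -s /var/tmp/supercluster-solaris-ci.p5p -d /export/IPS/solaris '*'
# pkgrepo refresh -s /export/IPS/solaris

Here pkgrecv copies all packages from the downloaded archive into the repository, and pkgrepo refresh rebuilds the repository catalog and search data so that the install_smu script can find the new incorporation.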


1.1.5 - Where the instructions in the QFSDP READMEs are in conflict with instructions in the individual patch READMEs of included components, the QFSDP READMEs have precedence in a SuperCluster context.


1.1.6 - Quorum Disk Manager functionality has been introduced with the Jan 2017 QFSDP. For further information, please refer to the SuperCluster documentation on http://docs.oracle.com/cd/E58626_01/index.html and search for "quorum".


1.1.7 - For information on the use of the osc-config-backup utility, including known issues, please see MOS <Note 1934129.1>.


1.1.8 - If you use ZFSSA replication, or serve shares from the ZFSSA to external hosts over the management interface, plan on taking an outage for these services during QFSDP application. Before applying the ZFSSA patch, take note of which storage head is servicing such operations and ensure it is the active head when your patching activities are over.
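One hedged way to record which head is active beforehand is from the ZFSSA CLI (the hostname below is a placeholder, and the exact state values depend on the cluster configuration):

controller-h1-storadm:> configuration cluster show

The state and peer_state properties indicate which head currently owns the shared resources; check them again after patching and fail back if necessary.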


1.1.9 - ZFSSA Deferred Updates should be applied. This is particularly relevant for older SuperCluster T4, T5, and M6 systems. See https://docs.oracle.com/cd/E56021_01/html/E55850/goxdn.html#scrolltoc. In particular, the "Multiple Initiator Groups per LUN" and "Support for Large Block Sizes" updates are required for SuperCluster and must be applied. For other Deferred Updates: if the system is being upgraded from a version later than the Deferred Update, apply it immediately, since it is applicable to both the "from" and "to" versions and hence will not prevent rollback to the earlier version in the unlikely event that this is necessary. Where the Deferred Update is later than the version you are upgrading from, you may wish to wait several weeks to ensure the version you updated to is working well before applying the Deferred Update, in the unlikely case there is a need to revert to the version you upgraded from.


1.1.10 - General SuperCluster documentation is now available from http://docs.oracle.com/en/engineered-systems/ and is no longer included in the QFSDP.


1.1.11 - The JAN 2018 QFSDP delivers v2.5 of the Exa-Family tools, which includes the SuperCluster Virtual Assistant (via the osc-domcreate package). If the JAN 2018 QFSDP is being applied to a SuperCluster that uses the SuperCluster Virtual Assistant to provision and manage IO Domains, and the SuperCluster Virtual Assistant is not currently on v2.4 (delivered with the JUL 2017 and OCT 2017 QFSDPs), then it is critical that the following document be reviewed in detail prior to applying the JAN 2018 QFSDP:
Pre-requisites for updating the SuperCluster Virtual Assistant to version 2.5.0.1378 via the JAN 2018 QFSDP (Doc ID 2356377.1)

This document includes instructions on how to verify the current version of the SuperCluster Virtual Assistant on the system.
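As a quick, hedged check (the formal verification procedure is in Doc ID 2356377.1), the installed version can be queried from the osc-domcreate package:

# pkg info osc-domcreate

The branch portion of the version string (for example 2.5.0.1378, as in osc-domcreate@0.5.11,5.11-2.5.0.1378) reflects the SuperCluster Virtual Assistant release.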


1.1.12 - For details on the SuperCluster Virtual Assistant v2.5 tool features, please refer to the SuperCluster I/O Domain Administration Guide at https://docs.oracle.com/cd/E58626_01/html/E53172/index.html.


1.1.13 - Please review the issues detailed in the SuperCluster Critical Issues note (MOS <Note 1452277.1>) prior to applying this QFSDP.

 

1.2 SuperCluster Known Issues for this release

The following are known issues for this QFSDP release:

1.2.1 - If the JAN 2018 QFSDP is being applied to a SuperCluster that uses the SuperCluster Virtual Assistant to provision and manage IO Domains, and the SuperCluster Virtual Assistant is not currently on v2.4 (delivered with the JUL 2017 and OCT 2017 QFSDPs), then it is critical that the following document be reviewed in detail prior to applying the JAN 2018 QFSDP:
    Pre-requisites for updating the SuperCluster Virtual Assistant to version 2.5.0.1378 via the JAN 2018 QFSDP (Doc ID 2356377.1)

This document includes instructions on how to verify the current version of the SuperCluster Virtual Assistant on the system.

 

1.2.2 - If the Oct 2016 QFSDP was ever installed on the system and the system configuration contains IO domains, please follow the steps outlined in MOS <Note 2223954.1> "Remediation steps for 25386898 for SuperClusters with the Oct 2016 QFSDP applied" before installing this QFSDP.

 

1.2.3 - Install picl in any Database non-global zones using 'pkg install picl' (if not already installed) prior to running opatch to update the Database. opatch now uses jre-8, and installing picl improves its performance.
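A minimal, hedged sketch of doing this from the global zone (the zone name is a placeholder):

# zoneadm list -v
# zlogin dbzone1 pkg list picl
# zlogin dbzone1 pkg install picl

'zoneadm list -v' identifies the running Database zones, and 'pkg list picl' returns a non-zero exit status if the package is not yet installed in that zone.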

 

1.2.4 - Solaris 11.3 SRU11 and later include LDoms 3.4.0.1, which by default sets verified boot-policy=warn. Due to Bug 21463646, warning messages such as the following may be seen in /var/adm/messages on boot:

Aug 3 09:02:47 etc7m7-exa2-n1 unix: [ID 711624 kern.notice] WARNING:
Signature verification of module /usr/kernel/drv/sparcv9/oracleoks failed
Aug 3 09:02:47 etc7m7-exa2-n1 unix: [ID 711624 kern.notice] WARNING:
Signature verification of module /usr/kernel/drv/sparcv9/oracleadvm failed
Aug 3 09:02:48 etc7m7-exa2-n1 unix: [ID 711624 kern.notice] WARNING:
Signature verification of module /usr/kernel/drv/sparcv9/oracleacfs failed

These warnings can be ignored. Do not set boot-policy=fail. If desired, the warning messages can be suppressed as follows, but it is not necessary to do so:

On the control domain:
    ldm set-domain boot-policy=none <domain_name>
    save the SP config
Then reboot the guest domain.
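The "save the SP config" step can be done with 'ldm add-spconfig'. A minimal sketch of the sequence on the control domain (the domain and configuration names are placeholders):

# ldm set-domain boot-policy=none mydom1
# ldm add-spconfig config-jan2018

'ldm add-spconfig' saves the current logical domain configuration to the service processor; then reboot the guest domain as described above.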

 

1.2.5 - Solaris 11.3 SRU11 and later include LDoms 3.4.0.1, which by default sets verified boot-policy=warn. On Solaris 10 guest domains, "Unsupported bootblk image" messages may be seen (Bug 24825049). These can be ignored.

If desired, the messages can be suppressed as follows, but it is not necessary to do so:

On the control domain:
    ldm set-domain boot-policy=none <S10_domain_name>
    save the SP config
Then reboot the guest domain.

 

1.2.6 - The following messages, which may be seen on the console, can be ignored (Bug 24923073):

<date> <system name> ip: net<number>: DL_BIND_REQ failed: DL_BADADDR
<date> <system name> ip: net<number>: DL_UNBIND_REQ failed: DL_OUTSTATE

1.2.7 - Oracle SuperCluster IO Domains may fail to boot after upgrading to Solaris 11.3 SRU 11 or later if the system has already been configured with more than 127 MAC addresses per PF. Please refer to MOS <Note 2235476.1> for details and the solution.

 

1.2.8 - Some customers temporarily modify passwords during maintenance windows, for example to enable Platinum Patching, and set them back when patching is complete. If this is the case, it is recommended to temporarily disable monitoring of the Exadata storage cells in the Oracle Engineered System Hardware Monitoring (OESHM) utility, or to update the passwords in OESHM. This is necessary because otherwise OESHM's attempts to log in to the storage cells will fail and, by default, the cells will lock out for 10 minutes after repeated failed login attempts. To temporarily disable storage cell monitoring in OESHM, log in to https://<IP of master root domain>:8001, go to the Storage Server tab, select each storage cell in turn, click the "Stop Monitoring" button, and confirm. The cell Hardware Health and Configuration status will change to "In Maintenance". After reverting to the original passwords at the end of the maintenance window, re-enable monitoring of the cells by repeating the process, clicking the "Start Monitoring" button.

 

1.2.9 - When running OEDA, Bug 24945782 "STEP 11 : INITIALIZE CLUSTER SOFTWARE HANGING INTERMITTENTLY" may occasionally be encountered. The workaround is simply to run the step again.

 

1.2.10 - Mitigation issues can occur with ZFSSA after updating ssctuner and osc-domcreate to the Jul 2017 QFSDP. For details and the workaround, please see MOS <Note 2297850.1>.

 

1.2.11 - In certain cases, issues have been observed when updating to IB switch 2.2.x-y firmware. Please refer to MOS <Note 2280595.1> "Infiniband switch running 2.2.x will not boot...", <Note 2301054.1> "How to prevent an Infiniband Switch being rendered un-bootable..." and <Note 2202721.1> "Infiniband Gateway Switch Stays In Pre-boot Environment During Upgrade/Reboot" for details and for how to determine if an IB switch might be vulnerable to potential IB switch upgrade issues.

 

1.2.12 - If running Solaris Cluster 3.3_u2, then you will need to download and install the "Oracle ZFS Storage Appliance Network File System Plug-in for Oracle Solaris Cluster, v1.0.5" with the JAN 2018 QFSDP. Version 1.0.5 of the ZFS SA NFS Plug-in also requires Java 7 Update 151 (or a later Java 7 update) to be installed. For more information on this issue and details of how to install the required components, please follow the instructions in MOS <Note 2326715.1>.
Note that with the latest Solaris Cluster 3.3_u2 packages delivered with the JAN 2018 QFSDP, it is no longer required to set Java 7 as the default version of Java on the target domain, but the appropriate version of Java 7 must still be available on that domain.

 

1.2.13 - Bug 24429841 (Missing pkg:/system/picl as a dependency from runtime/java/jre-* pkgs) has been seen during OEDA deployment with the JAN 2018 QFSDP. If you encounter this issue, the workaround is to manually install the picl package in DB Zones and DB IO Domains.

 

1.2.14 - Per Bug 27299669 "18.1.2.0.0 & 12.2.1.4.4-[PMGR_FUNCT_CHECK_PING][161] COMMAND NOT FOUND: PING6" the following message may be seen in the patchmgr.log:

"[patchmgr_functions_check_ping][161] Command not found: ping6"
These messages are safe to ignore; the issue will be fixed in the 18.2.0.0.0 version of the cell software.

 

1.2.15 - As a result of Bug 27376820 "BACKPORT FIX FOR 27000656 FROM OPATCH_MAIN TO OPATCH_12.2.0.1.12 CODELINE", patching a 12.2.0.1 DB has been seen to take in excess of 5 hours per RAC node.
This issue is fixed in OPatch version 12.2.0.1.12, which is available from MOS. It is recommended that this version of OPatch be used to patch 12.2.0.1 databases to avoid this issue.
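A hedged pre-check of the OPatch version in the database home to be patched ($ORACLE_HOME is a placeholder for the 12.2.0.1 home):

# $ORACLE_HOME/OPatch/opatch version

This should report OPatch Version 12.2.0.1.12 or later before patching proceeds.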

 

1.2.16 - Per Bug 27282890 "If ZFSSA cluster heads are out of sync, false dedup messages may occur", ssctuner may warn users while the ZFS SA heads are in a non-clustered configuration.

When applying the JAN 2018 QFSDP, there will be a period of time during which the ZFS SA controller head nodes are out of sync (when one head has been upgraded and the other has not yet been).
In this situation ssctuner may report the following erroneous error message, either via /var/adm/messages (syslog) or via email, indicating deduplication is set on the ZFSSA:

Dec 17 14:43:10 sc_nodename1 ssctuner: [ID 702911 local0.error] critical: deduplication is set on ZFSSA head node(s). Please disable.

If you see this message, check the ssctuner SMF log on the reporting node (sc_nodename1 in this example) to determine whether it is due to the heads being out of sync. If you see the following text after the dedup error message, there is no dedup issue; your storage heads are simply out of sync (though if the message recurs after 24 hours, that out-of-sync condition should be addressed):
From /var/svc/log/site-application-sysadmin-ssctuner\:default.log -

[ Dec 17 14:43:10 critical: deduplication is set on ZFSSA head node(s). Please disable. ]
[ Dec 17 14:43:10 Dedup is enabled
This controller is running a different software version from its cluster
peer. Configuration changes made to either controller will not be propagated
to its peer while in this state, and may be undone when the two software
versions are synchronized. Please see the appliance documentation for more
information. ]
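A hedged one-liner for pulling the relevant entries from the ssctuner SMF log on the reporting node (the log path is as shown above):

# /usr/bin/egrep -i 'dedup|different software version' /var/svc/log/site-application-sysadmin-ssctuner\:default.log

If both the dedup entry and the "running a different software version" text appear together, the heads are simply out of sync and no dedup action is needed.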

 

1.2.17 - Bug 27134424 "ip interconnect inaccessible after SP reset" has been seen intermittently while testing the JAN 2018 QFSDP.
As a result of this issue, when the SP is reset you may see an "FMD-8000-ET" fault for Problem class: alert.oracle.solaris.fmd.ip-transport.link-down, and the following message may be seen when attempting to run the 'ldm ls-spconfig' command:

# ldm ls-spconfig
The requested operation could not be performed because the communication
channel between the LDoms Manager and the system controller is down.
The ILOM interconnect may be disabled or down (see ilomconfig(1M)).
#

To resolve this issue, please first try resetting the ILOM interconnect from the control domain(s) exhibiting the issue:

# ilomconfig disable interconnect
# ilomconfig enable interconnect

Failing that, issuing a 'reset /SP' on the affected node's ILOM has resolved the problem.
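A hedged verification sequence after resetting the interconnect (output varies by system):

# ilomconfig list interconnect
# ldm ls-spconfig

ilomconfig should report the host-to-ILOM interconnect as enabled, and 'ldm ls-spconfig' should list the saved SP configurations instead of returning the communication-channel error shown above.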

 

1.2.18 - Per Bug 27376916 "Seeing drive offline messages and offlined disk in format for unpresent luns", messages similar to the following have been seen on the console of the domain running the SuperCluster Virtual Assistant when "Thawing" IO Domains:

Feb 4 15:04:20 node0101 scsi: WARNING: /scsi_vhci/ssd@g600144f09a945d5600005a76447a0007 (ssd12):
Feb 4 15:04:20 node0101 drive offline

In addition, running the 'format' command in that domain will result in output similar to the following:

root@node0101:~# format
Searching for disks...done
  AVAILABLE DISK SELECTIONS:
......

    2. c0t600144F0DB7D639400005A77A5B70003d0 <drive not available>
    <=====>
    /scsi_vhci/ssd@g600144f0db7d639400005a77a5b70003
.......

These messages can be ignored and will no longer appear on the next reboot of the domain running the SuperCluster Virtual Assistant.

 

1.2.19 - Bug 27275834 "iscsi: WARNING: connection/login failed/service or target is not operational" has been seen on domains that are running Solaris Cluster and using a quorum disk from the ZFS SA, when updating the ZFS SA as part of the JAN 2018 QFSDP process.
During the ZFS SA upgrade, when the ZFSSA heads are rebooted, the following message has been seen on the console of domains running Solaris Cluster:

Dec 15 11:10:32 node0101 iscsi: WARNING: iscsi connection(117) login failed - iSCSI service or target is not currently operational. (0x03/0x01)
Dec 15 11:10:32 node0101 iscsi: WARNING: iscsi connection(117) login failed - iSCSI service or target is not currently operational. (0x03/0x01)

While these messages are safe to ignore, an intermittent Solaris Cluster Global Zone hang has been experienced at the same time. In case of domain hang, the workaround is to reboot the affected domain.

 

1.2.20 - Bug 27470202 "osc-setcoremem: Resource group commands are not supported warning during runtime"
When osc-setcoremem is run on SuperCluster T4 systems, error messages similar to the following may be returned:

# /opt/oracle.supercluster/bin/osc-setcoremem -type core -res 8/256:8/256:8/256:8/256
Resource group commands are not supported by the software stack running
on this system. Consult the product documentation for more details.

 

The reason for this is that the underlying "ldm ls-rsrc-group" command, which is not supported on the T4 platform, is being erroneously called:

# ldm ls-rsrc-group
Resource group commands are not supported by the software stack running
on this system. Consult the product documentation for more details.

The workaround is to set the SSC_CORE_NODR environment variable to '1' when running osc-setcoremem on SuperCluster T4 after applying the JAN 2018 QFSDP:

# SSC_CORE_NODR=1 /opt/oracle.supercluster/bin/osc-setcoremem

 

1.2.21 - Bug 27470477 "osc-setcoremem appears hung after all inputs and before prompting to confirm"
"Segmentation Fault and core dump" messages have been seen when trying to run osc-setcoremem to change memory allocations across domains.
Other behavior seen is that the osc-setcoremem tool appears to hang right after accepting all required inputs and before prompting the user for confirmation to make the changes.
The root cause of this behavior is osc-setcoremem's attempt to leverage ldm's dynamic reconfiguration functionality.

The workaround is to set the SSC_CORE_NODR environment variable to '1' at runtime:
  

# SSC_CORE_NODR=1 /opt/oracle.supercluster/bin/osc-setcoremem

 

1.2.22 - While the ZFS SA controller heads are being rebooted as part of the QFSDP application, it is possible that the SuperCluster Virtual Assistant's (SVA) Health Monitor may be running at the same time.
As one of the SVA Health Monitor checks verifies connectivity to both ZFS SA controller heads, the inability to connect to the rebooting controller head will generate a Health Monitor check failure. To proceed with the use of SVA when this situation occurs, the failed Health Check needs to be cleared in the SVA Health Monitor.

 

1.2.23 - ZFS SA version 2013.1.8.7.0 and later include ILOM updates for ZS3-* and ZS5-* hardware.
To complete the application of the ILOM updates to the controller heads, a reboot of the ILOM for each controller head must be done manually.
If the ILOM is seen to be running at a "downrev" version, then a problem report similar to the following will be generated in the ZFSSA:

controller-h1-storadm:maintenance problem-002> ls
Properties:
uuid = aebbdd77-22b1-424e-b17f-f6f772b05d83
code = AK-8004-HU
diagnosed = 2017-9-27 07:22:07
phoned_home = never
severity = Major
type = Defect
url = http://support.oracle.com/msg/AK-8004-HU
description = The chassis '1234567AB' is running downrev
Platform Firmware.
impact = The Platform Firmware includes the firmware
for the Service Processor as well as the
System Board firmware. Running downrev
firmware can expose this appliance to
security and stability issues.
response = None.
action = Reboot to install the latest firmware.

Components:

component-000 100% controller-h1-storadm: hc://:chassis-mfg=Oracle-Corporation:chassis-name=SUN-FIRE-X4170-M3:chassis-part=7078183:chassis-serial=1234567AB:fru-serial=1234567AB:fru-revision=SUN-FIRE-X4170-M3/chassis=0 (degraded)
Manufacturer: Oracle
Model: Oracle ZFS Storage ZS3-ES
Serial number: 1234567AB
Revision: SUN-FIRE-X4170-M3

controller-h1-storadm:maintenance problem-002>

The following excerpt from the release notes for the 8.7.x ZFS Storage Appliance (https://updates.oracle.com/Orion/Services/download?type=readme&aru=21672399#uploading_update_packages_to_your_appliance) explains this in more detail in the "Automatic Firmware Update for Oracle ILOM and BIOS" section:

Under certain circumstances, upgrading to software release OS8.7.0 issues a major alert for downrev platform firmware in the Active Problems area of the software. To automatically update Oracle ILOM and/or the BIOS, reboot the appliance as described in "Updating the Platform Firmware" in the online help or in the Oracle ZFS Storage Appliance Customer Service Manual for Release OS8.7.0. This section also contains tasks for checking the current platform firmware versions and determining if new firmware is needed. When rebooting from the BUI, use new power option "Update Platform Firmware and reboot." Similarly, the platform firmware is automatically updated with CLI command maintenance system reboot. If a clustered configuration, perform this procedure on each controller, but not simultaneously. Before updating the peer controller, confirm that the primary controller has completed the firmware update and has joined the cluster configuration. After both controllers have completed the update, confirm that the cluster is highly available.
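Following the excerpt above, a hedged example of triggering the platform firmware update from the ZFSSA CLI, run on one controller head at a time (the hostname is a placeholder):

controller-h1-storadm:> maintenance system reboot

Confirm the first head has completed its firmware update and rejoined the cluster before rebooting the peer.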

 

1.2.24 - When booting the "Master Control Domain" (where the osc-domcreate package is installed) into the new "SCMU_2018.01" boot environment after running install_smu, there may be significant delays in some of the SMF services coming online.
While the migration of the sqlite3 database to mysql should normally complete within 3-5 minutes, in cases where the sqlite3 database (/production/databases/iodine/iodine.db) is very large (over 100MB) it will take longer to complete. For example, it has taken up to 40 minutes for a system with a 770MB sqlite3 database to complete the database migration process.
It is possible to verify that the upgrade service that controls this database migration is delaying the system from bringing all services online by logging into the system via ssh (but not via console login) and running the following command:

# svcs -xv /site/iodct-upgrade-monitor
svc:/site/iodct-upgrade-monitor:default (IODCT Upgrade Monitor)
State: offline* transitioning to online since Fri Feb 02 05:22:08 2018
Reason: Start method is running.
See: http://support.oracle.com/msg/SMF-8000-C4
See: /var/svc/log/site-iodct-upgrade-monitor:default.log
Impact: 17 dependent services are not running:
svc:/milestone/self-assembly-complete:default
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/system/ExaWatcher:default
svc:/system/boot-config:default
svc:/milestone/multi-user-server:default
svc:/system/zones:default
svc:/system/zones-delay:default
svc:/system/zones-install:default
svc:/application/iodctq:default
svc:/system/compmon:default
svc:/system/sp/management:default
svc:/network/smtp:sendmail
svc:/system/auditd:default
svc:/system/console-login:default
svc:/network/sendmail-client:default
svc:/ldoms/vntsd:default
#

While the upgrade service is running, you will see that the iodct-upgrade-monitor SMF service is being brought online and that several dependent services are offline as a result.

If you would like to monitor progress, you can 'tail' the logfile for the iodct-upgrade-monitor service:

  

# tail -f /var/svc/log/site-iodct-upgrade-monitor\:default.log
.
..
...
Copying '/usr/lib/python2.7/site-packages/rest_framework/static/rest_framework/docs/js/highlight.pack.js'
Copying '/usr/lib/python2.7/site-packages/rest_framework/static/rest_framework/docs/js/jquery-1.10.2.min.js'
Copying '/usr/lib/python2.7/site-packages/rest_framework/static/rest_framework/css/bootstrap.min.css'
Copying '/usr/lib/python2.7/site-packages/rest_framework/static/rest_framework/css/default.css'
Copying '/usr/lib/python2.7/site-packages/rest_framework/static/rest_framework/css/prettify.css'
Copying '/usr/lib/python2.7/site-packages/rest_framework/static/rest_framework/css/bootstrap-tweaks.css'

104 static files copied to '/var/opt/oracle.supercluster/iodine/static', 46 unmodified.
Creating default settings allowed file
Changed state of iodct-httpd from "disabled" using cmd "enable", resulting state was "offline*".
Changed state of iodctq from "disabled" using cmd "enable", resulting state was "offline".
Changed state of svacomserver from "disabled" using cmd "enable", resulting state was "offline".
Changed state of svacomagent from "disabled" using cmd "enable", resulting state was "offline".

============================
= Upgrade process complete =
============================
[ Feb 2 05:26:56 Method "start" exited with status 0. ]

 

1.2.25 - Systems that were initially installed with SuperCluster v2.1 software may have issues when attempting to start the Management Agents in the SuperCluster Virtual Assistant (SVA) after updating to the JAN 2018 QFSDP.

The "Management Agents" provide dynamic updates of an IO Domain's state (i.e. Stopped, Starting, Ready for Use, At OpenBoot PROM, etc.), as well as offering the ability to stop and start an IO Domain from the SVA BUI.
After clicking "Start Agent" in the Management Agents section of the SVA BUI, the following error may be returned in the dialog box at the top of the screen:
"Could Not Start Agent Server on Master Control Domain: ,ksh:/opt/oracle.supercluster/osc-domcreate/iodine/SVAComms/svaserverctrl: cannot execute [Permission denied]"

The underlying reason for this issue is incorrect file permissions and ownership on the /opt/oracle.supercluster/osc-domcreate/iodine and /opt/oracle.supercluster/osc-domcreate/iodine/iodine directories. Affected systems will see the following output when 'pkg verify osc-domcreate' is run:

# pkg verify osc-domcreate
PACKAGE STATUS
pkg://exa-family/system/platform/supercluster/osc-domcreate ERROR
dir: opt/oracle.supercluster/osc-domcreate/iodine
ERROR: Group: 'bin (2)' should be 'oscutils (61000)'
ERROR: Mode: 0700 should be 0750
dir: opt/oracle.supercluster/osc-domcreate/iodine/iodine
ERROR: Group: 'bin (2)' should be 'oscutils (61000)'
ERROR: Mode: 0700 should be 0750
#
 
This issue can be resolved by running 'pkg fix osc-domcreate':

# pkg fix osc-domcreate
Packages to fix: 1
Create boot environment: No
Create backup boot environment: Yes

Repairing: pkg://exa-family/system/platform/supercluster/osc-domcreate@0.5.11,5.11-2.5.0.1378:20180117T191204Z
PACKAGE STATUS
pkg://exa-family/system/platform/supercluster/osc-domcreate ERROR
dir: opt/oracle.supercluster/osc-domcreate/iodine
ERROR: Group: 'bin (2)' should be 'oscutils (61000)'
ERROR: Mode: 0700 should be 0750
dir: opt/oracle.supercluster/osc-domcreate/iodine/iodine
ERROR: Group: 'bin (2)' should be 'oscutils (61000)'
ERROR: Mode: 0700 should be 0750
PHASE ITEMS
Updating modified actions 2/2
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 2/2
#

 

 

2 Modification History

Table 1 lists the modification history for this document.

Table 1 Modification History

Date        Modification

Feb 2018    • Released

3 Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/us/corporate/accessibility/index.html.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/support/contact.html or visit http://www.oracle.com/us/corporate/accessibility/support/index.html if you are hearing impaired.


Oracle Database Quarterly Full Stack Download Patch for SuperCluster (Jan 2018) Known Issues

Copyright © 2006, 2018, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark licensed through X/Open Company, Ltd.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

 

 


Attachments
This solution has no attachment