Asset ID: 1-72-2020021.1
Update Date: 2017-03-19
Keywords:
Solution Type: Problem Resolution Sure Solution
2020021.1: SuperCluster Zone Attach Gets suid Error
Related Items
- SPARC SuperCluster T4-4 Full Rack
- Oracle SuperCluster T5-8 Full Rack
- Solaris Operating System
- Oracle SuperCluster T5-8 Half Rack
- SPARC SuperCluster T4-4 Half Rack
- Oracle SuperCluster M6-32 Hardware
Related Categories
- PLA-Support>Eng Systems>Exadata/ODA/SSC>SPARC SuperCluster>DB: SuperCluster_EST
Created from <SR 3-10895692861>
Applies to:
Oracle SuperCluster T5-8 Half Rack - Version All Versions and later
SPARC SuperCluster T4-4 Full Rack - Version All Versions and later
SPARC SuperCluster T4-4 Half Rack - Version All Versions and later
Oracle SuperCluster T5-8 Full Rack - Version All Versions and later
Oracle SuperCluster M6-32 Hardware - Version All Versions and later
Information in this document applies to any platform.
Symptoms
A SuperCluster Solaris 11 Application LDom has two zones in the unavailable state:
# zoneadm list -cv
  ID NAME          STATUS       PATH                    BRAND    IP
   0 global        running      /                       solaris  shared
   - orlt4db02z2   unavailable  /zoneHome/orlt4db02z2   solaris  excl    <== bad
Attempting to attach the zone fails with the error below:
# zoneadm -z orlt4db02z2 attach
Zonepath /zoneHome/orlt4db02z2 is on a nosuid file system.
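As an optional first check (not part of the original SR; the dataset and zonepath names are the ones from this example), confirm how the zonepath is currently mounted and whether the setuid property is enabled on its dataset:
# mount -v | grep /zoneHome/orlt4db02z2
# zfs get setuid,mountpoint,mounted orlt4db02z2/orlt4db02z2
If mount -v shows nosuid, or zfs get reports setuid off for the zonepath dataset, the attach will fail with the error above.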
The zpools show as ONLINE:
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
orlt4db02z1 299G 20.6G 278G 6% 1.00x ONLINE -
orlt4db02z2 164G 22.8G 141G 13% 1.00x ONLINE -
orlt4db02z3 164G 41.3G 123G 25% 1.00x ONLINE -
rpool 278G 134G 144G 48% 1.00x ONLINE -
u01-pool 278G 198K 278G 0% 1.00x ONLINE -
Cause
The zpools did not import correctly, which left the zonepath on a file system mounted nosuid.
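One way to confirm this (a hedged diagnostic, not taken from the original SR) is to check whether the pool's datasets report as mounted and with setuid enabled:
# zfs get -r mounted,setuid orlt4db02z2
Datasets showing mounted no, or setuid off on the zonepath dataset, indicate the import did not complete as expected.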
Solution
If this happens, first verify that the zone's ZFS datasets are present:
# zfs list | grep orlt4db02z2
orlt4db02z2 22.8G 139G 32K /orlt4db02z2
orlt4db02z2/orlt4db02z2 3.63G 41.4G 35K /zoneHome/orlt4db02z2
orlt4db02z2/orlt4db02z2/rpool 3.63G 41.4G 31K /zoneHome/orlt4db02z2/root/rpool
orlt4db02z2/orlt4db02z2/rpool/ROOT 3.63G 41.4G 31K legacy
orlt4db02z2/orlt4db02z2/rpool/ROOT/solaris-0 3.63G 41.4G 3.58G /zoneHome/orlt4db02z2/root
orlt4db02z2/orlt4db02z2/rpool/ROOT/solaris-0/var 54.6M 41.4G 54.6M /zoneHome/orlt4db02z2/root/var
orlt4db02z2/orlt4db02z2/rpool/VARSHARE 45K 41.4G 45K /zoneHome/orlt4db02z2/root/var/share
orlt4db02z2/orlt4db02z2/rpool/export 419K 41.4G 32K /zoneHome/orlt4db02z2/root/export
orlt4db02z2/orlt4db02z2/rpool/export/home 387K 41.4G 32K /zoneHome/orlt4db02z2/root/export/home
orlt4db02z2/orlt4db02z2/rpool/export/home/oracle 355K 41.4G 355K /zoneHome/orlt4db02z2/root/export/home/oracle
orlt4db02z2/orlt4db02z2DB 19.1G 139G 31K /orlt4db02z2/orlt4db02z2DB
orlt4db02z2/orlt4db02z2DB/u01 19.1G 101G 19.1G legacy
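Before exporting, it can help to confirm that nothing is holding the zonepath open, otherwise the export will fail with a busy error. This check is an addition here, not part of the original SR:
# fuser -c /zoneHome/orlt4db02z2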
Then export and re-import the zpool for the zone:
# zpool export orlt4db02z2
# zpool import orlt4db02z2
Attach and boot the zone, checking its status after each step:
# zoneadm -z orlt4db02z2 attach
# zoneadm list -cv
# zoneadm -z orlt4db02z2 boot
# zoneadm list -cv
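Once the zone boots, an optional health check (not in the original note) is to log in and confirm no services are in maintenance:
# zlogin orlt4db02z2 svcs -xv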
Caution: Exporting the complete pool may affect other zones if another zone's root resides on the same pool in a different dataset.
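To check for that before exporting, list every dataset on the pool and compare the mountpoints against the configured zonepaths (a sketch using the pool name from this example):
# zfs list -r orlt4db02z2
# zoneadm list -cv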
Attachments
This solution has no attachment