Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-72-1545460.1
Update Date:2018-01-08
Keywords:

Solution Type: Problem Resolution

Solution 1545460.1: Sun Storage 7000 Unified Storage System: RMAN Backup Is Not Writing To One Of The NFS (ZFS) Mount Points


Related Items
  • Sun ZFS Storage 7420
  • Exadata Database Machine X2-2 Full Rack
  • Sun ZFS Storage 7320
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS




In this Document
Symptoms
Cause
Solution
References


Created from <SR 3-7036153741>

Applies to:

Exadata Database Machine X2-2 Full Rack - Version All Versions and later
Sun ZFS Storage 7420 - Version All Versions and later
Sun ZFS Storage 7320 - Version All Versions and later
7000 Appliance OS (Fishworks)

Symptoms

To discuss this information further with Oracle experts and industry peers, we encourage you to review, join or start a discussion in the My Oracle Support Community - Disk Storage ZFS Storage Appliance

On the Exadata X2-2 full rack, RMAN is not using one of the eight NFS backup mount points when taking backups.


Although free space is available and a channel is allocated for that mount point, RMAN skips it and writes the backup pieces to another mount point.
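
The exact backup script is not included in the SR; as a rough, illustrative sketch only (the channel names, format strings and the shell here-document below are assumptions), a channel-per-mount configuration of this kind might look like:

rman target / <<'EOF'
run {
  # one disk channel per NFS mount point (ch01..ch08 in the real setup)
  allocate channel ch01 device type disk format '/export/zfs/ods/exa-bkp1/backups/odsoltp/%U';
  allocate channel ch05 device type disk format '/export/zfs/ods/exa-bkp5/backups/odsoltp/%U';
  # remaining channels follow the same pattern for exa-bkp2..exa-bkp8
  backup database;
}
EOF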

When a backup is attempted using the problematic service, the following error is returned:

RMAN-03009: failure of backup command on ch06 channel at 04/06/2013 12:38:50
ORA-19502: write error on file "/export/zfs/ods/exa-bkp5/backups/odsoltp/oltp_sqo6d8j6_1_1", block number 176128 (block size=512)
ORA-17500: ODM err:KGNFS WRITE FAIL:Disk quota exceeded
ORA-19502: write error on file "/export/zfs/ods/exa-bkp5/backups/odsoltp/oltp_sqo6d8j6_1_1", block number 32768 (block size=512)
ORA-17500: ODM err:KGNFS WRITE FAIL:Disk quota exceeded
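
For context (this check is not shown in the SR and is only a sketch), the "Disk quota exceeded" error is surprising because the mount point itself reports plenty of free space at the OS level, for example on the database server:

df -h /export/zfs/ods/exa-bkp5
# the Filesystem column also shows which server:/export backs this mount point in the kernel mount table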

 

Cause

The customer reports that several NFS servers, Solaris servers using ZFS, are used for RMAN backups.

When a backup is taken to the share "bkp5", the backup is actually stored on another NFS server. The customer observes this problem only with the share "bkp5"; the others run fine.

The customer is using a ZFS appliance on which two pools with four NFS shares each (eight NFS shares in total) have been created. These shares are mounted on the Exadata machine.

The customer is facing issues when backing up to NFS share 5 (bkp5). Regardless of whether NFS share 4 (bkp4) is mounted or unmounted, backups taken through NFS share 5 (bkp5) are always directed to NFS share 4 (bkp4).


There is no quota set up on the server.

While zfssa-head-1 has shares that are quite full, that is not the case for zfssa-head-2, which owns the share exa_bkp5.

I don't see a logical reason at this time for there to be quota errors against the share exa_bkp5.

egrep "used |available| quota" cifs.out |grep exa| less
zfssa-head2-pool/local/ods/exa_bkp5 used 9.84T -
zfssa-head2-pool/local/ods/exa_bkp5 available 7.16T -
zfssa-head2-pool/local/ods/exa_bkp5 quota 17T local
zfssa-head2-pool/local/ods/exa_bkp6 used 14.7T -
zfssa-head2-pool/local/ods/exa_bkp6 available 2.32T -
zfssa-head2-pool/local/ods/exa_bkp6 quota 17T local
zfssa-head2-pool/local/ods/exa_bkp7 used 14.0T -
zfssa-head2-pool/local/ods/exa_bkp7 available 2.97T -
zfssa-head2-pool/local/ods/exa_bkp7 quota 17T local
zfssa-head2-pool/local/ods/exa_bkp8 used 14.6T -
zfssa-head2-pool/local/ods/exa_bkp8 available 2.45T -
zfssa-head2-pool/local/ods/exa_bkp8 quota 17T local
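
The listing above appears to follow the NAME / PROPERTY / VALUE / SOURCE layout of 'zfs get' output as collected in the support bundle (cifs.out). On a plain Solaris server running ZFS the same properties could be read directly, for example (dataset name taken from the output above):

zfs get used,available,quota zfssa-head2-pool/local/ods/exa_bkp5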



Confirmed that there are no user quota limits set for zfssa-head-2:

zfssa-head-2:shares ods/exa_bkp5> users
zfssa-head-2:shares ods/exa_bkp5 users> show
Users:

USER NAME USAGE QUOTA
user-000 300 9.83T -
user-001 root 12.9G -
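
The per-user figures above correspond to ZFS user-space accounting. Outside the appliance CLI, on a generic ZFS system, an equivalent listing could be produced with, for example:

zfs userspace -o name,used,quota zfssa-head2-pool/local/ods/exa_bkp5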

 
No log messages, errors, or configuration issues were found here that would explain the behaviour seen.

No issue was found on the ZFSSA side.

Solution

The customer finally inspected the Direct NFS configuration file, /etc/oranfstab:

#######################################################################
# zfssa-head-1:zfssa-head1-pool
server: 192.168.10.25 # IPMP1 on zfssa-h1
local: 192.168.10.1 path: 192.168.10.24 # local IB address multipath1 on cnode1
local: 192.168.10.1 path: 192.168.10.26 # local IB address multipath2 on cnode1
local: 192.168.10.1 path: 192.168.10.27 # local IB address multipath3 on cnode1
dontroute: # outgoing messages not routed by OS
# shares for head1-pool :
export: /export/zfs/ods/exa-bkp1 mount: /export/zfs/ods/exa-bkp1
export: /export/zfs/ods/exa-bkp2 mount: /export/zfs/ods/exa-bkp3 <<<< Strange
export: /export/zfs/ods/exa-bkp3 mount: /export/zfs/ods/exa-bkp4 <<<< Strange
export: /export/zfs/ods/exa-bkp4 mount: /export/zfs/ods/exa-bkp5 <<<< Double mount !!!
  
# zfssa-head-2:zfssa-head2-pool
server: 192.168.10.28 # IPMP1 on zfssa-h2
local: 192.168.10.1 path: 192.168.10.29 # local IB address multipath1 on cnode2
local: 192.168.10.1 path: 192.168.10.30 # local IB address multipath2 on cnode2
local: 192.168.10.1 path: 192.168.10.31 # local IB address multipath3 on cnode2
dontroute: # outgoing messages not routed by OS
# shares for head2-pool :
export: /export/zfs/ods/exa-bkp5 mount: /export/zfs/ods/exa-bkp5 <<<< Double mount !!!
export: /export/zfs/ods/exa-bkp6 mount: /export/zfs/ods/exa-bkp6
export: /export/zfs/ods/exa-bkp7 mount: /export/zfs/ods/exa-bkp7
export: /export/zfs/ods/exa-bkp8 mount: /export/zfs/ods/exa-bkp8
#######################################################################


As you can see, exa-bkp5 is mounted twice: once from head1 (where the mount point exa-bkp5 is in fact mapped to the export exa-bkp4) and once from head2. This mismatch would explain why backups intended for bkp5 ended up on bkp4, and why a quota error was raised even though head2's exa_bkp5 share had plenty of space.


The last three 'export' lines for head1 are wrong when compared with the 'export' lines for head2: each export path should be mounted on its matching mount point.
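
For comparison, assuming the intended configuration is the same one-to-one mapping of export path to mount point that head2 uses, the head1 lines would be expected to read:

export: /export/zfs/ods/exa-bkp1 mount: /export/zfs/ods/exa-bkp1
export: /export/zfs/ods/exa-bkp2 mount: /export/zfs/ods/exa-bkp2
export: /export/zfs/ods/exa-bkp3 mount: /export/zfs/ods/exa-bkp3
export: /export/zfs/ods/exa-bkp4 mount: /export/zfs/ods/exa-bkp4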

The customer corrected /etc/oranfstab, and the issue was resolved.

All working as expected now.
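
As a follow-up check (not part of the original SR, and assuming Direct NFS is enabled), the v$dnfs_servers and v$dnfs_files views can be queried to confirm which server each open backup file is actually being served from; note that changes to /etc/oranfstab are typically only picked up after the database instance is restarted:

sqlplus -s / as sysdba <<'EOF'
-- servers and exports Direct NFS is currently talking to
select svrname, dirname from v$dnfs_servers;
-- which server serves each file currently open through Direct NFS
select f.filename, s.svrname
  from v$dnfs_files f join v$dnfs_servers s on f.svr_id = s.id;
EOF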

 

 

Checked for Currency - 15-Apr-2017

 

References

<NOTE:1392492.1> - Oracle ZFS Storage Appliance: Performance Issue when Pool is almost Full

Attachments
This solution has no attachment