
Asset ID: 1-72-1543291.1
Update Date: 2017-10-05
Keywords:

Solution Type: Problem Resolution Sure

Solution 1543291.1: Sun Storage 7000 Unified Storage System: Replication Snapshot Inconsistency seen in the BUI/CLI


Related Items
  • Sun ZFS Storage 7420
  • Sun Storage 7110 Unified Storage System
  • Oracle ZFS Storage ZS3-2
  • Oracle ZFS Storage ZS4-4
  • Sun Storage 7210 Unified Storage System
  • Sun Storage 7410 Unified Storage System
  • Sun Storage 7310 Unified Storage System
  • Sun ZFS Storage 7120
  • Oracle ZFS Storage ZS3-4
  • Oracle ZFS Storage Appliance Racked System ZS4-4
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage ZS3-BA
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS




In this Document
Symptoms
Cause
Solution
References


Created from <SR 3-6664026241>

Applies to:

Sun Storage 7110 Unified Storage System - Version All Versions and later
Sun Storage 7210 Unified Storage System - Version All Versions and later
Sun Storage 7310 Unified Storage System - Version All Versions and later
Sun Storage 7410 Unified Storage System - Version All Versions and later
Sun ZFS Storage 7120 - Version All Versions and later
7000 Appliance OS (Fishworks)

Symptoms

To discuss this information further with Oracle experts and industry peers, we encourage you to review, join or start a discussion in the My Oracle Support Community - Disk Storage ZFS Storage Appliance

The issue is seen on the source head when replication is enabled.

When a scheduled project-level replication runs, only a single project-level snapshot is shown in the BUI, but the filesystems inside the project show multiple snapshots.

As an example, on one affected system we currently have:

Filesystems:
NAME               SIZE MOUNTPOINT
home1             1.03T /export/nas-a01/home1
share_rw_root      396M /export/nas-a01/share_rw_root
share_rw          1.14M /export/nas-a01/share_rw
share_r            730M /export/nas-a01/share_r
home              1.02M /export/nas-a01/home

nas1:shares nas1> snapshots show
Snapshots:
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-8e

nas1:shares nas1> select home1 snapshots show
Snapshots:
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-89
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-8a
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-8b
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-8c
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-8d
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-8e

nas1:shares nas1> select share_rw_root snapshots show
Snapshots:
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-7d
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-7e
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-7f
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-80
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-81
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-82
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-83
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-84
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-85
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-86
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-87
.rr-ec2cfc6a-5877-63ff-90d1-cb82a69f28ae-88
...

 

In the restricted Solaris shell, only the latest snapshot of the filesystem is actually present on disk.
This means the BUI/CLI is showing stale information: the old snapshots no longer exist.

On the source head:
ss7120-sin06-c# pwd
/export/Repl_fs1/.zfs/snapshot
ss7120-sin06-c# ls -al
total 3
dr-xr-xr-x   3 root     root           3 Jan 22 05:00 .
dr-xr-xr-x   4 root     root           4 Jan 16 06:36 ..
drwx------   2 nobody   other          3 Jan 16 07:57 .rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a-473

On the target head, two snapshots per filesystem are visible, which is normal (the latest snapshot plus the previous one, which is kept as the base for the next incremental update):
ss7120-sin06-b# zfs list -t snapshot
NAME                                                                                                          USED  AVAIL  REFER  MOUNTPOINT
pool1/nas-rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a/Repl@.rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a-472            1K      -    31K  -
pool1/nas-rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a/Repl@.rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a-473             0      -    31K  -
pool1/nas-rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a/Repl/Repl_fs1@.rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a-472   1K      -   131M  -
pool1/nas-rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a/Repl/Repl_fs1@.rr-bf5b477a-7fda-e3ff-bd82-a47dfa7e9d7a-473    0      -   131M  -
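A quick way to confirm the discrepancy is to count the snapshot directories that actually exist on disk and compare that number against the entries reported by "snapshots show" in the CLI. This is only a sketch, reusing the mountpoint /export/Repl_fs1 from the listing above; ls and wc are standard Solaris utilities available in the restricted shell:

ss7120-sin06-c# ls /export/Repl_fs1/.zfs/snapshot | wc -l        ----> counts the snapshots that actually exist on disk; compare with the number of .rr-* entries the BUI/CLI reports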

 

Cause

This is a cosmetic issue.

The BUI/CLI does not reflect the actual state: it still lists snapshots that no longer exist.

No space is being consumed; the old entries were simply never cleaned up from the BUI/CLI view.
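To verify that no space is being held by the stale entries, the real snapshots and their space usage can be listed from the restricted shell on the source head. This is a sketch only; the dataset path pool1/local/Repl/Repl_fs1 is an assumption about this system's pool and project layout (local shares normally live under <pool>/local/<project>/<share>):

ss7120-sin06-c# zfs list -r -t snapshot -o name,used pool1/local/Repl/Repl_fs1        ----> only the snapshot(s) that really exist are listed, along with their USED space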
 

Solution

The final fix for this issue is included in Appliance Firmware Release 2013.1.0.1.


Although this does not impact any functionality, the stale entries can easily be cleaned up with either of the workarounds below:

1. Edit the replication "Action" and save it.
    (Nothing actually needs to be changed; re-saving the action removes all bogus replication snapshots in that project. A CLI sketch of this step follows the list.)
or
2. Restart the management interface from the CLI:
> maintenance system restart                 ----> this will NOT restart the appliance; it only restarts the management interface.
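For reference, this is how workaround 1 could look from the appliance CLI. It is a sketch only: the project name Repl and the action name action-000 are illustrative placeholders (the real names differ per system), and whether a no-change commit from the CLI triggers the same cleanup as an edit/apply in the BUI is an assumption. The equivalent BUI path is the project's Replication tab under Shares > Projects.

ss7120-sin06-c:> shares
ss7120-sin06-c:shares> select Repl
ss7120-sin06-c:shares Repl> replication
ss7120-sin06-c:shares Repl replication> show                      ----> lists the configured actions, e.g. action-000
ss7120-sin06-c:shares Repl replication> select action-000
ss7120-sin06-c:shares Repl replication action-000> show           ----> review the settings; nothing needs to be changed
ss7120-sin06-c:shares Repl replication action-000> commit         ----> re-save the action; the stale snapshot entries should disappear from the BUI/CLI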

  

References

<BUG:16200513> - INCONSISTENT NUMBER OF SNAPSHOTS FOR SHARES DURING REPLICATION

Attachments
This solution has no attachment