
Asset ID: 1-71-1392082.1
Update Date:2018-01-05
Keywords:

Solution Type: Technical Instruction

Solution 1392082.1: Sun Storage 7000 Unified Storage System: How to free some space in the 'system' pool


Related Items
  • Sun ZFS Storage 7420
  • Sun Storage 7110 Unified Storage System
  • Sun Storage 7210 Unified Storage System
  • Sun Storage 7410 Unified Storage System
  • Sun Storage 7310 Unified Storage System
  • Sun ZFS Storage 7120
  • Sun ZFS Storage 7320
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS
  • _Old GCS Categories>Sun Microsystems>Storage - Disk>Unified Storage




In this Document
Goal
Solution
 Support Bundle
 Analytics
 Previous Software Releases
 Logs


Applies to:

Sun Storage 7210 Unified Storage System - Version All Versions and later
Sun Storage 7310 Unified Storage System - Version All Versions and later
Sun Storage 7410 Unified Storage System - Version All Versions and later
Sun ZFS Storage 7120 - Version All Versions and later
Sun ZFS Storage 7320 - Version All Versions and later
7000 Appliance OS (Fishworks)
NAS head revision : [not dependent]
BIOS revision : [not dependent]
ILOM revision : [not dependent]
JBODs Model : [not dependent]
CLUSTER related : [not dependent]

Goal

In the lifecycle of the NAS head, the system pool may fill up with logs, support bundles, analytics datasets, and even previous software releases. This document describes how to find where space can be freed in the system pool. Some of this information applies to any ZFS filesystem that uses snapshots and clones.
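
As a starting point, you can check overall system pool usage and see which datasets consume the space. A minimal sketch using standard ZFS commands from the shell (TSC use; the appliance CLI examples below are the supported path):

# zpool list system                  # overall size/allocated/free for the system pool
# zfs list -o space -r system        # per-dataset breakdown: USEDSNAP, USEDDS, USEDCHILD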

Solution

Support Bundle

Every time a support bundle is collected, a gzip-ed tarball is created in the system pool; until it is uploaded or destroyed, it resides in the /var/ak/bundles directory.
A bundle may contain process core dumps or even kernel crash dump images, which are known to be huge (up to 10GB). Normally, once a core dump has been sent to Oracle Support, it is deleted from the system.
When the NAS head is not connected to the internet, customers can retrieve the bundle from the graphical interface (BUI), but this does not delete it from the system.
To check whether any bundles are present on a system, do the following:
cli> maintenance system bundles show
Bundles:
BUNDLE                                                      STATUS            PROGRESS
ak.e738b11b-6000-ef9e-898a-8231a1ed87c3.tar.gz              Uploading          0%

You can remove a support bundle as follows:

zs7420-tvp540-a-h2-mgmt:> maintenance system bundles destroy e738b11b-6000-ef9e-898a-8231a1ed87c3
This will destroy "bundle e738b11b-6000-ef9e-898a-8231a1ed87c3".
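
If shell access is available (TSC use), you can also check how much space pending bundles consume with standard commands; a minimal sketch, nothing appliance-specific:

# du -sh /var/ak/bundles    # total space held by bundle tarballs
# ls -lh /var/ak/bundles    # individual bundles and their sizes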

Analytics

Analytics data collection is a well-known consumer of system pool space. You can check how big the datasets are, as follows:
zs7420-tvp540-a-h2-mgmt:> analytics datasets show
Datasets:

DATASET     STATE   INCORE ONDISK NAME
dataset-000 active   1.99M   888M arc.accesses[hit/miss]
dataset-001 active    893K  1.13G arc.l2_accesses[hit/miss]
dataset-019 active   14.6M  6.31G io.ops[disk]
dataset-031 active   27.0M  7.22G nfs3.ops[latency]
dataset-034 active   35.2M  6.10G nfs3.ops[share]
If some Analytics datasets are getting big, you can destroy them as follows:

zs7420-tvp540-a-h2-mgmt:> analytics datasets destroy dataset-031
This will destroy "dataset-031". Are you sure? (Y/N)
Note: it may take some time (several minutes) for the CLI prompt to return, and you may have to recreate the dataset from scratch if it is required for proper bundle collection. You can safely destroy any dataset that the bundle and the status screen do not need. For example, the [latency], [client], [share], and [file] drill-down datasets can (and should) all be deleted if they are not being used.
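
If a destroyed dataset is needed again later, it can be recreated from the CLI. A sketch, assuming the create verb of the analytics datasets context and reusing nfs3.ops[latency] from the list above as the example statistic:

zs7420-tvp540-a-h2-mgmt:> analytics datasets create nfs3.ops[latency]

Collection then restarts from scratch; the historical data destroyed earlier is not recovered.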

If the system is running software which has the 'analytics retention' functionality (2011.1.3.0 or newer), you can instead adjust the data retention time of the datasets. The value appears to be set in hours (336 hours = 2 weeks in the example below):
nas3b:analytics settings> show
Properties:
            retain_second_data = 1 weeks (uncommitted)
            retain_minute_data = 1 months
              retain_hour_data = 2 months

nas3b:analytics settings> set retain_second_data=336
            retain_second_data = 2 weeks (uncommitted)
nas3b:analytics settings> show
Properties:
            retain_second_data = 2 weeks (uncommitted)
            retain_minute_data = 1 months
              retain_hour_data = 2 months

nas3b:analytics settings> commit

 

Previous Software Releases

During the life cycle of a Sun Storage 7000 Unified Storage System, software updates may have been performed to a new Supported Software Release to benefit from the latest enhancements. Keeping a previous software release can be useful after an upgrade, to check for a certain period that everything behaves as expected. It is also strongly recommended to keep one previous version in case the system's boot filesystem ever becomes corrupted: the affected head of the appliance can then be rolled back to that earlier version via the GRUB menu for emergency maintenance.
If Analytics data was collected under a previous software release and the NAS head has since been upgraded, deleting datasets will not remove any data created before the upgrade. This is because the new software release is based on snapshots and clones of the previous release(s): anything deleted on the current release still exists in the previous one. The final solution is to destroy the previous software release.
Do as follows:
maintenance system updates> show
Updates:

UPDATE                           DATE                      STATUS
ak-nas@2010.08.17.4.2,1-1.37     2011-9-19 17:37:58        previous
ak-nas@2011.04.24.0.0,1-0.9      2011-9-22 01:35:13        current
...
zs7420-tvp540-a-h2-mgmt:maintenance system updates> destroy ak-nas@2010.08.17.4.2,1-1.37
This will destroy the update "ak-nas@2010.08.17.4.2,1-1.37".

Are you sure? (Y/N)

More detailed information is provided here for TSC support engineers:
This NAS head is running the 2011.1 software release, which is based on a snapshot of the 1.37 software release. The file /var/ak/stash/fred/null.out was created before the upgrade to 2011.1.

# zfs list -o space -t all| egrep "running/stash|AVAIL"
NAME                                                                            AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash                                187G  7.32G         0   7.32G              0          0
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash@ak-nas-2011.04.24.0.0_1-0.9       -      0         -       -              -          -
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash                                 187G  33.7M         0   33.7M              0          0

# zpool list system
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
system   464G   249G   215G    53%  1.00x  ONLINE  -

# pwd
/var/ak/stash/fred
# df -h .
Filesystem             size   used  avail capacity  Mounted on
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash
                       457G   7.3G   191G     4%    /var/ak/stash

# ls -lh /var/ak/stash/fred/null.out
-rw-r--r--   1 root     root        7.3G Jan  6 14:15 /var/ak/stash/fred/null.out

# rm null.out

Doing the "rm null.out" does not change anything in the system pool usage (see "USED" col and zpool list) because the current data is a clone of data available in 1.37.

zs7420-tvp540-a-h2-mgmt# zfs get origin system/ak-nas-2011.04.24.0.0_1-0.9/running/stash
NAME                                              PROPERTY  VALUE                                                                          SOURCE
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash  origin    system/ak-nas-2010.08.17.4.2_1-1.37/running/stash@ak-nas-2011.04.24.0.0_1-0.9  -

zs7420-tvp540-a-h2-mgmt# zfs list -o space -t all| egrep "running/stash|AVAIL"
NAME                                                                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash                               191G  7.32G         0   7.32G              0          0
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash@ak-nas-2011.04.24.0.0_1-0.9      -      0         -       -              -          -
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash                                191G  35.2M         0   35.2M              0          0

zs7420-tvp540-a-h2-mgmt# zpool list system
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
system   464G   249G   215G    53%  1.00x  ONLINE  -

zs7420-tvp540-a-h2-mgmt# pwd
/var/ak/stash/fred


The "df -h" command shows we have freed some space - but the null.out file is still present on the 1.37 snapshot. To get more free space, we have to destroy the previous releases.

zs7420-tvp540-a-h2-mgmt# df -h .
Filesystem             size   used  avail capacity  Mounted on
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash
                       457G    75M   191G     1%    /var/ak/stash

The only way to actually free the space in the pool is to destroy the previous release:
maintenance system updates> destroy ak-nas@2010.08.17.4.2,1-1.37
This will destroy the update "ak-nas@2010.08.17.4.2,1-1.37".
Are you sure? (Y/N)

zs7420-tvp540-a-h2-mgmt# zfs list -o space -t all| egrep "running/stash|AVAIL"
NAME                                                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash           202G  75.8M         0   75.8M              0          0

zs7420-tvp540-a-h2-mgmt# zpool list system
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
system   464G   240G   224G    51%  1.00x  ONLINE  -


Note: manually mounting the previous release and deleting the files there will not help either, as the data simply moves from the dataset to its immediate snapshot:

zs7420-tvp540-a-h2-mgmt# mount -F zfs system/ak-nas-2010.08.17.4.2_1-1.37/running/stash /tmp/fred
zs7420-tvp540-a-h2-mgmt# cd /tmp/fred/fred
zs7420-tvp540-a-h2-mgmt# df -h .
Filesystem             size   used  avail capacity  Mounted on
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash
                       457G   7.3G   191G     4%    /tmp/fred
zs7420-tvp540-a-h2-mgmt# rm null.out    <- even though already removed on the 2011.1 filesystem, this shows the file was still present on the previous release
zs7420-tvp540-a-h2-mgmt# df -h .
Filesystem             size   used  avail capacity  Mounted on
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash
                       457G    71M   191G     1%    /tmp/fred

zs7420-tvp540-a-h2-mgmt# zfs list -o space -t all| egrep "running/stash|AVAIL"
NAME                                                                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash                               191G  7.32G     7.25G   70.9M              0          0
system/ak-nas-2010.08.17.4.2_1-1.37/running/stash@ak-nas-2011.04.24.0.0_1-0.9      -  7.25G         -       -              -          - <<<
system/ak-nas-2011.04.24.0.0_1-0.9/running/stash                                191G  36.0M         0   36.0M              0          0


Finally, one might think of mounting the snapshot as well to do some cleaning, but as a snapshot is read-only by nature, nothing inside it can be removed.
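
The behaviour shown above is generic ZFS snapshot/clone space accounting rather than anything appliance-specific. A minimal sketch that reproduces it on any ZFS system, using a hypothetical pool named tank (all names are illustrative):

# zfs create tank/old                      # stands in for the previous release's dataset
# mkfile 1g /tank/old/big.out              # create a large file (Solaris mkfile; dd works elsewhere)
# zfs snapshot tank/old@upgrade            # snapshot taken at "upgrade" time
# zfs clone tank/old@upgrade tank/new      # the "current release" is a clone of that snapshot
# rm /tank/new/big.out                     # removing the file from the clone...
# zpool list tank                          # ...frees almost nothing: the blocks are still
                                           #    referenced by tank/old and tank/old@upgrade
# zfs promote tank/new                     # hand the origin snapshot over to the clone
# zfs destroy tank/old                     # the old filesystem can now be destroyed
# zfs destroy tank/new@upgrade             # destroying the snapshot finally releases the 1GB
# zpool list tank

This mirrors what destroying the previous update achieves on the appliance: the space only comes back once no snapshot or clone references the old blocks.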

Logs

The Sun Storage 7000 Unified Storage System collects many logs for support purposes. The only way for customers to check these and do some cleaning is to open a Service Request with Oracle System Support.


For TSC only:
You can remove old (and large) log files from /var/ak/logs. Pay attention to the ones ending in a digit: move them to /var/tmp before deleting them, and do not delete the files without a trailing digit. Old files named like "20110921114213.not_terminated.unknown" can also be removed, provided the customer has no issues accessing or running commands in the shell.
rm /var/ak/logs/*.??.??
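
To identify the largest candidates first, ordinary Solaris commands are enough; a minimal sketch, nothing appliance-specific:

# du -sh /var/ak/logs                     # total space consumed by the logs
# du -ak /var/ak/logs | sort -n | tail    # the largest entries, sizes in KB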
 


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.