Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-1388258.1
Update Date: 2018-01-08
Keywords:

Solution Type  Technical Instruction

Solution  1388258.1 :   Oracle ZFS Storage Appliance: How to replicate an existing project and change tunables on the target from the start.  


Related Items
  • Sun ZFS Storage 7420
  • Oracle ZFS Storage ZS3-2
  • Sun Storage 7110 Unified Storage System
  • Oracle ZFS Storage ZS4-4
  • Sun Storage 7210 Unified Storage System
  • Sun Storage 7410 Unified Storage System
  • Oracle ZFS Storage ZS3-4
  • Sun Storage 7310 Unified Storage System
  • Sun ZFS Storage 7120
  • Oracle ZFS Storage Appliance Racked System ZS4-4
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage ZS3-BA
  • Sun Storage 7720 Unified Storage System
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS
  • _Old GCS Categories>Sun Microsystems>Storage - Disk>Unified Storage




In this Document
Goal
Solution


Created from <SR 3-5055906001>

Applies to:

Oracle ZFS Storage Appliance Racked System ZS4-4 - Version All Versions and later
Sun Storage 7410 Unified Storage System - Version All Versions and later
Oracle ZFS Storage ZS3-2 - Version All Versions and later
Oracle ZFS Storage ZS3-4 - Version All Versions and later
Oracle ZFS Storage ZS3-BA - Version All Versions and later
7000 Appliance OS (Fishworks)

Goal

When replicating large projects, customers may want to store the data compressed on the target system, or with a different blocksize/recordsize, etc.

The problem is that until the first initial replication has completed, there is no way to change any tunables on the target system.

We want to replicate the project itself, but not the data in it, so that the tunables can be changed on the target before the data is sent.

This document provides an example of how this is done.

 

Solution

On the source system, there is a project named mydata.

This project holds 3 shares, named: hidden, corefiles and temporary.

The goal is to replicate this project to another system, but with all of the data compressed on the target system. The project does not change very much, so it would be beneficial to have the data compressed from the start of the replication.

Here is the procedure to do this:

 

1.  Log in to the source system using SSH

 

2.  Run :

appliance:> shares select mydata

 

3.  Run:

appliance:shares mydata> ls

Properties:
                   aclinherit = restricted
                        atime = true
                     checksum = fletcher4
                  compression = off
        <listing-details removed for visibility>

Shares:

Filesystems:

NAME          SIZE       MOUNTPOINT
corefiles     3.34G      /export/corefiles
hidden        364M       /export/hidden
temporary     120M       /export/temporary

Children:
                     groups => View per-group usage and manage group
                     quotas
                     replication => Manage remote replication
                     snapshots => Manage snapshots
                     users => View per-user usage and manage user quotas

 

4.  For each listed share, run: select <sharename> replication set inherited=false

appliance:shares mydata> select corefiles replication set inherited=false

Uninheriting the replication configuration from the project means that the
share will no longer be replicated with the parent project and other shares
inheriting the project's configuration, but rather it will be replicated
separately. The replication configuration of the project will no longer apply
to this share; you will need to create new replication actions in order to
replicate it to other appliances. All existing replicas of this share on all
targets will be unaffected by this operation. Such replicas will NOT be used
for subsequent incremental updates. The share will need to be replicated with a
full update for each new action that is created.


Are you sure? (Y/N)

 

5.  Answer Y to the question.

 

6.  Repeat this for all shares in the project. When this has been done for all shares, run:

appliance:shares mydata> cd /
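For a project with many shares, steps 2-6 can be scripted, since the appliance CLI accepts a batch of commands on stdin over ssh. The sketch below only generates the command batch; the project and share names are the example values from this document, the host name in the comment is an assumption, and the Y lines assume the CLI still asks its confirmation question in batch mode. (The same pattern with inherited=true can be used to re-inherit the shares later.)

```shell
#!/bin/sh
# Sketch: generate the CLI batch that uninherits replication for every
# share in a project. PROJECT and SHARES are example values; adjust them,
# then pipe the output to the appliance CLI, e.g.:
#   gen_uninherit | ssh root@appliance
PROJECT="mydata"
SHARES="corefiles hidden temporary"

gen_uninherit() {
    printf 'shares select %s\n' "$PROJECT"
    for share in $SHARES; do
        printf 'select %s replication set inherited=false\n' "$share"
        printf 'Y\n'    # answer the "Are you sure?" confirmation
    done
}

gen_uninherit
```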

 

7.  Run: 

appliance:> shares select mydata replication action
appliance:shares mydata action (uncommitted)> set target=replication-trial    (use tab after = to see available targets)
                        target = replication-trial (uncommitted)
appliance:shares mydata action (uncommitted)> set <tab>
  continuous     include_snaps  pool           use_ssl       
  enabled        max_bandwidth  target        
appliance:shares mydata action (uncommitted)> set pool=pool-0                   (use tab after = to see the available poolnames on the target system)
                          pool = pool-0 (uncommitted)
appliance:shares mydata action (uncommitted)> commit
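The action creation in step 7 can likewise be driven non-interactively by generating the command batch and piping it to the CLI. This sketch only prints the batch; the target and pool names are the example values from above, and the ssh host in the comment is an assumption:

```shell
#!/bin/sh
# Sketch: emit the CLI batch that creates the replication action from
# step 7. TARGET and POOL are example values; list the real ones with
# tab completion on the appliance. To execute: gen_action | ssh root@appliance
PROJECT="mydata"
TARGET="replication-trial"
POOL="pool-0"

gen_action() {
    printf 'shares select %s replication action\n' "$PROJECT"
    printf 'set target=%s\n' "$TARGET"
    printf 'set pool=%s\n' "$POOL"
    printf 'commit\n'
}

gen_action
```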

 

8.  Now the replication setup is saved for the project, but we do not have any scheduled replications yet.

 

9.  Run:

appliance:> shares select mydata replication ls
Actions:
            TARGET             STATUS     NEXT
action-000  replication-trial  idle       manual

 

10.  This lists all available replication actions; in my case there is only one, named action-000. So I select action-000 and do an initial send by running:

appliance:> shares select mydata replication select action-000
appliance:shares mydata action-000> sendupdate

 

11.  You can verify progress by issuing the command:

appliance:shares mydata action-000> ls
Properties:
                            id = 2e70ca43-d7d0-c6f1-d2bc-8c41ee8ce481
                        target = replication-trial
                       enabled = true
                    continuous = false
                 include_snaps = true
                 max_bandwidth = unlimited
                       use_ssl = true
                         state = sending
             state_description = Sending update
                     last_sync = <unknown>
                      last_try = <unknown>
                   next_update = manual

Once the send has completed, the output will indicate this in the state and state_description fields, and the last_sync and last_try fields will be filled in:

appliance:shares mydata action-000> ls
Properties:
                            id = 2e70ca43-d7d0-c6f1-d2bc-8c41ee8ce481
                        target = replication-trial
                       enabled = true
                    continuous = false
                 include_snaps = true
                 max_bandwidth = unlimited
                       use_ssl = true
                         state = idle
             state_description = Idle (no update pending)
                     last_sync = Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)
                      last_try = Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)
                   next_update = manual
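Rather than re-running ls by hand, the state field can be extracted from the listing, for example to build a simple polling loop. This is a sketch that assumes the listing format shown above; the host name and loop in the comment are illustrative:

```shell
#!/bin/sh
# Sketch: extract the replication state from the action's `ls` output.
# The field layout matches the listing above (indented "key = value" lines).
repl_state() {
    awk -F'= ' '/^ *state =/ { print $2 }'
}

# Illustrative poll loop (host name is an example):
#   while [ "$(echo 'shares select mydata replication select action-000 ls' \
#         | ssh root@appliance | repl_state)" = "sending" ]; do
#       sleep 60
#   done
```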

 

12.  Now log in to the CLI of the target-system:

Run:

    target-system:> shares replication sources

 

Run:

    target-system:shares replication sources> ls
    Sources:

    source-000 updown
                PROJECT                        STATE           LAST UPDATE
    package-000 mydata                         idle            Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)

 

Select the desired source system and then the desired package:

    target-system:shares replication sources> select source-000
    target-system:shares replication source-000> ls
    Properties:
                              name = appliance
                        ip_address = 192.168.56.40:216
                               asn = 83ce3207-4fbb-e4ba-f558-f7f92ff68cec

    Packages:

                PROJECT                        STATE           LAST UPDATE
    package-000 mydata                         idle            Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)

    target-system:shares replication source-000> select package-000
    target-system:shares replication source-000 package-000> ls
    Properties:
                                id = 2e70ca43-d7d0-c6f1-d2bc-8c41ee8ce481
                           enabled = true
                             state = idle
                 state_description = Idle (no update in progress)
                         last_sync = Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)
                          last_try = Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)

    Projects:
                           mydata

 

Now select the desired project:

    target-system:shares replication source-000 package-000> select mydata

 

Now we're ready to set the compression:

    target-system:shares replication source-000 package-000 mydata> set compression=<TAB>
    gzip    gzip-2  gzip-9  lzjb    off     
    target-system:shares replication source-000 package-000 mydata> set compression=lzjb
                       compression = lzjb (uncommitted)
    target-system:shares replication source-000 package-000 mydata> commit

As you can see, I chose lzjb for this replication; it is the least CPU-consuming algorithm, while gzip-9 is the most CPU-consuming one.
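The ratio-versus-CPU trade-off is easy to demonstrate with ordinary gzip on any host. lzjb itself is not available as a standalone tool, so this is only an illustration of the principle, not of the appliance's implementation:

```shell
#!/bin/sh
# Illustration: gzip -1 is the fastest, cheapest level; gzip -9 spends the
# most CPU for the smallest output. lzjb on the appliance sits at the
# cheap-and-fast end of the same spectrum.
yes "some highly compressible sample data" | head -n 100000 > /tmp/sample.txt

fast=$(gzip -1 -c /tmp/sample.txt | wc -c)
best=$(gzip -9 -c /tmp/sample.txt | wc -c)

echo "gzip -1: $fast bytes; gzip -9: $best bytes"
rm -f /tmp/sample.txt
```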

 

13.  Now it's time to go back to the source system to make sure the actual share data will be sent.

 

14.  Log in to the source system, or, if you are still logged in, run:

appliance:shares mydata action-000> cd /

 

15.  Now select the source project and set the replication inheritance flag to true for all shares:

    appliance:> shares select mydata
    appliance:shares mydata> ls
    Properties:
                        aclinherit = restricted
                             atime = true
                          checksum = fletcher4
                       compression = off
    <listing removed for better visibility>

    Shares:


    Filesystems:

    NAME             SIZE    MOUNTPOINT
    corefiles        3.34G   /export/corefiles
    hidden           364M    /export/hidden
    temporary        120M    /export/temporary

    Children:
                               groups => View per-group usage and manage group
                                         quotas
                          replication => Manage remote replication
                            snapshots => Manage snapshots
                                users => View per-user usage and manage user quotas

    appliance:shares mydata> select corefiles replication set inherited=true
    Inheriting the replication configuration from the project means that the share
    will no longer be replicated on its own, but rather it will be replicated with
    the project and all other shares inheriting this project's configuration. All
    replication configuration previously associated with this share will be
    destroyed. All existing replicas of this share on all targets will be
    unaffected by this operation. Such replicas will NOT be used for future
    replication updates. The share will be replicated with a full update for each
    of the project's replication actions.

    Are you sure? (Y/N)                                (Press Y)
                         inherited = true

    Once this command has been run for all shares in the project, all data will be transferred during the next replication update.

 

 

Now it's time to create a schedule for the mydata-project so it will be replicated regularly.

 

1.  On the source system check which replication packages you have.

    appliance:shares mydata> cd /
    appliance:> shares select mydata replication ls
     Actions:

                TARGET             STATUS     NEXT
    action-000  replication-trial  idle       manual

 

2.  Select action-000 in this case:

    appliance:> shares select mydata replication select action-000
    appliance:shares mydata action-000>

 

3.  Run the schedule command to start creating a replication schedule. Here I will create a schedule that replicates the project hourly, at 7 minutes past the hour.

    appliance:shares mydata action-000> schedule
    appliance:shares mydata action-000 schedule (uncommitted)> get
                         frequency = (unset)
                               day = (unset)
                              hour = (unset)
                            minute = (unset)
    appliance:shares mydata action-000 schedule (uncommitted)> set frequency=<tab>
    day       halfhour  hour      month     week      
    appliance:shares mydata action-000 schedule (uncommitted)> set frequency=hour
                         frequency = hour (uncommitted)
    appliance:shares mydata action-000 schedule (uncommitted)> set
    frequency  minute    
    appliance:shares mydata action-000 schedule (uncommitted)> set minute=07 
                            minute = 07 (uncommitted)
    appliance:shares mydata action-000 schedule (uncommitted)> commit

    Depending on the frequency chosen, additional variables need to be set; for instance, if you choose week or month, you must also pick a day for the replication to run before you can commit the schedule.
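The rule above can be summarized in a small validity check. This is only a sketch of the constraint as described, not an appliance API, and the field names are illustrative:

```shell
#!/bin/sh
# Sketch of the schedule rule: every schedule needs a minute, and the
# week/month frequencies additionally need a day before commit succeeds.
schedule_ok() {
    freq="$1"; day="$2"; minute="$3"
    case "$freq" in
        week|month)        [ -n "$day" ] && [ -n "$minute" ] ;;
        halfhour|hour|day) [ -n "$minute" ] ;;
        *)                 return 1 ;;
    esac
}

schedule_ok hour "" 07 && echo "hourly at :07 commits"
schedule_ok week "" 07 || echo "weekly needs a day first"
```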

 

4.  To check the newly created schedule run:

    appliance:shares mydata action-000> ls
    Properties:
                                id = 2e70ca43-d7d0-c6f1-d2bc-8c41ee8ce481
                            target = replication-trial
                           enabled = true
                        continuous = false
                     include_snaps = true
                     max_bandwidth = unlimited
                           use_ssl = true
                             state = idle
                 state_description = Idle (no update pending)
                         last_sync = Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)
                          last_try = Tue Dec 27 2011 15:13:56 GMT+0000 (UTC)
                       next_update = Tue Dec 27 2011 16:07:00 GMT+0000 (UTC)

    Schedules:


    NAME                 FREQUENCY            DAY                  HH:MM
    schedule-000         hour                 -                     -:07

 

5.  The schedule has now been created to replicate the whole project mydata at 7 minutes past each hour. If the initial data run does not complete within an hour, the subsequent hourly runs will fail; this is normal until the initial data replication has completed. Once it has completed, the hourly schedule will resume.

 

This can be a long process if the project to be replicated has many shares; it does, however, get the data onto the target system compressed, as we initially intended in this example.
Other characteristics can be set along with the compression in the same manner, for example the checksum algorithm, recordsize, and blocksize, among several other options.
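For example, from the same target-side context used in step 12, the checksum algorithm and recordsize could be changed before re-inheriting the shares. The values below are illustrative, not recommendations:

```
target-system:shares replication source-000 package-000 mydata> set checksum=sha256
                          checksum = sha256 (uncommitted)
target-system:shares replication source-000 package-000 mydata> set recordsize=32K
                        recordsize = 32K (uncommitted)
target-system:shares replication source-000 package-000 mydata> commit
```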


Note:
  • You can limit the bandwidth used for replication to minimize impact on the more important data traffic if replication and data traffic share the same wire.
  • Disabling SSL may provide a modest performance increase.
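For example, a bandwidth cap can be set on the replication action from the source CLI; the 5M value below is illustrative:

```
appliance:> shares select mydata replication select action-000
appliance:shares mydata action-000> set max_bandwidth=5M
                     max_bandwidth = 5M (uncommitted)
appliance:shares mydata action-000> commit
```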

Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.