Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-2071496.1
Update Date: 2017-06-12
Keywords:

Solution Type: Technical Instruction

Solution 2071496.1: Oracle ZFS Storage Appliance: How to Replicate Pools Between Two Nodes of a Cluster via a Direct Connection of Network Interfaces


Related Items
  • Sun ZFS Storage 7420
  • Oracle ZFS Storage ZS5-2
  • Oracle ZFS Storage ZS3-2
  • Sun Storage 7110 Unified Storage System
  • Sun Storage 7210 Unified Storage System
  • Oracle ZFS Storage ZS4-4
  • Oracle ZFS Storage ZS5-4
  • Sun Storage 7410 Unified Storage System
  • Sun ZFS Storage 7120
  • Sun Storage 7310 Unified Storage System
  • Oracle ZFS Storage ZS3-4
  • Oracle ZFS Storage Appliance Racked System ZS4-4
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage ZS3-BA
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS




In this Document
Goal
Solution


Created from <SR 3-11424870811>

Applies to:

Sun Storage 7110 Unified Storage System - Version All Versions and later
Sun Storage 7210 Unified Storage System - Version All Versions and later
Sun ZFS Storage 7120 - Version All Versions and later
Sun Storage 7310 Unified Storage System - Version All Versions and later
Oracle ZFS Storage ZS3-4 - Version All Versions and later
7000 Appliance OS (Fishworks)

Goal

This document provides a working example of replication between two ZFS Storage Appliance nodes. Two factors make this configuration unusual:

  • The data is being replicated between the two nodes of a single ZFS appliance. (Source and Target are on one clustered system) 
  • There is no network involved. The data is replicated through a direct connection between an interface on each node.

This type of replication removes the additional infrastructure that traditional replication requires: you do not need a second ZFS appliance or a network connection between the two appliances.

Another useful application of this feature is data migration. For example, after you add more capacity (disk trays) to an appliance, you can minimize downtime by migrating the data via replication to the new disk trays. 

 

Solution

We start by creating a virtual NIC (VNIC) on the ixgbe1 port of each node.

Virtual NICs (2013 code only) bypass the cluster locking mechanism on interfaces in a cluster, which allows the same physical interface to be used on each node.

In this example, ixgbe1 is used on both ZFS-node-1 and ZFS-node-2:

ZFS-node-1:> configuration net devices show
  DEVICE   UP     SPEED          MAC
  ixgbe1   true   10000 Mbit/s   0:1b:21:7e:c3:29

ZFS-node-2:> configuration net devices show
  DEVICE   UP     SPEED          MAC
  ixgbe1   true   10000 Mbit/s   0:1b:21:7e:c2:91

ZFS-node-1:> configuration net datalinks device
ZFS-node-1:configuration net datalinks device (uncommitted)>
ZFS-node-1:configuration net datalinks device (uncommitted)> set label=ixgbe1-cluster-dl
ZFS-node-1:configuration net datalinks device (uncommitted)> set links=ixgbe1
ZFS-node-1:configuration net datalinks device (uncommitted)> commit

ZFS-node-1:> configuration net datalinks vnic
ZFS-node-1:configuration net datalinks vnic (uncommitted)> set label=ixgbe1-node1-vnic
ZFS-node-1:configuration net datalinks vnic (uncommitted)> set links=ixgbe1
ZFS-node-1:configuration net datalinks vnic (uncommitted)> commit 

ZFS-node-2:> configuration net datalinks vnic
ZFS-node-2:configuration net datalinks vnic (uncommitted)> set label=ixgbe1-node2-vnic
ZFS-node-2:configuration net datalinks vnic (uncommitted)> set links=ixgbe1
ZFS-node-2:configuration net datalinks vnic (uncommitted)> commit
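
Optionally, confirm that the new datalinks exist on each node before building interfaces on them. The listing below is only a sketch: the columns printed by 'show' and the device name assigned to each VNIC (vnic1/vnic2 here, matching the interface steps that follow) can vary by software release, so verify the actual names on your system.

ZFS-node-1:> configuration net datalinks show
     DATALINK   CLASS    LINKS    LABEL
     ixgbe1     device   ixgbe1   ixgbe1-cluster-dl
     vnic1      vnic     ixgbe1   ixgbe1-node1-vnic      (illustrative output only)

ZFS-node-2:> configuration net datalinks show
     DATALINK   CLASS    LINKS    LABEL
     vnic2      vnic     ixgbe1   ixgbe1-node2-vnic      (illustrative output only)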

 

An IP interface is created on each VNIC, and connectivity is verified with ping and traceroute.

Finally, both interfaces are made private (locked to the node they reside on):

ZFS-node-1:> configuration net interfaces ip
ZFS-node-1:configuration net interfaces ip (uncommitted)> set label=Repl-nic-node1
ZFS-node-1:configuration net interfaces ip (uncommitted)> set links=vnic1
ZFS-node-1:configuration net interfaces ip (uncommitted)> set v4addrs=192.168.10.10/24
ZFS-node-1:configuration net interfaces ip (uncommitted)> commit

ZFS-node-2:> configuration net interfaces ip
ZFS-node-2:configuration net interfaces ip (uncommitted)> set label=Repl-nic-node2
ZFS-node-2:configuration net interfaces ip (uncommitted)> set links=vnic2
ZFS-node-2:configuration net interfaces ip (uncommitted)> set v4addrs=192.168.10.11/24
ZFS-node-2:configuration net interfaces ip (uncommitted)> commit 

ZFS-node-1:> ping 192.168.10.11
     192.168.10.11 is alive
ZFS-node-1:> traceroute 192.168.10.11
     traceroute: Warning: Multiple interfaces found; using 192.168.10.10 @ vnic1
     traceroute to 192.168.10.11 (192.168.10.11), 30 hops max, 40 byte packets
     1 192.168.10.11 (192.168.10.11) 0.142 ms 0.058 ms 0.060 ms

ZFS-node-2:> ping 192.168.10.10
     192.168.10.10 is alive
ZFS-node-2:> traceroute 192.168.10.10
     traceroute: Warning: Multiple interfaces found; using 192.168.10.11 @ vnic2
     traceroute to 192.168.10.10 (192.168.10.10), 30 hops max, 40 byte packets
     1 192.168.10.10 (192.168.10.10) 0.136 ms 0.057 ms 0.059 ms

ZFS-node-1:> configuration cluster resources select net/vnic1 set type=private
ZFS-node-2:> configuration cluster resources select net/vnic2 set type=private
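
To confirm the change, each resource can be inspected on its node. This is a hedged check (the exact property layout varies by release); the type property should now read private:

ZFS-node-1:> configuration cluster resources select net/vnic1 show
     (verify that type = private; repeat on ZFS-node-2 for net/vnic2)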

 

An initial replication is set up from ZFS-node-1 (pool-a) to ZFS-node-2 (pool-b):

ZFS-node-1:> configuration services replication targets target
ZFS-node-1:configuration services replication target (uncommitted)> set hostname=192.168.10.11
ZFS-node-1:configuration services replication target (uncommitted)> set label=Repl-nic-node2
ZFS-node-1:configuration services replication target (uncommitted)> set root_password=*********
ZFS-node-1:configuration services replication target (uncommitted)> commit

ZFS-node-1:> shares select Dbase replication action
ZFS-node-1:shares Dbase action (uncommitted)> set target=Repl-nic-node2
ZFS-node-1:shares Dbase action (uncommitted)> set continuous=true
ZFS-node-1:shares Dbase action (uncommitted)> set pool=pool-b
ZFS-node-1:shares Dbase action (uncommitted)> set use_ssl=false
ZFS-node-1:shares Dbase action (uncommitted)> commit
ZFS-node-1:> shares select Dbase replication select action-000 sendupdate
ZFS-node-1:shares Dbase action-000> show     (run 'show' repeatedly to monitor the update until it completes)
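
While the update is running, repeat 'show' to poll the action. The sample output below is illustrative only (property names and values differ between software releases); the key indicators are the action's state and, once it finishes, the last result:

ZFS-node-1:shares Dbase action-000> show
     Properties:
                 target = Repl-nic-node2
                enabled = true
             continuous = true
                  state = sending          (illustrative output only)
            last_result = <unset>

When the initial update completes, state returns to idle and last_result reports success.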

 

Once the initial replication completes, the shares and their replicated data can be managed with all the utilities and features of a traditional replication.

Typical uses include disaster recovery, backups, and data migration.

The remainder of this document continues with a working example of data migration: the newly migrated data is brought online for all client systems. The shares in this replication are mounted over NFS.

Document the client mount points, then start a maintenance window and unmount the filesystem:

x4170-client# df -k /export/database
Filesystem 1024-blocks Used Available Capacity Mounted on
10.152.224.70:/export/database
12452363236 5264131 12447099105 1% /export/database

x4170-client# ls -lrth
total 10528202
     -rw-r--r-- 1 oracle oracle 10M Nov 26 2015 redo.log
     -rw-r--r-- 1 oracle oracle 10M Nov 26 2015 redo.log.1
     -rw-r--r-- 1 oracle oracle 4.0G Nov 26 2015 db.2
     -rw-r--r-- 1 oracle oracle 1.0G Nov 26 2015 db.1

x4170-client# umount /export/database
x4170-client#

 

Disable replication and change the mount point of the existing active share so that the old mount point name can be reused:

ZFS-node-1:shares Dbase replication> select action-XXX set enabled=false
enabled = false

ZFS-node-1:shares Dbase> set mountpoint=/export/database-old
ZFS-node-1:shares Dbase> commit

 

Sever the replication package. This promotes the replicated package to a local, directly accessible project on ZFS-node-2:

ZFS-node-2:> shares replication sources select source-000 select package-000
ZFS-node-2:shares replication source-000 package-000> sever
     This action will permanently sever this package and its replicated shares from
     the source system, making them local projects on this system. Subsequent
     replication updates in either direction will require defining new actions and
     sending a full update.

     Are you sure? (Y/N)
ZFS-node-2:shares replication sources>

 

Verify that the new project and its shares are present on ZFS-node-2:

ZFS-node-2:> shares select Dbase show
     Properties:
     .......................
     Filesystems:
     NAME       SIZE    ENCRYPTED   MOUNTPOINT
     database   5.02G   off         /export/database

 

Rename the old project, then reboot ZFS-node-2 so that its pool (containing the newly severed project) moves to the same side of the cluster as the interface the clients use to mount the share.

ZFS-node-1:shares> rename Dbase Dbase-old
ZFS-node-2:> maintenance system reboot

 

On the client, remount the share with the same mount command used before. 

x4170-client# mount 10.152.224.70:/export/database /export/database
x4170-client# ls -lrth /export/database
total 10529112
     -rw-r--r-- 1 oracle oracle 10M Nov 26 2015 redo.log
     -rw-r--r-- 1 oracle oracle 10M Nov 26 2015 redo.log.1
     -rw-r--r-- 1 oracle oracle 4.0G Nov 26 2015 db.2
     -rw-r--r-- 1 oracle oracle 1.0G Nov 26 2015 db.1

 

 

Once data and client access are verified, the old projects may be destroyed and the pools/interfaces moved back to their preferred cluster nodes.
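
A possible cleanup sequence is sketched below. Treat it as a hedged example: confirm the project name before destroying anything (destroy permanently removes the old project and its data after a confirmation prompt), and note that 'configuration cluster failback' returns resources to the node configured as their owner.

ZFS-node-1:> shares destroy Dbase-old
     (answer the confirmation prompt to permanently remove the renamed project)
ZFS-node-1:> configuration cluster failback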

 


Attachments
This solution has no attachment