
Asset ID: 1-71-2126047.1
Update Date: 2017-07-25
Keywords:

Solution Type: Technical Instruction

Solution 2126047.1: Post Install - Replication Network Configuration for ZDLRA


Related Items
  • Zero Data Loss Recovery Appliance X5 Hardware
Related Categories
  • PLA-Support>Sun Systems>x86>Engineered Systems HW>SN-x86: ZDLRA




In this Document
Goal
Solution
 Configuring the Replication Network for the ZDLRA
 Step 1 - Add the Network Interfaces to the compute node ifcfg files
 Step 2 - Edit the /etc/hosts file to include replication interfaces - (All Compute Nodes)
 Step 3 - Append /etc/sysctl.conf with the entries for bondeth1 interface - (All Compute Nodes)
 Step 4 - Edit /etc/modprobe.d/exadata.conf to allow bonding on bondeth1 - (All Compute Nodes)
 Step 5 - Restart the ZDLRA
 Step 6 - Verify bondeth1 is correct
 Step 7 - Confirm that the rule and route files for bondeth1 look like the cat output below (All Compute Nodes)
 Step 8 - Verify connectivity to replication interfaces (All Compute Nodes)
 Step 9 - Verify the traceroute output over bondeth1 from Node 1
 Step 10 - Verify the traceroute output over bondeth1 from Node 2 matches the output from Step 9
 Step 11 - As ORACLE user, Verify that the bondeth1 interface is listed
 Step 12 - As ROOT user, Verify that network #2 is NOT being used.
 Step 13 - Add network #2 for the replication network
 Step 14 - Verify the new network is configured correctly
 Step 15 - Add the replication VIPs and SCANs to the new CRS network
 Step 16 - Start the replication VIPs on the new CRS network
 Step 17 - Verify that the VIPs are running
 Step 18 - As ORACLE user, Add and start the database listener and SCAN Listener
 Step 19 - Verify the interfaces can be pinged
 Configuring the Replication Network for Replication
 Step 1 - Verify the hosts table in the Recovery Appliance
 Step 2 - Update the replication_ip_address
 Configuring the Replication Network as Ingest
 Step 1 - As RASYS, Update the BACKUP_IP_ADDRESS column
 Step 2 - Update the backup_ip_address value
 Step 3 - Save your changes to the table
 Step 4 - Query the values from rai_host to verify the settings were updated correctly
 Step 5 - Verify the current DISPATCHER parameter settings AS SYSDBA on each node.
 Step 6 - As SYSDBA, Update the DISPATCHER parameter
 Step 7 - Verify the DISPATCHER parameter settings have been updated correctly.
 Step 8 - Setting up backup over different Ingest Networks
 Step 9 - Test Backup of client DB


Applies to:

Zero Data Loss Recovery Appliance X5 Hardware - Version All Versions to All Versions [Release All Releases]
Linux x86-64

Goal

 Add a replication network to an existing Zero Data Loss Recovery Appliance.

Solution

Configuring the Replication Network for the ZDLRA

This document focuses on use of the supplied replication network interface.
If you are configuring a tagged VLAN interface for ingest, please refer to note 2047411.1.
  • https://mosemp.us.oracle.com/epmos/faces/DocumentDisplay?id=2047411.1

 

Step 1 - Add the Network Interfaces to the compute node ifcfg files

On each host you will edit the following files under /etc/sysconfig/network-scripts

a) ifcfg-eth2
b) ifcfg-eth3
c) ifcfg-bondeth1
d) route-bondeth1

Files (a) and (b) will exist and should look like the following - noting that the DEVICE string is different for ifcfg-eth3

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=eth2
BOOTPROTO=none
ONBOOT=no
HOTPLUG=no
IPV6INIT=no

Files (c) and (d) will NOT exist and will need to be created

First edit files (a) and (b) on each host such that they become slave interfaces to the bondeth1 interface that we will create in the next step.

The files should ultimately look like

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=eth2
USERCTL=no
ONBOOT=yes
MASTER=bondeth1
SLAVE=yes
BOOTPROTO=none
HOTPLUG=no
IPV6INIT=no

 

Note that USERCTL, MASTER, and SLAVE are new parameters, and that ONBOOT is changed from no to yes.

Repeat this for file (b) ifcfg-eth3 (see the sketch below), and repeat on all the compute nodes.
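
For reference, here is a sketch of how file (b) ifcfg-eth3 should look after the same edit - it mirrors the ifcfg-eth2 example above, with only the DEVICE string changed:

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=eth3
USERCTL=no
ONBOOT=yes
MASTER=bondeth1
SLAVE=yes
BOOTPROTO=none
HOTPLUG=no
IPV6INIT=no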

Next we need to create file (c) ifcfg-bondeth1 on the first host. The file needs to look like this:

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=bondeth1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 downdelay=200 updelay=200 num_grat_arp=100 lacp_rate=1 xmit_hash_policy=layer3+4"
IPV6INIT=no
IPADDR=192.133.241.196
NETMASK=255.255.255.192
NETWORK=192.133.241.192
BROADCAST=192.133.241.255
GATEWAY=192.133.241.193

 

Note that the BONDING_OPTS above are for LACP-configured bonding, as per the Installation Template.
If that is not correct and Active/Passive bonding should be used instead, then substitute the following line:

BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100"

Next, still on the first host, we need to create file (d) route-bondeth1. Before we do this, we need to verify that IP routing table #211 is not in use:

 

# grep table route-*

route-bondeth0: x.x.x.x/y dev bondeth0 table 210
route-bondeth0:default via a.a.a.1 dev bondeth0 table 210
route-eth0:c.c.c.0/z dev eth0 table 220
route-eth0:default via c.c.c.1 dev eth0 table 220
route-ib0:d.d.d.d/t dev ib0 table 180
route-ib1:d.d.d.d/t dev ib1 table 181

 

Note that there should be 6 lines reported for your configuration - if there is a difference, then please check back with Development.
You will also notice that tables 180, 181, 210 and 220 are currently in use, and table 211 is NOT in use

We can now create file (d) route-bondeth1 to look like

On Node1:

[root@zdlra1adm01 network-scripts]# more route-bondeth1
192.133.241.192/26 dev bondeth1 table 211
default via 192.133.241.193 dev bondeth1 table 211
[root@zdlra1adm01 network-scripts]#

We now need to repeat the creation of files (c) and (d) on the second host (see the sketch below).
Note that the ONLY difference between the two hosts' files is the IPADDR entry in file (c) ifcfg-bondeth1.
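
For reference, a sketch of file (c) ifcfg-bondeth1 on the second host, assuming the second node uses 192.133.241.197 (the replication address added to /etc/hosts in Step 2); file (d) route-bondeth1 is identical on both hosts:

On Node2:

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=bondeth1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 downdelay=200 updelay=200 num_grat_arp=100 lacp_rate=1 xmit_hash_policy=layer3+4"
IPV6INIT=no
IPADDR=192.133.241.197
NETMASK=255.255.255.192
NETWORK=192.133.241.192
BROADCAST=192.133.241.255
GATEWAY=192.133.241.193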

Step 2 - Edit the /etc/hosts file to include replication interfaces - (All Compute Nodes)

- Use compute node names

- Add the replication interface to /etc/hosts on first node
192.133.241.196 zdlra1repl01.domain.com zdlra1repl01

- Add the replication interface to /etc/hosts on second node
192.133.241.197 zdlra1repl02.domain.com zdlra1repl02

Step 3 - Append /etc/sysctl.conf with the entries for bondeth1 interface - (All Compute Nodes)

net.ipv6.conf.bondeth1.accept_ra = 0
net.ipv4.conf.bondeth1.rp_filter = 1
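
One way to append these entries as root on each compute node; a minimal sketch:

# cat >> /etc/sysctl.conf <<'EOF'
net.ipv6.conf.bondeth1.accept_ra = 0
net.ipv4.conf.bondeth1.rp_filter = 1
EOF

The new entries take effect with the restart performed in Step 5.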

Step 4 - Edit /etc/modprobe.d/exadata.conf to allow bonding on bondeth1 - (All Compute Nodes)

alias bondeth1 bonding

Step 5 - Restart the ZDLRA

Stop the Recovery Appliance

  • Refer to the Recovery Appliance Owners Guide, Chapter 12, for the procedures for shutting down and starting up the Recovery Appliance.

Reboot the two Compute Servers
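
A minimal sketch of the reboot itself, assuming the Recovery Appliance has already been stopped per the Owners Guide; run as root on each compute server in turn:

# shutdown -r now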

Restart the Recovery Appliance

  • Refer to the Recovery Appliance Owners Guide, Chapter 12, for the procedures for shutting down and starting up the Recovery Appliance.

Step 6 - Verify bondeth1 is correct

Run these tests:

# netstat -rna
# ifconfig bondeth1
# cat /proc/net/bonding/bondeth1

 

As ORACLE user, Run a network_throughput_test between the two Compute Nodes (Replace sample names with actual names)


On node #1, create a file called /home/oracle/sending listing the admin host name for compute node #1, and verify trusted SSH is set up:

$ echo zdlra1adm01 > /home/oracle/sending
$ dcli -g /home/oracle/sending -l oracle date

On node #1, create a file called /home/oracle/receiving listing the admin host name for compute node #2, and verify trusted SSH is set up:

$ echo zdlra1adm02 > /home/oracle/receiving
$ dcli -g /home/oracle/receiving -l oracle date

Run the network throughput test for bondeth1

$ /opt/oracle.RecoveryAppliance/client/network_throughput_test.sh -s /home/oracle/sending -r /home/oracle/receiving -i bondeth1

Expected output:
Using 'zdlra1adm01' hosts for sending nodes
Using 'zdlra1adm02' hosts for receiving nodes
Validating Trusted SSH to sending nodes zdlra1adm01 ... OK
Validating Trusted SSH to receiving nodes zdlra1adm02 ... OK
Total Network Bandwidth 1,782,321,425 bytes/sec

The network bandwidth will probably not reach 2 GBytes/sec, but for a correctly configured active/active interface it should be more than 1 GByte/sec.

Step 7 - Confirm that the rule and route files for bondeth1 look like the cat output below (All Compute Nodes)

# cd /etc/sysconfig/network-scripts

# cat rule-bondeth1
from 192.133.241.196 table 211
to 192.133.241.196 table 211


# cat route-bondeth1
192.133.241.192/26 dev bondeth1 table 211
default via 192.133.241.193 dev bondeth1 table 211

Step 8 - Verify connectivity to replication interfaces (All Compute Nodes)

# ping -c 3 zdlra1repl01.domain.com
# ping -c 3 zdlra1repl02.domain.com

Step 9 - Verify the traceroute output over bondeth1 from Node 1

From Node 1:

# traceroute -i bondeth1 zdlra1repl02.domain.com

Step 10 - Verify the traceroute output over bondeth1 from Node 2 matches the output from Step 9

From Node 2:

# traceroute -i bondeth1 zdlra1repl01.domain.com

 

Note:
At this point we have verified the configuration of the O/S to use the replication interface.
We can move on to configuring the VIPs and SCANs that will be used by the Recovery Appliance software.

Step 11 - As ORACLE user, Verify that the bondeth1 interface is listed

Ensure the interfaces are listed with the correct IP addresses and subnet masks.

$ /u01/app/12.1.0.2/grid/bin/oifcfg iflist -p -n

Expected output:
eth0 192.133.40.0 PRIVATE 255.255.248.0
ib0 192.168.40.0 PRIVATE 255.255.248.0
ib0 169.254.0.0 UNKNOWN 255.255.128.0
ib1 192.168.40.0 PRIVATE 255.255.248.0
ib1 169.254.128.0 UNKNOWN 255.255.128.0
bondeth0 192.133.62.0 PRIVATE 255.255.255.0
bondeth1 192.133.241.192 PRIVATE 255.255.255.192

Step 12 - As ROOT user, Verify that network #2 is NOT being used.

Note: (The command should fail with the error below)

# /u01/app/12.1.0.2/grid/bin/srvctl config network -k 2

Expected output:
PRCR-1001 : Resource ora.net2.network does not exist

Step 13 - Add network #2 for the replication network

# /u01/app/12.1.0.2/grid/bin/srvctl add network -k 2 -S 192.133.241.192/255.255.255.192/bondeth1

Step 14 - Verify the new network is configured correctly

# /u01/app/12.1.0.2/grid/bin/srvctl config network -k 2

Expected output:
Network 2 exists
Subnet IPv4: 192.133.241.192/255.255.255.192/bondeth1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

Step 15 - Add the replication VIPs and SCANs to the new CRS network

# /u01/app/12.1.0.2/grid/bin/srvctl add vip -n zdlra1adm01 -A zdlra1repl01-vip.domain.com/255.255.255.192/bondeth1 -k 2

# /u01/app/12.1.0.2/grid/bin/srvctl add vip -n zdlra1adm02 -A zdlra1repl02-vip.domain.com/255.255.255.192/bondeth1 -k 2

# /u01/app/12.1.0.2/grid/bin/srvctl add scan -netnum 2 -scanname zdlra1repl-scan.domain.com
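
Optionally, the new SCAN definition can be checked before starting the VIPs; a hedged example using the same -netnum convention as the commands above:

# /u01/app/12.1.0.2/grid/bin/srvctl config scan -netnum 2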

Step 16 - Start the replication VIPs on the new CRS network

# /u01/app/12.1.0.2/grid/bin/srvctl start vip -i zdlra1repl01-vip.domain.com

# /u01/app/12.1.0.2/grid/bin/srvctl start vip -i zdlra1repl02-vip.domain.com

Step 17 - Verify that the VIPs are running

$ srvctl status vip -vip zdlra1repl01-vip.domain.com
VIP zdlra1repl01-vip.domain.com is enabled
VIP zdlra1repl01-vip.domain.com is running on node: zdlra1adm01


$ srvctl status vip -vip zdlra1repl02-vip.domain.com
VIP zdlra1repl02-vip.domain.com is enabled
VIP zdlra1repl02-vip.domain.com is running on node: zdlra1adm02

 

Step 18 - As ORACLE user, Add and start the database listener and SCAN Listener

$ /u01/app/12.1.0.2/grid/bin/srvctl add listener -l LISTENER_REPL -p 1522 -k 2

$ /u01/app/12.1.0.2/grid/bin/srvctl start listener -l LISTENER_REPL

$ /u01/app/12.1.0.2/grid/bin/srvctl status listener -l LISTENER_REPL

Expected output:
Listener LISTENER_REPL is enabled
Listener LISTENER_REPL is running on node(s): zdlra1adm01,zdlra1adm02

$ srvctl add scan_listener -netnum 2 -listener LISTENER_REPL -endpoints TCP:1522

$ srvctl start scan_listener -netnum 2

$ srvctl status scan_listener -netnum 2

Expected output:
SCAN Listener LISTENER_REPL_SCAN1_NET2 is enabled
SCAN listener LISTENER_REPL_SCAN1_NET2 is running on node zdlra1adm02
SCAN Listener LISTENER_REPL_SCAN2_NET2 is enabled
SCAN listener LISTENER_REPL_SCAN2_NET2 is running on node zdlra1adm01
SCAN Listener LISTENER_REPL_SCAN3_NET2 is enabled
SCAN listener LISTENER_REPL_SCAN3_NET2 is running on node zdlra1adm02

As SYSDBA, alter listener_networks to include the new zdlra1repl-scan1 for network 2 on each Recovery Appliance node.

On Node1:

ALTER SYSTEM SET listener_networks='((NAME=net1) (LOCAL_LISTENER= (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.62.69)(PORT=1521)))))','((NAME=net1)(REMOTE_LISTENER=zdlra1ingest-scan1.us.oracle.com:1521))', '((NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.241.202)(PORT=1522)))))','((NAME=net2)(REMOTE_LISTENER=zdlra1repl-scan1.us.oracle.com:1522))' SCOPE=SPFILE SID='zdlra1';

 

On Node2:

ALTER SYSTEM SET listener_networks= '((NAME=net1)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.62.70)(PORT=1521)))))','((NAME=net1)(REMOTE_LISTENER=zdlra1ingest-scan1.us.oracle.com:1521))','((NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.241.203)(PORT=1522)))))','((NAME=net2)(REMOTE_LISTENER=zdlra1repl-scan1.us.oracle.com:1522))' SCOPE=SPFILE SID='zdlra2';

 

Step 19 - Verify the interfaces can be pinged

$ ping -c 3 zdlra1repl01-vip
$ ping -c 3 zdlra1repl02-vip
$ ping -c 3 zdlra1repl-scan1.domain.com

Configuring the Replication Network for Replication

Step 1 - Verify the hosts table in the Recovery Appliance

Login to SQL*Plus as the rasys schema owner

$ sqlplus rasys/ra

SQL> select node_name,replication_ip_address from host;

NODE_NAME
--------------------------------------------------------------------------------
REPLICATION_IP_ADDRESS
--------------------------------------------------------------------------------
zdlra1adm02.domain.com
zdlra1adm01.domain.com
SQL>

 

Step 2 - Update the replication_ip_address

SQL> update HOST set REPLICATION_IP_ADDRESS='192.133.241.196' where NODE_NAME = 'zdlra1adm01.domain.com';

1 row updated.

SQL> update HOST set REPLICATION_IP_ADDRESS='192.133.241.197' where NODE_NAME = 'zdlra1adm02.domain.com';

1 row updated.

SQL> commit;
Commit complete.
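
To confirm the updates, re-run the query from Step 1; the two rows should now show 192.133.241.196 and 192.133.241.197 in REPLICATION_IP_ADDRESS:

SQL> select node_name,replication_ip_address from host;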

 

Configuring the Replication Network as Ingest

Step 1 - As RASYS, Update the BACKUP_IP_ADDRESS column

Query current settings from rai_host:
SQL> SELECT * FROM rai_host;
NODE_NAME
----------------------------------
zdlra1adm01.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.121
BACKUP_IP_ADDRESS
----------------------------------
10.133.62.69
REPLICATION_IP_ADDRESS
----------------------------------
NODE_NAME
----------------------------------
zdlra1adm02.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.122
BACKUP_IP_ADDRESS
----------------------------------
10.133.62.70
REPLICATION_IP_ADDRESS
----------------------------------

 

Step 2 - Update the backup_ip_address value

SQL> UPDATE rai_host SET backup_ip_address='10.133.241.202,'|| backup_ip_address WHERE node_name = 'zdlra1adm01.us.oracle.com';
SQL> UPDATE rai_host SET backup_ip_address='10.133.241.203,'|| backup_ip_address WHERE node_name = 'zdlra1adm02.us.oracle.com';

Step 3 - Save your changes to the table

SQL> commit;

Step 4 - Query the values from rai_host to verify the settings were updated correctly

SQL> SELECT * FROM rai_host;
NODE_NAME
----------------------------------
zdlra1adm01.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.121
BACKUP_IP_ADDRESS
----------------------------------
10.133.241.202, 10.133.62.69
REPLICATION_IP_ADDRESS
----------------------------------
NODE_NAME
----------------------------------
zdlra1adm02.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.122
BACKUP_IP_ADDRESS
----------------------------------
10.133.241.203, 10.133.62.70
REPLICATION_IP_ADDRESS
----------------------------------

Step 5 - Verify the current DISPATCHER parameter settings AS SYSDBA on each node.

SQL> show parameter dispatcher;
NAME TYPE VALUE
------------------------- ----------- ------------------------------
dispatchers string (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=zdlra1ingest01-vip.us.oracle.com))(SDU=65536))(SERVICE=ZDLRAF1XDB)(DISPATCHERS=4)
max_dispatchers integer

Step 6 - As SYSDBA, Update the DISPATCHER parameter

On Node1:

SQL> ALTER SYSTEM SET dispatchers= '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=zdlra1repl01-vip.us.oracle.com))(SDU=65536)) (SERVICE=ZDLRAXDB)(DISPATCHERS=4)' SCOPE=BOTH SID='zdlra1';

 

On Node2:

SQL> ALTER SYSTEM SET dispatchers= '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=zdlra1repl02-vip.us.oracle.com))(SDU=65536)) (SERVICE=ZDLRAXDB)(DISPATCHERS=4)' SCOPE=BOTH SID='zdlra2';

 

Step 7 - Verify the DISPATCHER parameter settings have been updated correctly.

SQL> show parameter dispatcher;
NAME TYPE VALUE
------------------------- ----------- ------------------------------
dispatchers string (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=zdlra1ingest01-vip.us.oracle.com))(SDU=65536))(SERVICE=ZDLRAF1XDB)(DISPATCHERS=4),
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=zdlra1repl01-vip.us.oracle.com))(SDU=65536)) (SERVICE=ZDLRAXDB)(DISPATCHERS=4)

max_dispatchers integer

Step 8 - Setting up backup over different Ingest Networks

The following is an overview of a Recovery Appliance configured with 3 different ingest networks.
Network 1 is the primary ingest configured at time of installation.
Primary Ingest (zdlra1ingest01-vip 10.133.62.69 / zdlra1ingest02-vip 10.133.62.70)


Network 2 is an additional ingest configured on the replication network.
Replication Ingest (zdlra1repl01-vip 10.133.241.202 / zdlra1repl02-vip 10.133.241.203)

Network 3 is an additional ingest configured on the InfiniBand network.
InfiniBand Ingest (zdlra1adm01-priv3 192.168.40.165 / zdlra1adm02-priv3 192.168.40.167)

Verify the current order of the IP addresses in the rai_host table as RASYS on one of the Recovery Appliance compute nodes.

 

In the event that a client host is capable of routing to multiple networks in the Recovery Appliance's backup_ip_address list, client backups will run over the first routable network in the backup_ip_address list. Consider this when selecting the ordering of the IP addresses contained in the backup_ip_address list.

SQL> select backup_ip_address from rai_host;

If required, update backup_ip_address as an ordered list, prioritizing the interfaces appropriate to your network configuration.

SQL> UPDATE rai_host SET backup_ip_address='10.133.241.202, 10.133.62.69, 192.168.40.165' WHERE node_name = 'zdlra1adm01.us.oracle.com';
SQL> UPDATE rai_host SET backup_ip_address='10.133.241.203, 10.133.62.70, 192.168.40.167' WHERE node_name = 'zdlra1adm02.us.oracle.com';
SQL> commit;
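
Re-running the query confirms the new ordering, with the replication address listed first for each node:

SQL> select node_name, backup_ip_address from rai_host;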

 

Step 9 - Test Backup of client DB

rman target / catalog vpc-user/password@zdlra1ingest-scan1:1521/zdlra1:dedicated
RMAN> BACKUP DEVICE TYPE SBT_TAPE INCREMENTAL LEVEL 0 DATABASE INCLUDE CURRENT CONTROLFILE PLUS ARCHIVELOG DELETE INPUT;
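
As a simple check that the backup completed and was cataloged, a hedged follow-up from the same RMAN session:

RMAN> LIST BACKUP SUMMARY;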

 


Attachments
This solution has no attachment