Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Solution Type: Technical Instruction
Solution 2126047.1: Post Install - Replication Network Configuration for ZDLRA
In this Document
Applies to:
Zero Data Loss Recovery Appliance X5 Hardware - Version All Versions to All Versions [Release All Releases]
Linux x86-64

Goal
Add a replication network to an existing Zero Data Loss Recovery Appliance.

Solution

Configuring the Replication Network for the ZDLRA

This document focuses on use of the supplied replication network interface.
If you are configuring a tagged VLAN interface as ingest, please refer to note 2047411.1.
Step 1 - Add the Network Interfaces to the compute node ifcfg files

On each compute node you will edit or create the following files under /etc/sysconfig/network-scripts:

(a) ifcfg-eth2
(b) ifcfg-eth3
(c) ifcfg-bondeth1
(d) route-bondeth1

Files (a) and (b) will already exist and should look like the following - noting that the DEVICE string is different for ifcfg-eth3:

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=eth2
BOOTPROTO=none
ONBOOT=no
HOTPLUG=no
IPV6INIT=no

Files (c) and (d) will NOT exist and will need to be created.

First edit files (a) and (b) on each host so that they become slave interfaces of the bondeth1 interface that we will create in the next step.
Note that USERCTL, MASTER and SLAVE are new parameters in the slave files, and that ONBOOT changes from no to yes.
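The edits to the slave files can be staged with a short script. This is a minimal sketch based only on the parameters called out above (USERCTL, MASTER and SLAVE added, ONBOOT flipped to yes); the exact ordering of keys in the real generated files may differ, and OUTDIR is an assumed staging directory so nothing under /etc/sysconfig/network-scripts is overwritten until you have reviewed the result.

```shell
#!/bin/sh
# Sketch: stage edited slave ifcfg files for eth2/eth3.
# OUTDIR is an assumption -- review the files, then copy them into
# /etc/sysconfig/network-scripts yourself.
OUTDIR=${OUTDIR:-/tmp/netcfg-staging}
mkdir -p "$OUTDIR"
for DEV in eth2 eth3; do
  cat > "$OUTDIR/ifcfg-$DEV" <<EOF
#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=$DEV
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=no
IPV6INIT=no
MASTER=bondeth1
SLAVE=yes
EOF
done
ls "$OUTDIR"
```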
Repeat this for file (b) ifcfg-eth3, and repeat on all the compute nodes.

Next we need to create file (c) ifcfg-bondeth1 on the first host. The file needs to look like:

#### DO NOT REMOVE THESE LINES ####
#### %GENERATED BY CELL% ####
DEVICE=bondeth1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 downdelay=200 updelay=200 num_grat_arp=100 lacp_rate=1 xmit_hash_policy=layer3+4"
IPV6INIT=no
IPADDR=192.133.241.196
NETMASK=255.255.255.192
NETWORK=192.133.241.192
BROADCAST=192.133.241.255
GATEWAY=192.133.241.193
Note the use of BONDING_OPTS above, which are compatible with LACP-configured (mode=4) bonding, as per the Installation Template. If your environment uses active/backup rather than LACP bonding, the options would instead be:

BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100"

Next, still on the first host, we need to create file (d) route-bondeth1. Before we do this, we need to verify that ip route table 211 is not in use:
# grep table route-*
route-bondeth0: x.x.x.x/y dev bondeth0 table 210
Note that there should be 6 lines reported for your configuration - if there is a difference, then please check back with Development.
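The availability check above can be scripted. A minimal sketch: it scans the route-* files for any reference to table 211 and fails if one is found. DIR defaults to a demo directory seeded with a sample route-bondeth0 so the logic can be exercised off-box; on the node, point DIR at /etc/sysconfig/network-scripts.

```shell
#!/bin/sh
# Sketch: confirm ip route table 211 is free before creating
# route-bondeth1. The demo directory and sample file are assumptions
# for illustration only.
DIR=${DIR:-/tmp/route-demo}
if [ "$DIR" = /tmp/route-demo ]; then
  mkdir -p "$DIR"
  printf '10.0.0.0/24 dev bondeth0 table 210\n' > "$DIR/route-bondeth0"
fi
cd "$DIR" || exit 1
# grep -l prints any file that already claims table 211
if grep -l 'table 211' route-* 2>/dev/null; then
  echo "table 211 already in use - pick another table" >&2
  exit 1
fi
echo "table 211 is free"
```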
You will also notice that tables 180, 181, 210 and 220 are currently in use, and table 211 is NOT in use.

We can now create file (d) route-bondeth1:

On Node1:
[root@zdlra1adm01 network-scripts]# more route-bondeth1
192.133.241.192/26 dev bondeth1 table 211
default via 192.133.241.193 dev bondeth1 table 211
[root@zdlra1adm01 network-scripts]#

We now need to repeat the creation of files (c) and (d) on the second host, using that node's replication IP address.

Step 2 - Edit the /etc/hosts file to include replication interfaces - (All Compute Nodes)
- Use compute node names
- Add the replication interface to /etc/hosts on the first node
- Add the replication interface to /etc/hosts on the second node

Step 3 - Append /etc/sysctl.conf with the entries for the bondeth1 interface - (All Compute Nodes)
net.ipv6.conf.bondeth1.accept_ra = 0
net.ipv4.conf.bondeth1.rp_filter = 1

Step 4 - Edit /etc/modprobe.d/exadata.conf to allow bonding on bondeth1 - (All Compute Nodes)
alias bondeth1 bonding
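Steps 3 and 4 can be applied idempotently so that re-running the procedure never duplicates entries. A minimal sketch: SYSCTL and MODPROBE default to scratch copies as an assumption; on a real node you would point them at /etc/sysctl.conf and /etc/modprobe.d/exadata.conf.

```shell
#!/bin/sh
# Sketch: append the bondeth1 sysctl entries and bonding alias only if
# they are not already present. Target paths default to scratch files.
SYSCTL=${SYSCTL:-/tmp/sysctl.conf}
MODPROBE=${MODPROBE:-/tmp/exadata.conf}
touch "$SYSCTL" "$MODPROBE"
add_line() {  # append line $2 to file $1 unless it is already there
  grep -qxF "$2" "$1" || printf '%s\n' "$2" >> "$1"
}
add_line "$SYSCTL" 'net.ipv6.conf.bondeth1.accept_ra = 0'
add_line "$SYSCTL" 'net.ipv4.conf.bondeth1.rp_filter = 1'
add_line "$MODPROBE" 'alias bondeth1 bonding'
cat "$SYSCTL" "$MODPROBE"
```

Because add_line checks before appending, running the script twice leaves each entry present exactly once.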
Step 5 - Restart the ZDLRA
- Stop the Recovery Appliance
- Reboot the two Compute Servers
- Restart the Recovery Appliance
Step 6 - Verify bondeth1 is correct
Run these tests:
# netstat -rna
# ifconfig bondeth1
# cat /proc/net/bonding/bondeth1
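The /proc/net/bonding/bondeth1 check can be automated. A minimal sketch: it counts the "MII Status: up" lines in the bonding status file and expects at least two (one per slave). BOND defaults to a small sample file mimicking the kernel's format, an assumption so the logic can be exercised off-box; on the node use the real /proc path.

```shell
#!/bin/sh
# Sketch: confirm both bondeth1 slaves report MII Status up.
# The sample file below is illustrative, not real node output.
BOND=${BOND:-/tmp/bondeth1.sample}
if [ ! -e "$BOND" ]; then
  cat > "$BOND" <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Slave Interface: eth2
MII Status: up
Slave Interface: eth3
MII Status: up
EOF
fi
UP=$(grep -c '^MII Status: up' "$BOND")
echo "interfaces up: $UP"
[ "$UP" -ge 2 ] && echo "bondeth1 OK"
```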
As the ORACLE user, run a network_throughput_test between the two Compute Nodes (replace the sample names with actual names).

Create a file called /home/oracle/sending listing the admin host for compute node #1, and verify trusted ssh is set up:
$ echo zdlra1adm01 > /home/oracle/sending
$ dcli -g /home/oracle/sending -l oracle date

Create a file called /home/oracle/receiving listing the admin host for compute node #2, and verify trusted ssh is set up:
$ echo zdlra1adm02 > /home/oracle/receiving
$ dcli -g /home/oracle/receiving -l oracle date

Run the network throughput test for bondeth1:
$ /opt/oracle.RecoveryAppliance/client/network_throughput_test.sh -s /home/oracle/sending -r /home/oracle/receiving -i bondeth1

Expected output: the network bandwidth will probably not reach 2 GBytes/sec, but for a correctly configured active/active interface it should exceed 1 GByte/sec.

Step 7 - Confirm that the rule and route files for bondeth1 look like the cat output below (All Compute Nodes)
# cd /etc/sysconfig/network-scripts
# cat rule-bondeth1
Step 8 - Verify connectivity to replication interfaces (All Compute Nodes)
# ping -c 3 zdlra1repl01.domain.com
# ping -c 3 zdlra1repl02.domain.com

Step 9 - Verify that the traceroute output for the following two lines is the same
From Node 1:
# traceroute -i bondeth1 zdlra1repl02.domain.com

Step 10 - Verify that the traceroute output for the following two lines is the same
From Node 2:
# traceroute -i bondeth1 zdlra1repl01.domain.com
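Comparing the two traceroute runs by eye is error-prone; the intermediate hops can be diffed instead. A minimal sketch: the capture file names and the sample hop lists are assumptions for illustration, and the final destination line is dropped before comparing because the two runs target different hosts.

```shell
#!/bin/sh
# Sketch: compare the intermediate hops of two captured traceroute
# runs. On the real nodes you would populate the files with, e.g.:
#   traceroute -i bondeth1 zdlra1repl02.domain.com > /tmp/trace_node1.txt
A=${A:-/tmp/trace_node1.txt}
B=${B:-/tmp/trace_node2.txt}
[ -e "$A" ] || printf ' 1  192.133.241.193\n 2  192.133.241.197\n' > "$A"
[ -e "$B" ] || printf ' 1  192.133.241.193\n 2  192.133.241.196\n' > "$B"
# sed '$d' drops the last line (the destination) from each trace
sed '$d' "$A" > /tmp/hops_a
sed '$d' "$B" > /tmp/hops_b
if cmp -s /tmp/hops_a /tmp/hops_b; then
  echo "intermediate hops match"
else
  echo "hops differ - check routing"
fi
```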
Note:

Step 11 - As the ORACLE user, verify that the bondeth1 interfaces are listed
Ensure they are listed with the correct IP addresses and subnet masks.
$ /u01/app/12.1.0.2/grid/bin/oifcfg iflist -p -n
Expected output:

Step 12 - As the ROOT user, verify that network #2 is NOT being used
Note: the command should fail with the error below.
# /u01/app/12.1.0.2/grid/bin/srvctl config network -k 2
Expected output:

Step 13 - Add network #2 as the replication network
# /u01/app/12.1.0.2/grid/bin/srvctl add network -k 2 -S 192.133.241.192/255.255.255.192/bondeth1
Step 14 - Verify the new network is configured correctly
# /u01/app/12.1.0.2/grid/bin/srvctl config network -k 2
Expected output:

Step 15 - Add the replication VIPs and SCAN to the new CRS network
# /u01/app/12.1.0.2/grid/bin/srvctl add vip -n zdlra1adm01 -A zdlra1repl01-vip.domain.com/255.255.255.192/bondeth1 -k 2
# /u01/app/12.1.0.2/grid/bin/srvctl add vip -n zdlra1adm02 -A zdlra1repl02-vip.domain.com/255.255.255.192/bondeth1 -k 2
# /u01/app/12.1.0.2/grid/bin/srvctl add scan -netnum 2 -scanname zdlra1repl-scan.domain.com

Step 16 - Start the replication VIPs on the new CRS network
# /u01/app/12.1.0.2/grid/bin/srvctl start vip -i zdlra1repl01-vip.domain.com
# /u01/app/12.1.0.2/grid/bin/srvctl start vip -i zdlra1repl02-vip.domain.com

Step 17 - Verify that the VIPs are running
$ srvctl status vip -vip zdlra1repl01-vip.domain.com
Step 18 - As the ORACLE user, add and start the database listener and SCAN listener
$ /u01/app/12.1.0.2/grid/bin/srvctl add listener -l LISTENER_REPL -p 1522 -k 2
$ /u01/app/12.1.0.2/grid/bin/srvctl start listener -l LISTENER_REPL
$ /u01/app/12.1.0.2/grid/bin/srvctl status listener -l LISTENER_REPL
Expected output:

$ srvctl add scan_listener -netnum 2 -listener LISTENER_REPL -endpoints TCP:1522
$ srvctl start scan_listener -netnum 2
$ srvctl status scan_listener -netnum 2
Expected output:

As SYSDBA, alter listener_networks to include the new zdlra1repl-scan1 for network 2 on each Recovery Appliance node.

On Node1:
ALTER SYSTEM SET listener_networks=
'((NAME=net1)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.62.69)(PORT=1521)))))',
'((NAME=net1)(REMOTE_LISTENER=zdlra1ingest-scan1.us.oracle.com:1521))',
'((NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.241.202)(PORT=1522)))))',
'((NAME=net2)(REMOTE_LISTENER=zdlra1repl-scan1.us.oracle.com:1522))'
SCOPE=SPFILE SID='zdlra1';

On Node2:
ALTER SYSTEM SET listener_networks=
'((NAME=net1)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.62.70)(PORT=1521)))))',
'((NAME=net1)(REMOTE_LISTENER=zdlra1ingest-scan1.us.oracle.com:1521))',
'((NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.133.241.203)(PORT=1522)))))',
'((NAME=net2)(REMOTE_LISTENER=zdlra1repl-scan1.us.oracle.com:1522))'
SCOPE=SPFILE SID='zdlra2';
Step 19 - Verify the interfaces can be pinged
$ ping -c 3 zdlra1repl01-vip

Configuring the Replication Network for Replication

Step 1 - Verify the hosts table in the Recovery Appliance
Log in to SQL*Plus as the rasys schema owner:
$ sqlplus rasys/ra
SQL> select node_name,replication_ip_address from host;
NODE_NAME
Step 2 - Update the replication_ip_address
SQL> update HOST set REPLICATION_IP_ADDRESS='192.133.241.196' where NODE_NAME = 'zdlra1adm01.domain.com';
1 row updated.
SQL> update HOST set REPLICATION_IP_ADDRESS='192.133.241.197' where NODE_NAME = 'zdlra1adm02.domain.com';
1 row updated.
SQL> commit;
Configuring the Replication Network as Ingest

Step 1 - As RASYS, update the BACKUP_IP_ADDRESS column
Query the current settings from rai_host:
SQL> SELECT * FROM rai_host;

NODE_NAME
----------------------------------
zdlra1adm01.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.121
BACKUP_IP_ADDRESS
----------------------------------
10.133.62.69
REPLICATION_IP_ADDRESS
----------------------------------

NODE_NAME
----------------------------------
zdlra1adm02.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.122
BACKUP_IP_ADDRESS
----------------------------------
10.133.62.70
REPLICATION_IP_ADDRESS
----------------------------------
Step 2 - Update the backup_ip_address value
SQL> UPDATE rai_host SET backup_ip_address='10.133.241.202,'|| backup_ip_address WHERE node_name = 'zdlra1adm01.us.oracle.com';
SQL> UPDATE rai_host SET backup_ip_address='10.133.241.203,'|| backup_ip_address WHERE node_name = 'zdlra1adm02.us.oracle.com';

Step 3 - Save your changes to the table
SQL> commit;
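The UPDATE statements above prepend the new replication IP to the existing comma-separated backup_ip_address list via the '...'||backup_ip_address concatenation, so the new interface is tried first. A minimal shell sketch of the same string operation (the function name and output files are illustrative assumptions):

```shell
#!/bin/sh
# Sketch: mirror the SQL concatenation that prepends a new IP to an
# existing comma-separated backup_ip_address list.
prepend_ip() {  # $1 = new IP, $2 = existing list
  printf '%s,%s\n' "$1" "$2"
}
prepend_ip 10.133.241.202 10.133.62.69 | tee /tmp/bk1
prepend_ip 10.133.241.203 10.133.62.70 | tee /tmp/bk2
```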
Step 4 - Query rai_host to verify the settings were updated correctly
SQL> SELECT * FROM rai_host;

NODE_NAME
----------------------------------
zdlra1adm01.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.121
BACKUP_IP_ADDRESS
----------------------------------
10.133.241.202,10.133.62.69
REPLICATION_IP_ADDRESS
----------------------------------

NODE_NAME
----------------------------------
zdlra1adm02.us.oracle.com
ADMIN_IP_ADDRESS
----------------------------------
10.133.40.122
BACKUP_IP_ADDRESS
----------------------------------
10.133.241.203,10.133.62.70
REPLICATION_IP_ADDRESS
----------------------------------

Step 5 - Verify the current DISPATCHERS parameter setting as SYSDBA on each node
SQL> show parameter dispatcher;
NAME              TYPE     VALUE
----------------- -------- ------------------------------
dispatchers       string   (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=zdlra1ingest01-vip.us.oracle.com))(SDU=65536))(SERVICE=ZDLRAF1XDB)(DISPATCHERS=4)
max_dispatchers   integer

Step 6 - As SYSDBA, update the DISPATCHERS parameter
On Node1:
SQL> ALTER SYSTEM SET dispatchers='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=zdlra1repl01-vip.us.oracle.com))(SDU=65536))(SERVICE=ZDLRAXDB)(DISPATCHERS=4)' SCOPE=BOTH SID='zdlra1';

On Node2:
SQL> ALTER SYSTEM SET dispatchers='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=zdlra1repl02-vip.us.oracle.com))(SDU=65536))(SERVICE=ZDLRAXDB)(DISPATCHERS=4)' SCOPE=BOTH SID='zdlra2';
Step 7 - Verify the DISPATCHERS parameter settings have been updated correctly
SQL> show parameter dispatcher;
max_dispatchers   integer

Step 8 - Setting up backup over different ingest networks
The following is an overview of a Recovery Appliance configured with 3 different ingest networks. Network 3 is an additional ingest network configured on the InfiniBand network.

Verify the current order of the IP addresses in the rai_host table as RASYS on one of the Recovery Appliance compute nodes. If a client host is capable of routing to multiple networks in the Recovery Appliance's backup_ip_address list, client backups will run over the first routable network in the list. Consider this when choosing the order of the IP addresses in the backup_ip_address list.

SQL> select backup_ip_address from rai_host;
If required, update backup_ip_address with a list that prioritizes the interfaces specific to your network configuration:

SQL> UPDATE rai_host SET backup_ip_address='10.133.241.202, 10.133.62.69, 192.168.40.165' WHERE node_name = 'zdlra1adm01.us.oracle.com';
SQL> UPDATE rai_host SET backup_ip_address='10.133.241.203, 10.133.62.70, 192.168.40.167' WHERE node_name = 'zdlra1adm02.us.oracle.com';
SQL> commit;
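The first-routable-wins behaviour described above can be sketched as a small selection loop. This is illustrative only: the CLIENT_PREFIX variable and the crude prefix match stand in for the client's real routing-table lookup, and the list is the node #1 example from the UPDATE above.

```shell
#!/bin/sh
# Sketch: walk a backup_ip_address list and pick the first address the
# client can route to. The /24-prefix match is an assumed stand-in for
# a real routing decision.
LIST='10.133.241.202, 10.133.62.69, 192.168.40.165'
CLIENT_PREFIX=${CLIENT_PREFIX:-10.133.62}   # assumed client subnet
chosen=''
OLDIFS=$IFS; IFS=,
for ip in $LIST; do
  ip=$(echo "$ip" | tr -d ' ')              # strip padding spaces
  case $ip in
    "$CLIENT_PREFIX".*) chosen=$ip; break ;;
  esac
done
IFS=$OLDIFS
echo "client backs up over: ${chosen:-none}"
echo "$chosen" > /tmp/chosen_ip
```

With the sample list, a client on 10.133.62.x skips the replication address and lands on the original ingest interface, which is why the list ordering matters.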
Step 9 - Test a backup of a client DB
rman target / catalog vpc-user/password@zdlra1ingest-scan1:1521/zdlra1:dedicated
RMAN> BACKUP DEVICE TYPE SBT_TAPE INCREMENTAL LEVEL 0 DATABASE INCLUDE CURRENT CONTROLFILE PLUS ARCHIVELOG DELETE INPUT;
Attachments
This solution has no attachment