Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-2071568.1
Update Date: 2018-04-25
Keywords:

Solution Type: Technical Instruction

Solution  2071568.1 :   MaxRep: How to Separate Data Path from Management Path Traffic  


Related Items
  • Pillar Axiom Replication Engine (MaxRep)
Related Categories
  • PLA-Support>Sun Systems>DISK>Axiom>SN-DK: MaxRep-2x




In this Document
Goal
Solution
 Procedure
 Performance


Oracle Confidential PARTNER - Available to partners (SUN).
Reason: These steps should not be attempted by customers due to the risk of misconfiguring their existing solutions and because they do not have access to the Best Practices document.

Applies to:

Pillar Axiom Replication Engine (MaxRep) - Version 3.0 to 3.0 [Release 3.0]
Information in this document applies to any platform.

Goal

Ethernet data paths are used to synchronize data between engines in an Asynchronous environment.

The goal of this document is to explain how to separate the data paths from the management traffic and/or how to add more network interfaces.

This document does not apply to MaxRep R2.

 

Solution

This document uses an Asynchronous configuration with High Availability (HA) as an example (four engines in total). This is a common MaxRep R3 configuration, and the procedure has so far been used with this setup. The same steps also apply to an Asynchronous configuration without HA.

The steps do not apply to Synchronous configurations, with or without HA, because their data paths use only FC or iSCSI, not Ethernet.

There is no need to add more network cards to segregate the data paths from the management traffic. MgtBond uses eth0 and eth2 (ports on the engine motherboard) and MaxRepAT uses eth1 and eth3 (also on the engine motherboard). The primary role of MaxRepAT is to act as an iSCSI target presenting virtual snapshots to hosts. In this case, the MaxRepAT bonding will also be used for the Ethernet data transfer (iSCSI virtual snapshots can still be used).
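
As a quick sanity check (not part of the original procedure), the bond membership can be confirmed from the engine shell. This assumes the bonds are implemented as standard Linux bonding devices and that the device names below match the names the engine actually uses:

# Assumed bond device names; adjust to whatever "ip addr show" reports.
cat /proc/net/bonding/MgtBond     # expected slaves: eth0 and eth2
cat /proc/net/bonding/MaxRepAT    # expected slaves: eth1 and eth3
ip addr show                      # lists all interfaces, bonds and their IP addresses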

The following modifications might disrupt current replications (volume pairs going into Resync Required).

Procedure

  1. Assign a separate IP address to the MaxRepAT bonding on all the engines using the support page of the Control Service engine (click on Configure Networking, select the engine, click on MaxRep AT and provide the network settings).

The MaxRepAT bonding does not need to be on the same subnet as MgtBond; there is a gateway field, and the DNS server does not need to be functional on that network, but the field must still be populated. Log out from the support page once all the bonding interfaces have been configured.
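
Before creating the mappings, it can be useful to confirm basic reachability on the new network. The following is a suggested check, not part of the original procedure, run from the shell of a source engine and using the example addresses from the tables below:

# From CO-INMAGE-51: ping the MaxRepAT address of the DR engine,
# forcing the local MaxRepAT address as the source.
ping -c 3 -I 192.168.10.13 192.168.10.53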

 

  2. On the Control Service GUI, go to Settings > Advanced Configuration > Process Server Load Balancing.
     

Process Service Traffic Load Balancing 

 

Use the following steps to map the data paths to the MaxRepAT interface; by default, the data paths use the same mapping as the management interface (MgtBond).

This is the configuration used in this example:

 

Cluster PROD

  Engine          MgtBond          MaxRepAT
  CO-INMAGE-51    10.79.170.221    192.168.10.13
  CO-INMAGE-54    10.79.170.224    192.168.10.43

Cluster DR (Control Service)

  Engine          MgtBond          MaxRepAT
  CO-INMAGE-55    10.79.170.225    192.168.10.53
  CO-INMAGE-56    10.79.170.226    192.168.10.63

 

Each cluster also has a cluster IP address, which is not listed above to avoid confusion. The configuration is performed using the cluster IP of the site designated as Control Service.

The next steps cover the creation of four combinations for the first direction: PROD -> DR (source -> target). Only one combination is used at a time, but the remaining combinations must be created to cover a failover at the source and/or target sites.

  3. First combination:
    1. Under the first table on the left (Select Volume Replication Agent), select an engine at the target site (CO-INMAGE-55 or CO-INMAGE-56), in this case CO-INMAGE-55.

    2. Under the second table (Select Process Service), select an engine at the source site (CO-INMAGE-51 or CO-INMAGE-54), in this case CO-INMAGE-51.

    3. Under the third table (Select NIC to Map), select the MaxRepAT interface, in this case 192.168.10.13 (it belongs to the source engine selected under the second table, CO-INMAGE-51).

    4. Click on the Save button.

    5. When prompted by the system to confirm your settings, click OK.

    6. (Optional: in case of mistakes) To delete any of the previously configured mappings, select the mapped item from the Already configured Agent-Process Server NIC Mapping table and then click Delete.

Process Service Traffic Load Balancing 1st combination

 

  4. Second combination:

Repeat the same steps as the first combination using the following: the other engine for Select Volume Replication Agent (CO-INMAGE-56), the same engine at the source site for Select Process Service (CO-INMAGE-51) and select the MaxRepAT interface of the source engine for Select NIC to Map.

  5. Third combination:

Repeat the same steps as above with the following values: use the first engine for Select Volume Replication Agent (CO-INMAGE-55), the second engine at the source site for Select Process Service (CO-INMAGE-54) and select the MaxRepAT interface of the source engine for Select NIC to Map.

  6. Fourth combination:

Repeat the same steps as above with the following values: use the second engine for Select Volume Replication Agent (CO-INMAGE-56), the same engine at the source site for Select Process Service (CO-INMAGE-54) and select the MaxRepAT interface of the source engine for Select NIC to Map.

 

Process Service Traffic Load Balancing all 4 combinations
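
For reference, the four PROD -> DR mappings created above are:

  Select Volume Replication Agent    Select Process Service    Select NIC to Map (MaxRepAT)
  CO-INMAGE-55                       CO-INMAGE-51              192.168.10.13
  CO-INMAGE-56                       CO-INMAGE-51              192.168.10.13
  CO-INMAGE-55                       CO-INMAGE-54              192.168.10.43
  CO-INMAGE-56                       CO-INMAGE-54              192.168.10.43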

 

Once all the mappings are done, the MaxRepAT interfaces of the active engines at each site will be serving data transfers from the PROD site to the DR site.

To verify that the data transfers are now happening using a separate network, ssh to the cluster IP of the source engines and run the following command to check the connections on port 9080 (it might take a few minutes to see connections on the new network):

[root@co-inmage-51 ~]# watch -n 5 'netstat -apn | grep :9080'

Every 5.0s: netstat -apn | grep :9080
Mon Nov  2 05:03:19 2015

tcp        0      0 0.0.0.0:9080                0.0.0.0:*                   LISTEN      30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55135         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55062         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55065         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55064         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55063         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55070         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55071         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:55134         ESTABLISHED 30989/cxps

The process handling the connections on the source engine is cxps.

Run the same command at the target site; ssh to the cluster IP of the target engines:

Every 5.0s: netstat -apn | grep :9080
Mon Nov  2 05:09:43 2015

tcp        0      0 0.0.0.0:9080                0.0.0.0:*                   LISTEN      6076/cxps
tcp        0      0 192.168.10.53:55062         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55070         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55134         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55135         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55065         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55071         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55064         192.168.10.13:9080          ESTABLISHED 24276/cachemgr
tcp        0      0 192.168.10.53:55063         192.168.10.13:9080          ESTABLISHED 24276/cachemgr

 

The process handling the connections on the target engine is cachemgr.


In this output, the IP address of the source engine is the one associated with TCP port 9080 (consistent with the output from the active engine at the source site).
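
As an additional cross-check, not part of the original steps, it is possible to confirm that no replication connections on port 9080 are still established over the MgtBond subnet used in this example (10.79.170.x):

# Run on either active engine; this should return no ESTABLISHED lines
# once the mappings are in place.
netstat -apn | grep :9080 | grep 10.79.170.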

  7. Additional configuration for bi-directional replication:

If some of the LUNs are used as source LUNs at the DR site, the following steps must also be performed in order to replicate from DR -> PROD (source -> target). Four additional combinations are needed.

  1. Select one of the engines of the cluster with the target LUN(s) for Select Volume Replication Agent (CO-INMAGE-51), select one of the engines of the cluster with the source LUNs for Select Process Service (CO-INMAGE-55) and select the MaxRepAT interface of the source engine for Select NIC to Map.

  2. Select the other engine with the target LUN(s) for Select Volume Replication Agent (CO-INMAGE-54), select the same engine with the source LUNs for Select Process Service (CO-INMAGE-55) and select the MaxRepAT interface of the source engine for Select NIC to Map.

  3. Select the first engine with the target LUN(s) for Select Volume Replication Agent (CO-INMAGE-51), select the other engine with the source LUNs for Select Process Service (CO-INMAGE-56) and select the MaxRepAT interface of the source engine for Select NIC to Map.

  4. Select the second engine with the target LUN(s) for Select Volume Replication Agent (CO-INMAGE-54), select the same engine with the source LUNs for Select Process Service (CO-INMAGE-56) and select the MaxRepAT interface of the source engine for Select NIC to Map.

Process Service Traffic Load Balancing all 8 combinations
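
For reference, the four additional DR -> PROD mappings are:

  Select Volume Replication Agent    Select Process Service    Select NIC to Map (MaxRepAT)
  CO-INMAGE-51                       CO-INMAGE-55              192.168.10.53
  CO-INMAGE-54                       CO-INMAGE-55              192.168.10.53
  CO-INMAGE-51                       CO-INMAGE-56              192.168.10.63
  CO-INMAGE-54                       CO-INMAGE-56              192.168.10.63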

 

This is the output on the PROD active engine with replications going both ways:

Every 5.0s: netstat -apn | grep :9080
Mon Nov  2 18:02:25 2015

tcp        0      0 0.0.0.0:9080                0.0.0.0:*                   LISTEN      30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:41854         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:40174         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:40175         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:38736         192.168.10.53:9080          ESTABLISHED 723/cachemgr
tcp        0      0 192.168.10.13:38176         192.168.10.53:9080          ESTABLISHED 723/cachemgr
tcp        0      0 192.168.10.13:9080          192.168.10.53:41853         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:37377         192.168.10.53:9080          ESTABLISHED 723/cachemgr
tcp        0      0 192.168.10.13:9080          192.168.10.53:41566         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:41567         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:37378         192.168.10.53:9080          ESTABLISHED 723/cachemgr
tcp        0      0 192.168.10.13:9080          192.168.10.53:40296         ESTABLISHED 30989/cxps
tcp        0      0 192.168.10.13:9080          192.168.10.53:40295         ESTABLISHED 30989/cxps

 

This is the output on the DR active engine with replications going both ways (same pattern as above):

Every 5.0s: netstat -apn | grep :9080
Mon Nov  2 06:33:20 2015

tcp        0      0 0.0.0.0:9080                0.0.0.0:*                   LISTEN      6076/cxps
tcp        0      0 192.168.10.53:42221         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:9080          192.168.10.13:39126         ESTABLISHED 6076/cxps
tcp        0      0 192.168.10.53:42220         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:42217         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:42218         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:41853         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:41854         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:42216         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:9080          192.168.10.13:39124         ESTABLISHED 6076/cxps
tcp        0      0 192.168.10.53:9080          192.168.10.13:38176         ESTABLISHED 6076/cxps
tcp        0      0 192.168.10.53:42219         192.168.10.13:9080          ESTABLISHED 28957/cachemgr
tcp        0      0 192.168.10.53:9080          192.168.10.13:39125         ESTABLISHED 6076/cxps

 

Performance

For performance enhancements, it is possible to add three Niantic cards (dual optical Ethernet ports with SFI/SFP+) to each engine (see the MaxRep R3.X for SAN Best Practices Guide for more information). The data transfer with the FS1s is done over FC and the data transfer between the engines is done over the optical network cards.

This configuration has four network bonds: MgtBond, MaxRepAT, AiForSource and AiForTarget. MgtBond uses eth0 and eth2 on the motherboard and the three other bonds use the optical SFP+ ports on the cards.

As an example, for bi-directional replication the engines can be configured with AiForSource as the interface used to replicate from the source site to the DR site, and AiForTarget as the interface used to replicate from the DR site to the source site, each on a separate subnet. MaxRepAT does not have to be used.

The same kind of configuration can be achieved using two Twinville cards (dual copper Ethernet ports) on each engine (no need for a third card because the copper ports eth1 and eth3 on the motherboard are also used). It might be possible to achieve the optical configuration with two Niantic cards instead of three, but that configuration is not supported via the GUI and is beyond the scope of this document.
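
When several bonds carry replication traffic, a simple way to confirm which bond is actually being used is to watch the per-interface counters from the engine shell. This is a generic suggestion rather than guidance from the Best Practices Guide, and the tools below are assumed to be available on the engine:

# Per-interface throughput, refreshed every 5 seconds (requires the sysstat package):
sar -n DEV 5
# Or watch the raw byte counters for all interfaces, including the bonds:
watch -n 5 'cat /proc/net/dev'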


Attachments
This solution has no attachment