
Asset ID: 1-71-2063071.1
Update Date:2018-01-30
Keywords:

Solution Type: Technical Instruction

Solution 2063071.1: SuperCluster: Enabling 802.1Q VLAN


Related Items
  • Oracle SuperCluster T5-8 Full Rack
  • Oracle SuperCluster T5-8 Half Rack
  • Solaris Operating System
  • Oracle SuperCluster M6-32 Hardware
  • SPARC SuperCluster T4-4
  • Oracle SuperCluster T5-8 Hardware
Related Categories
  • PLA-Support>Eng Systems>Exadata/ODA/SSC>SPARC SuperCluster>DB: SuperCluster_EST


802.1Q Virtual LANs (VLANs) can be configured on the SuperCluster 10Gb Ethernet Client Access Network. VLAN is a standard technology for network segregation. Using VLAN  in conjunction with Solaris Zones is a good starting point to create multiple segregated environments on a single  SuperCluster Engineered System.

In this Document
Goal
Solution
 Background
 Assumptions
 Creating a VLAN in a Solaris 11 Dedicated Domain
 Creating a VLAN in an IO Domain
 Creating a VLAN in a Solaris 11 Zone
 Creating a VLAN in a Solaris 10 Dedicated Domain
 Configuring Clusterware, Listener And Database With The VLAN
 Register The VLAN Subnet In The Clusterware
 Configure The Database To Use the VLAN Listener
References


Applies to:

Oracle SuperCluster T5-8 Full Rack - Version All Versions and later
Solaris SPARC Operating System - Version 10 1/13 U11 to 11.2 [Release 10.0 to 11.0]
Oracle SuperCluster M6-32 Hardware - Version All Versions and later
SPARC SuperCluster T4-4 - Version All Versions and later
Oracle SuperCluster T5-8 Hardware - Version All Versions and later
Oracle Solaris on SPARC (64-bit)

Goal

 This document explains how to create VLANs on the SuperCluster Client Access Network, in Application and Database Domains, and in Application and Database Zones.

Solution

Background

For payload traffic, the SuperCluster is accessed through a 10Gb Ethernet Client Access Network. If there is a need to access the SuperCluster from multiple subnets, Virtual LANs (VLANs) can be used.

VLANs can be configured during the initial install and configuration performed by Oracle Advanced Customer Services (ACS). This document explains how to create an additional VLAN once the SuperCluster is already configured.

VLAN (a.k.a. VLAN tagging) is a standard Ethernet technology for network segregation. Using VLAN  in conjunction with Solaris Zones is a good starting point to create multiple segregated environments on a single  SuperCluster.

Assumptions

The following assumptions apply throughout this document:

  1. The changes only affect the SuperCluster Domains and Solaris Zones. No changes are permitted on the storage cells
  2. The peer Ethernet switches to which the SuperCluster is connected are configured with the VLAN
  3. Clusterware version 11.2.0.2 or higher is required for the VLAN to be used by databases
  4. The examples use VLAN ID 10 with the IP subnet 192.168.10.0/24 (netmask 255.255.255.0)

 

Note: The steps and examples use a convention for the OS prompt to indicate which user runs the different commands. When the prompt is (root)#, the command is run as root. When the prompt is (oracleGI)$, the command is run as the Oracle Grid Infrastructure software owner account. When the prompt is (oracleDB)$, the command is run as the Oracle RDBMS software owner account. In many cases, the Oracle Grid Infrastructure and RDBMS owners may be the same - often "oracle" for both.

Creating a VLAN in a Solaris 11 Dedicated Domain

Dedicated Domains are created by ACS during the initial install and configuration of the SuperCluster - as opposed to IO Domains, which can be created by the customer at any time during the SuperCluster life-cycle (IO Domains require SuperCluster software 2.0 or higher). A Dedicated Domain that hosts an Oracle Database using the storage cells is a DB Domain - as opposed to an App Domain, which cannot access the storage cells over the InfiniBand interconnect. The following instructions apply to both DB and App Domains.

Creating an additional VLAN in a Solaris 11 Dedicated Domain consists of the following steps:

  1. Identify the two 10GbE physical interfaces that provide the Domain with access to the Client Access Network
  2. Create the VLAN on each of the two physical interfaces using the dladm command
  3. Set up IPMP for the VLAN using the ipadm command

On SuperCluster, network interfaces are paired in IP Multipathing (IPMP) groups that provide redundancy. For the Client Access Network these groups are named sc_ipmp0 in App Domains and bondeth0 in DB Domains.

  
Note: On SuperCluster, for detecting interface failures the IPMP link-based detection mode is used (as opposed to the probe-based detection mode).
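The failure-detection mode in use can be confirmed with an optional ipmpstat check. The sketch below assumes the standard Solaris 11 ipmpstat interface view, where the PROBE column reports "disabled" when only link-based detection is active; the exact output layout may vary by release:

# Optional check (not part of the procedure): with link-based detection only,
# no probe targets are configured and the PROBE column shows "disabled".
(root)# ipmpstat -i -o INTERFACE,GROUP,LINK,PROBE,STATE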
  

Identify the 10GbE physical interfaces used by this group. As an example, in a DB Domain:

(root)# ipmpstat -g -o GROUP,INTERFACES | grep bondeth0
bondeth0    net4 (net5)

The bondeth0 group uses interfaces net4 and net5. The first is active while the second is standby. To ease load balancing between the two interfaces, the VLAN IPMP group can be configured the opposite way, with an active link on net5 and a standby on net4.

Create a VLAN with ID 10 on each interface:

(root)# dladm create-vlan -l net4 -v 10 net4vlan10
(root)# dladm create-vlan -l net5 -v 10 net5vlan10

(root)# dladm show-vlan
LINK                VID  SVID PVLAN-TYPE  FLAGS  OVER
net4vlan10          10   --   --          -----  net4
net5vlan10          10   --   --          -----  net5

This completes the network layer 2 configuration (Ethernet VLAN). Next comes layer 3 (IP and IPMP).

Create an IP interface on each VLAN link:

(root)# ipadm create-ip net4vlan10
(root)# ipadm create-ip net5vlan10
(root)# ipadm show-if
IFNAME     CLASS    STATE    ACTIVE OVER
...
net4vlan10 ip       down     no     --
net5vlan10 ip       down     no     --

 

Create a vlan10_ipmp0 IPMP group and add the IP interfaces to it:

(root)# ipadm create-ipmp vlan10_ipmp0
(root)# ipadm add-ipmp -i net4vlan10,net5vlan10 vlan10_ipmp0
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0 net4vlan10 (net5vlan10)

 Make net5vlan10 the active interface, net4vlan10 the standby one:

(root)# ipadm set-ifprop -p standby=off -m ip net5vlan10
(root)# ipadm set-ifprop -p standby=on -m ip net4vlan10
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0 net5vlan10 (net4vlan10)

 

An IP address from the VLAN subnet can now be assigned to the IPMP group: 

(root)# ipadm create-addr -T static -a 192.168.10.10/24 vlan10_ipmp0/v4
(root)# ipadm show-addr | grep vlan10
vlan10_ipmp0/v4   static   ok           192.168.10.10/24
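As an optional verification, basic reachability on the new VLAN can be tested at this point. The sketch below assumes a reachable peer on the VLAN subnet - 192.168.10.1 is used here as a hypothetical gateway address - and relies only on standard Solaris 11 commands:

# Optional check: confirm the address state, then ping an assumed peer on VLAN 10.
(root)# ipmpstat -a -o ADDR,STATE,GROUP | grep 192.168.10
(root)# ping 192.168.10.1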

Creating a VLAN in an IO Domain

IO Domains can be created at any time during the SuperCluster life-cycle. They are created with the IO Domain Creation Tool (a.k.a. IODCT) that is part of SuperCluster software 2.0 or higher. As of this writing, IODCT does not cover VLAN creation; the VLAN is created after the IO Domain itself has been created.

IO Domains run Solaris 11. An IO Domain that hosts an Oracle Database using the storage cells is a DB Domain - as opposed to an App IO Domain, which cannot access the storage cells over the InfiniBand interconnect. The following instructions apply to both DB and App Domains, and whether the VLAN is created directly in the IO Domain or in a Solaris Zone that runs in the IO Domain.

IO Domains are connected to the Client Access Network through SR-IOV Virtual Functions (VFs). These VFs must be modified to enable not only the VLAN ID but also the creation of VLAN interfaces on top of them in the IO Domain.

Creating a VLAN in an IO Domain consists of the following steps:

  1. Identify the Primary Domain hosting the IO Domain
  2. From the Primary Domain, identify the two VFs that provide access to the Client Access Network
  3. From the Primary Domain, halt the IO Domain, modify the two VFs, and restart the IO Domain
  4. From the IO Domain, identify the interfaces to the Client Access Network and create VLANs on top of them
  5. Set up IPMP on top of the VLANs

Steps 4 and 5 are not required if the VLAN is to be created in a Solaris Zone inside the IO Domain.

Identify the Primary Domain hosting the IO Domain

(root)# virtinfo -a
Domain role: LDoms guest I/O
Domain name: ssccn1-io-dom01
Domain UUID: 074846e8-246e-4f16-8e21-e4ae3183f6bd
Control domain: primdom01
Chassis serial#: AK88888887

In this example, the Primary Domain is named primdom01. Note that this is a Domain name and that it may not match the hostname of the Primary Domain. If need be, refer to the configuration worksheet of the SuperCluster to find the Primary Domain hostname and connect to it.

Identify the two VFs that provide access to the Client Access Network

Connect to the Primary Domain and run the ldm list-io command. Search for the IO Domain name in the command's output:

(root@primdom01)# ldm list-io | grep ssccn1-io-dom01 | grep VNET
/SYS/RCSA/PCIE6/IOVNET.PF0.VF2 VF pci_12 ssccn1-io-dom01
/SYS/RCSA/PCIE6/IOVNET.PF1.VF2 VF pci_12 ssccn1-io-dom01

The IO Domain uses two VFs, both named VF2, each sitting on a different physical function (PF0 and PF1).


Halt the IO Domain, modify the two VFs, and restart the IO Domain:

(root@primdom01)# ldm stop ssccn1-io-dom01
LDom ssccn1-io-dom01 stopped

(root@primdom01)# ldm set-io vid=10 alt-mac-addrs=auto,auto,auto /SYS/RCSA/PCIE6/IOVNET.PF0.VF2
(root@primdom01)# ldm set-io vid=10 alt-mac-addrs=auto,auto,auto /SYS/RCSA/PCIE6/IOVNET.PF1.VF2

# Check result
(root@primdom01)# ldm list-io -l | grep ssccn1-io-dom01
/SYS/RCSA/PCIE6/IOVNET.PF0.VF2 VF pci_12 ssccn1-io-dom01
vlan IDs = 10
/SYS/RCSA/PCIE6/IOVNET.PF1.VF2 VF pci_12 ssccn1-io-dom01
vlan IDs = 10


(root@primdom01)# ldm start ssccn1-io-dom01
LDom ssccn1-io-dom01 started

Note that the vid and the alt-mac-addrs attributes are set on the VFs. The second enables the creation of VLANs or VNICs on top of the VF by reserving slots for additional MAC addresses: one slot is reserved for each 'auto' keyword. In this example three slots are reserved. Each slot can be used to create a VLAN in the IO Domain, or a VLAN in a Solaris Zone hosted by the IO Domain, so up to three Solaris Zones connected to VLAN(s) could be created here.
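The same mechanism extends to more MAC slots or more VLAN IDs if needed. The following is an illustrative sketch only - VLAN ID 20 and the five 'auto' keywords are assumptions, not part of this example's configuration, and it assumes the vid property accepts a comma-separated list of VLAN IDs - run from the Primary Domain while the IO Domain is stopped:

# Hypothetical variant: allow VLAN IDs 10 and 20 and reserve five MAC slots per VF
(root@primdom01)# ldm set-io vid=10,20 alt-mac-addrs=auto,auto,auto,auto,auto /SYS/RCSA/PCIE6/IOVNET.PF0.VF2
(root@primdom01)# ldm set-io vid=10,20 alt-mac-addrs=auto,auto,auto,auto,auto /SYS/RCSA/PCIE6/IOVNET.PF1.VF2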

From now on, the instructions are similar to the ones used to create a VLAN in a Dedicated Domain.

Identify the interfaces to the Client Access Network

On SuperCluster, network interfaces are paired in IP Multipathing (IPMP) groups that provide network redundancy. For the Client Access Network these groups are named sc_ipmp0 in App Domains and bondeth0 in DB Domains. These groups can be used to identify the VFs. As an example, in an App IO Domain:

(root)# ipmpstat -g -o GROUP,INTERFACES | grep sc_ipmp0
sc_ipmp0 net0 (net1)

The sc_ipmp0 group uses interfaces net0 (active) and net1 (standby). These are the VFs that have just been modified.
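Optionally, the alternate MAC address slots reserved earlier can be listed from inside the IO Domain. This is a sketch based on the standard dladm MAC-address view; the exact columns shown may differ between Solaris 11 updates:

# Optional check: list the MAC address slots available on the VF-backed links.
(root)# dladm show-phys -m net0
(root)# dladm show-phys -m net1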

Create the VLAN 10 on top of the VFs:

(root)# dladm create-vlan -l net0 -v 10 net0vlan10
(root)# dladm create-vlan -l net1 -v 10 net1vlan10

(root)# dladm show-vlan
LINK                VID  SVID PVLAN-TYPE  FLAGS  OVER
net0vlan10          10   --   --          -----  net0
net1vlan10          10   --   --          -----  net1

This completes the network layer 2 configuration (Ethernet VLAN). Next comes layer 3 (IP and IPMP).

Create an IP interface on each VLAN link:

(root)# ipadm create-ip net0vlan10
(root)# ipadm create-ip net1vlan10
(root)# ipadm show-if
IFNAME     CLASS    STATE    ACTIVE OVER
...
net0vlan10 ip       down     no     --
net1vlan10 ip       down     no     --

Create a vlan10_ipmp0 IPMP group and add the IP interfaces to it:

(root)# ipadm create-ipmp vlan10_ipmp0
(root)# ipadm add-ipmp -i net0vlan10,net1vlan10 vlan10_ipmp0
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0 net0vlan10 (net1vlan10)

Make net1vlan10 the active interface, net0vlan10 the standby one:

(root)# ipadm set-ifprop -p standby=off -m ip net1vlan10
(root)# ipadm set-ifprop -p standby=on -m ip net0vlan10
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0 net1vlan10 (net0vlan10)

An IP address from the VLAN subnet can now be assigned to the IPMP group:

(root)# ipadm create-addr -T static -a 192.168.10.10/24 vlan10_ipmp0/v4
(root)# ipadm show-addr | grep vlan10
vlan10_ipmp0/v4 static ok 192.168.10.10/24

Creating a VLAN in a Solaris 11 Zone

Creating an additional VLAN in a Solaris 11 Zone consists of the following steps:

  1. Identify the two 10GbE physical interfaces that provide access to the Client Access Network for the Domain hosting the Zone
  2. In the Zone configuration, declare two VLAN interfaces, one on each 10GbE interface
  3. Reboot the Zone
  4. In the Zone, set up IPMP for the VLAN

If the Solaris Zone is created in an IO Domain, make sure that VLAN is enabled in the IO Domain first. Please refer to the section Creating a VLAN in an IO Domain.

On SuperCluster, network interfaces are paired in IP Multipathing (IPMP) groups. These groups provide network redundancy. For the Client Access Network they are named sc_ipmp0 in App Domains and bondeth0 in DB Domains.

Identify the 10GbE physical interfaces used by this group. As an example, in an App Domain:

(root)# ipmpstat -g -o GROUP,INTERFACES | grep sc_ipmp0
sc_ipmp0    net0 (net1)

The sc_ipmp0 group uses interfaces net0 and net1. The first is active while the second is standby. To ease load balancing between the two interfaces, the VLAN IPMP group in the Zone can be configured the opposite way, with an active link on net1 and a standby on net0.

Using anet sections, declare two Virtual NICs (VNICs) in the Zone configuration, namely net0vlan10 and net1vlan10. The physical interface - on top of which a VNIC is located - is declared using the lower-link attribute. Ensure that the VNICs are actually VLAN-tagged by setting their vlan-id attribute. In this example the Zone is named pg1Zone1:

(root)# zonecfg -z pg1Zone1
zonecfg:pg1Zone1> add anet
zonecfg:pg1Zone1:anet> set linkname=net0vlan10
zonecfg:pg1Zone1:anet> set lower-link=net0
zonecfg:pg1Zone1:anet> set vlan-id=10
zonecfg:pg1Zone1:anet> end
zonecfg:pg1Zone1> add anet
zonecfg:pg1Zone1:anet> set linkname=net1vlan10
zonecfg:pg1Zone1:anet> set lower-link=net1
zonecfg:pg1Zone1:anet> set vlan-id=10
zonecfg:pg1Zone1:anet> end
zonecfg:pg1Zone1> exit

 

If the Zone is a DB Zone with Clusterware already running, log in to it with zlogin to stop the Clusterware and disable its automatic restart before rebooting the Zone:

(root)# zlogin pg1Zone1
[Connected to zone 'pg1Zone1' pts/4]
Oracle Corporation    SunOS 5.11    11.1    April 2014
You have new mail.
(root@pg1Zone1)# crsctl stop crs -f
(root@pg1Zone1)# crsctl disable crs

 

Reboot the Zone with zoneadm and log in to it with zlogin:

(root)# zoneadm -z pg1Zone1 reboot
(root)# zlogin pg1Zone1
[Connected to zone 'pg1Zone1' pts/4]
Oracle Corporation    SunOS 5.11    11.1    April 2014
You have new mail.
(root@pg1Zone1)# dladm show-link
net0vlan10          vnic      1500   up       ?
net1vlan10          vnic      1500   down     ?

The two VNICs are now visible in the Zone.

Still in the Zone as root, create a vlan10_ipmp0 IPMP group and add the VNICs to the group:

(root@pg1Zone1)# ipadm create-ipmp vlan10_ipmp0
(root@pg1Zone1)# ipadm create-ip net0vlan10
(root@pg1Zone1)# ipadm create-ip net1vlan10
(root@pg1Zone1)# ipadm add-ipmp -i net0vlan10,net1vlan10 vlan10_ipmp0
(root@pg1Zone1)# ipadm set-ifprop -p standby=on -m ip net0vlan10
(root@pg1Zone1)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0 net1vlan10 [net0vlan10]

 

An IP address from the VLAN subnet can now be assigned to the IPMP group:

(root@pg1Zone1)# ipadm create-addr -T static -a 192.168.10.10/24 vlan10_ipmp0/v4
(root@pg1Zone1)# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
...
vlan10_ipmp0/v4   static   ok           192.168.10.10/24


If the Zone is a DB Zone, restart the Clusterware:

(root@pg1Zone1)# crsctl enable crs
(root@pg1Zone1)# crsctl start crs

  

Creating a VLAN in a Solaris 10 Dedicated Domain

Creating an additional VLAN in a Solaris 10 Dedicated Domain consists of the following steps:

  1. Identify the two 10GbE physical interfaces that provide the Domain with access to the Client Access Network
  2. Configure VLAN-tagged logical interfaces on each of the two physical interfaces and pair these logical interfaces in an IPMP group using the ifconfig command

With Solaris 10 the two physical interfaces for the Client Access Network are identified as follows:

(root)# dladm show-link | grep ixgbe
ixgbe0          type: non-vlan  mtu: 1500       device: ixgbe0
ixgbe1          type: non-vlan  mtu: 1500       device: ixgbe1

ixgbe is the name of the 10GbE device driver. In this example the two physical interfaces are listed in column one: ixgbe0 and ixgbe1.

If more than two interfaces are listed, check which belong to the sc_ipmp0 IPMP group. This is the IPMP group that provides redundancy for the Client Access Network in App Domains:

(root)# ifconfig ixgbe0
ixgbe0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.129.184.65 netmask ffffff00 broadcast 10.129.184.255
        groupname sc_ipmp0
        ether 0:1b:21:c8:60:60

(root)# ifconfig ixgbe1
ixgbe1: flags=69000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 0 index 3
        inet 0.0.0.0 netmask 0
        groupname sc_ipmp0
        ether 0:1b:21:c8:60:61

In this example both interfaces have sc_ipmp0 as their groupname, as expected. Also, ixgbe1 is in standby mode while ixgbe0 is active. To ease load balancing between the two interfaces, the VLAN IPMP group can be configured the opposite way, with an active link on ixgbe1 and a standby on ixgbe0.

The next step is to create VLAN interfaces on both 10GbE links and include them in a vlan10_ipmp0 IPMP group.
The VLAN is specified by the following naming convention for the interface:

     intfNumber = vlanID * 1000 + physInstanceNumber

  vlan=10 and physLink=ixgbe0 => intfNumber=10000 (interface ixgbe10000)
  vlan=10 and physLink=ixgbe1 => intfNumber=10001 (interface ixgbe10001)

(root)# ifconfig ixgbe10001 plumb
(root)# ifconfig ixgbe10000 plumb
(root)# ifconfig ixgbe10001 netmask + broadcast + group vlan10_ipmp0
(root)# ifconfig ixgbe10000 standby  group vlan10_ipmp0
(root)# dladm show-link | grep ixgbe
ixgbe0          type: non-vlan  mtu: 1500       device: ixgbe0
ixgbe10000      type: vlan 10   mtu: 1500       device: ixgbe0
ixgbe1          type: non-vlan  mtu: 1500       device: ixgbe1
ixgbe10001      type: vlan 10   mtu: 1500       device: ixgbe1

 

Make the configuration persistent across reboot by creating the following files in /etc:

(root)# cat /etc/hostname.ixgbe10001
192.168.10.10/24 group vlan10_ipmp0 netmask + broadcast + up


(root)# cat /etc/hostname.ixgbe10000
group vlan10_ipmp0 standby up

 An IP address from the VLAN subnet can now be assigned to the active interface:

(root)# ifconfig ixgbe10001 192.168.10.10/24 up


(root)# ifconfig ixgbe10001
ixgbe10001: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 11
        inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
        groupname vlan10_ipmp0
        ether 0:1b:21:c8:60:61 

(root)# ifconfig ixgbe10000
ixgbe10000: flags=269000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,STANDBY,INACTIVE,CoS> mtu 0 index 12
        inet 0.0.0.0 netmask 0
        groupname vlan10_ipmp0
        ether 0:1b:21:c8:60:60

 

The IPMP failover mechanism can be checked with if_mpadm: detach the active interface and check the result:

(root)# if_mpadm -d ixgbe10001


(root)# ifconfig -a | grep ixgbe
...
ixgbe10000: flags=221000842<BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,CoS> mtu 1500 index 14
ixgbe10000:1: flags=221000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,CoS> mtu 1500 index 14
ixgbe10001: flags=289000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,OFFLINE,CoS> mtu 0 index 13


(root)# ifconfig ixgbe10000:1
ixgbe10000:1: flags=221000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,CoS> mtu 1500 index 14
        inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255

The IP address initially assigned to ixgbe10001 has failed over to ixgbe10000 through the logical instance ixgbe10000:1.

To get back to the initial state, reattach ixgbe10001:

(root)# if_mpadm -r ixgbe10001


(root)# ifconfig ixgbe10001
ixgbe10001: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 13
        inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
        groupname vlan10_ipmp0
        ether 0:1b:21:c8:60:61
(root)# ifconfig -a | grep ixgbe10000
ixgbe10000: flags=269000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,STANDBY,INACTIVE,CoS> mtu 0 index 14

The IP address is back on ixgbe10001 and the logical instance on the standby link - ixgbe10000:1 - has been removed.

Configuring Clusterware, Listener And Database With The VLAN

If an additional VLAN is configured in a DB Domain or DB Zone, some additional work is required for the database to receive requests on this VLAN. The section below covers these steps for a 2-node RAC database (i.e. two DB Domains or two DB Zones). The following assumptions are made:

  1. The VLAN ID is 10. The VLAN IP subnet is 192.168.10.0/24
  2. The two nodes are named client01-vlan10 and client02-vlan10 on the VLAN subnet, using IP addresses 192.168.10.11 and 192.168.10.12

Ensure that the node names and IP addresses of the two nodes on the VLAN are registered in Name Services or in /etc/hosts:

(root)# nslookup client01-vlan10
Server:         w.x.y.z
Address:       w.x.y.z#53

** server can't find client01-vlan10: NXDOMAIN

(root)# getent hosts client01-vlan10
192.168.10.11   client01-vlan10 client01-vlan10.us.oracle.com

In this example the name of the first node is registered in /etc/hosts. Repeat this check for the second node.

Register The VLAN Subnet In The Clusterware

Connect to the first RAC node as oracleGI.

Ensure that the IPMP group set up previously is visible to the Clusterware. Run the command(s) from the bin directory in the Grid home (usually /u01/app/11.2.x.y/grid/bin):

(oracleGI)$ oifcfg iflist
bondeth0  10.x.y.0
bondmgt0  10.x.z.0
bondib0  192.168.8.0
stor_ipmp0  192.168.28.0
vlan10_ipmp0  192.168.10.0

vlan10_ipmp0 is visible to the Clusterware.

Check the network(s) already configured in Clusterware. Only one network (network number 1) exists for most default installations:

(oracleGI)$ srvctl config network
Network exists: 1/10.129.112.0/255.255.240.0/bondeth0, type static

For the next few steps, become root; otherwise the commands fail with the error message PRCN-2018 : Current user oracle is not a privileged user.

Register the VLAN subnet in the Clusterware using commands from the Grid home:

(root)# srvctl add network -k 10 -S 192.168.10.0/255.255.255.0/vlan10_ipmp0 -w static -v
(root)# crsctl start res ora.net10.network
(root)# srvctl config network
Network exists: 1/10.129.112.0/255.255.240.0/bondeth0, type static
Network exists: 10/192.168.10.0/255.255.255.0/vlan10_ipmp0, type static
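As an additional check, the corresponding Clusterware network resource can be queried directly; the resource name ora.net10.network follows from the network number 10 registered above:

# Optional check: the ora.net10.network resource should be ONLINE on both nodes.
(root)# crsctl stat res ora.net10.network -t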

 

Each RAC node uses a single Virtual IP (VIP) on the VLAN, for a total of two VIPs. In this example the VIPs are named client01-vlan10-vip and client02-vlan10-vip, using IP addresses 192.168.10.13 and 192.168.10.14.

As root on both RAC nodes, declare the VIPs in /etc/hosts:

192.168.10.11 client01-vlan10 client01-vlan10.us.oracle.com
192.168.10.12 client02-vlan10 client02-vlan10.us.oracle.com
192.168.10.13 client01-vlan10-vip
192.168.10.14 client02-vlan10-vip

 

Declare the VIPs in the Clusterware (using commands from the Grid home):

(root)# srvctl add vip -n node01 -A client01-vlan10-vip/255.255.255.0/vlan10_ipmp0 -k 10
(root)# srvctl add vip -n node02 -A client02-vlan10-vip/255.255.255.0/vlan10_ipmp0 -k 10
(root)# srvctl config vip -n node01
VIP exists: /client01-vlan10-vip/192.168.10.13/192.168.10.0/255.255.255.0/vlan10_ipmp0, hosting node node01
VIP exists: /node01-vip/10.x.y.z/10.x.y.0/255.255.255.0/bondeth0, hosting node node01

 

Then, as oracleGI, start the VIPs. It is a good practice to start the VIPs with a non-root user:

(oracleGI)$ srvctl start vip -i client01-vlan10-vip
(oracleGI)$ srvctl start vip -i client02-vlan10-vip
(oracleGI)$ srvctl status vip -n node01
VIP client01-vlan10-vip is enabled
VIP client01-vlan10-vip is running on node: node01
VIP node01-vip is enabled
VIP node01-vip is running on node: node01
(oracleGI)$ srvctl status vip -n node02
VIP client02-vlan10-vip is enabled
VIP client02-vlan10-vip is running on node: node02
VIP node02-vip is enabled
VIP node02-vip is running on node: node02

 
Once the VIPs are in place, create a listener in the Clusterware for the VLAN (using commands from the Grid home). The listener is named LIST_VLAN10, and the VLAN is identified by its network number (10):

(oracleGI)$ srvctl add listener -l LIST_VLAN10 -p 1521 -k 10 -s
(oracleGI)$ srvctl start listener -l LIST_VLAN10

(oracleGI)$ srvctl status listener -l LIST_VLAN10
Listener LIST_VLAN10 is enabled
Listener LIST_VLAN10 is running on node(s): node01,node02

 

As oracleDB, check that the listener is properly registered by running the following commands from the Grid home:

(oracleDB)$ lsnrctl status LIST_VLAN10

LSNRCTL for Solaris: Version 11.2.0.4.0 - Production on 05-OCT-2015 23:59:09

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST_VLAN10)))
STATUS of the LISTENER
------------------------
Alias                     LIST_VLAN10
Version                   TNSLSNR for Solaris: Version 11.2.0.4.0 - Production
Start Date                30-SEP-2015 06:59:02
Uptime                    5 days 17 hr. 0 min. 7 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File         /u01/app/11.2.0.4/grid/log/diag/tnslsnr/node01/list_vlan10/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LIST_VLAN10)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.13)(PORT=1521)))
The listener supports no services
The command completed successfully

 

The above output confirms that the listener is running, yet "The listener supports no services" means that the database instance(s) still have to be configured to use this listener.

Configure The Database To Use the VLAN Listener

Get the database name with the srvctl command. This value is assigned to SERVICE_NAME in tnsnames.ora:


(oracleDB)$ srvctl config database
  dbm01
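The per-node instance names, needed later for ORACLE_SID, can be listed in a similar way. The sketch below assumes the database name dbm01 returned above; the instance names shown (dbm011 and dbm012) match those used in the rest of this document:

(oracleDB)$ srvctl status database -d dbm01
Instance dbm011 is running on node node01
Instance dbm012 is running on node node02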

As oracleDB on node1, edit tnsnames.ora located in the network/admin directory of the Grid home.

Note: If the VLAN listener is only used with a specific standalone database then the tnsnames.ora from the Oracle home must be modified instead.

Use the vi ':set list' command to view invisible characters; these can create problems during the next step (executing the SQL statement) and should be removed (an alternative check is sketched after the node2 entries below). Add the following lines:

## BEGIN
DBM01_VLAN =
(DESCRIPTION =
        (LOAD_BALANCE=on)
        (ADDRESS = (PROTOCOL = TCP)(HOST = client01-vlan10-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = client02-vlan10-vip)(PORT = 1521))
        (CONNECT_DATA =
                (SERVER = DEDICATED)
                (SERVICE_NAME = dbm01)
        )
)

LIST_VLAN10_REMOTE =
(DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =  client02-vlan10-vip)(PORT = 1521))
)

LIST_VLAN10_LOCAL =
(DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =  client01-vlan10-vip)(PORT = 1521))
)

## END

 

 As oracleDB on node2, edit tnsnames.ora located in the network/admin directory of the Grid home. Add the following lines:

## BEGIN
DBM01_VLAN =
(DESCRIPTION =
        (LOAD_BALANCE=on)
        (ADDRESS = (PROTOCOL = TCP)(HOST = client01-vlan10-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = client02-vlan10-vip)(PORT = 1521))
        (CONNECT_DATA =
                (SERVER = DEDICATED)
                (SERVICE_NAME = dbm01)
        )
)

LIST_VLAN10_REMOTE =
(DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =  client01-vlan10-vip)(PORT = 1521))
)

LIST_VLAN10_LOCAL =
(DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =  client02-vlan10-vip)(PORT = 1521))
)

## END
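As mentioned above, invisible characters in tnsnames.ora can also be spotted outside of vi. The sketch below is an optional check; the path assumes the Grid home used elsewhere in this document:

# Optional check: tabs show as ^I and line ends as $; any other ^X sequence
# indicates a stray control character that should be removed.
(oracleDB)$ cat -vet /u01/app/11.2.0.4/grid/network/admin/tnsnames.ora | grep VLAN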

 

 As oracleDB on both nodes, modify the database to use the VLAN listener. Make sure to set ORACLE_SID to the proper value before running sqlplus:

(oracleDB@node01)$ export ORACLE_SID=dbm011
(oracleDB@node01)$ sqlplus / as sysdba
SQL> alter system set listener_networks='((NAME=network10)(LOCAL_LISTENER=LIST_VLAN10_LOCAL)(REMOTE_LISTENER=LIST_VLAN10_REMOTE))' scope=both;

(oracleDB@node02)$ export ORACLE_SID=dbm012
(oracleDB@node02)$ sqlplus / as sysdba
SQL> alter system set listener_networks='((NAME=network10)(LOCAL_LISTENER=LIST_VLAN10_LOCAL)(REMOTE_LISTENER=LIST_VLAN10_REMOTE))' scope=both;
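Before restarting the listener, the parameter change can be verified from the same sqlplus session; a minimal check:

SQL> show parameter listener_networks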

  
As oracleDB on node01, restart LIST_VLAN10 and check its status:
(oracleDB)$ srvctl stop listener -l LIST_VLAN10
(oracleDB)$ srvctl start listener -l LIST_VLAN10
(oracleDB)$ export TNS_ADMIN=/u01/app/11.2.x.y/grid/network/admin
(oracleDB)$ lsnrctl status LIST_VLAN10

LSNRCTL for Solaris: Version 11.2.0.4.0 - Production on 06-OCT-2015 03:34:40

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST_VLAN10)))
STATUS of the LISTENER
------------------------
Alias                     LIST_VLAN10
Version                   TNSLSNR for Solaris: Version 11.2.0.4.0 - Production
Start Date                06-OCT-2015 03:33:56
Uptime                    0 days 0 hr. 0 min. 43 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File         /u01/app/11.2.0.4/grid/log/diag/tnslsnr/node01/list_vlan10/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LIST_VLAN10)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.13)(PORT=1521)))
Services Summary...
Service "dbm01" has 2 instance(s).
  Instance "dbm011", status READY, has 1 handler(s) for this service...
  Instance "dbm012", status READY, has 1 handler(s) for this service...
The command completed successfully

 

References

<NOTE:1955833.1> - SuperCluster: Creating a DB listener on Infiniband

Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.