Solution Type: Technical Instruction
Solution 2063071.1: SuperCluster: Enabling 802.1Q VLAN
802.1Q Virtual LANs (VLANs) can be configured on the SuperCluster 10Gb Ethernet Client Access Network. VLAN is a standard technology for network segregation. Using VLAN in conjunction with Solaris Zones is a good starting point to create multiple segregated environments on a single SuperCluster Engineered System.
Applies to:
Oracle SuperCluster T5-8 Full Rack - Version All Versions and later
Solaris SPARC Operating System - Version 10 1/13 U11 to 11.2 [Release 10.0 to 11.0]
Oracle SuperCluster M6-32 Hardware - Version All Versions and later
SPARC SuperCluster T4-4 - Version All Versions and later
Oracle SuperCluster T5-8 Hardware - Version All Versions and later
Oracle Solaris on SPARC (64-bit)

Goal
This document explains how to create VLANs on the SuperCluster Client Access Network, in Application and Database Domains, and in Application and Database Zones.

Solution

Background
For payload traffic, the SuperCluster is accessed through a 10Gb Ethernet Client Access Network. If the SuperCluster needs to be reached from multiple subnets, Virtual LANs (VLANs) can be used. VLANs can be configured during the initial installation and configuration performed by Oracle Advanced Customer Services (ACS). This document explains how to create an additional VLAN once the SuperCluster is already configured. VLAN (a.k.a. VLAN tagging) is a standard Ethernet technology for network segregation. Using VLANs in conjunction with Solaris Zones is a good starting point for creating multiple segregated environments on a single SuperCluster.

Assumptions
The following applies to this document:
Note: The steps and examples use a convention for the OS prompt to indicate which user runs the different commands. When the prompt is (root)#, the command is run as root. When the prompt is (oracleGI)$, the command is run as the Oracle Grid Infrastructure software owner account. When the prompt is (oracleDB)$, the command is run as the Oracle RDBMS software owner account. In many cases, the Oracle Grid Infrastructure and RDBMS owners may be the same - often "oracle" for both.
Creating a VLAN in a Solaris 11 Dedicated Domain
Dedicated Domains are created by ACS during the initial install and configuration of the SuperCluster, as opposed to IO Domains, which can be created by the customer at any time during the SuperCluster life cycle (IO Domains require SuperCluster software 2.0 or higher). A Dedicated Domain that hosts an Oracle Database using the storage cells is a DB Domain, as opposed to a dedicated App Domain, which cannot access the storage cells over the InfiniBand interconnect. The following instructions apply to both DB and App Domains. Creating an additional VLAN in a Solaris 11 Dedicated Domain consists of the steps described below.
On SuperCluster, network interfaces are paired in IP Multipathing (IPMP) groups that provide redundancy. For the Client Access Network, these groups are named sc_ipmp0 in App Domains and bondeth0 in DB Domains. Note: On SuperCluster, the IPMP link-based failure detection mode is used to detect interface failures (as opposed to the probe-based detection mode).
Identify the 10GbE physical interfaces used by this group. As an example, in a DB Domain:
(root)# ipmpstat -g -o GROUP,INTERFACES | grep bondeth0
bondeth0     net4 (net5)
The bondeth0 group uses interfaces net4 and net5. The first is active while the second is standby. To balance the load between the two interfaces, the VLAN IPMP group can be configured the opposite way, with the active link on net5 and the standby on net4. Create a VLAN with ID 10 on each interface:
(root)# dladm create-vlan -l net4 -v 10 net4vlan10
(root)# dladm create-vlan -l net5 -v 10 net5vlan10
(root)# dladm show-vlan
LINK         VID   SVID  PVLAN-TYPE  FLAGS  OVER
net4vlan10   10    --    --          -----  net4
net5vlan10   10    --    --          -----  net5
That completes network layer 2 (the Ethernet VLAN). Next comes layer 3 (IP and IPMP). Create an IP interface on each VLAN link:
(root)# ipadm create-ip net4vlan10
(root)# ipadm create-ip net5vlan10
(root)# ipadm show-if
IFNAME       CLASS   STATE  ACTIVE  OVER
...
net4vlan10   ip      down   no      --
net5vlan10   ip      down   no      --
Create a vlan10_ipmp0 IPMP group and add the IP interfaces to it:
(root)# ipadm create-ipmp vlan10_ipmp0
(root)# ipadm add-ipmp -i net4vlan10,net5vlan10 vlan10_ipmp0
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0   net4vlan10 (net5vlan10)
Make net5vlan10 the active interface and net4vlan10 the standby one:
(root)# ipadm set-ifprop -p standby=off -m ip net5vlan10
(root)# ipadm set-ifprop -p standby=on -m ip net4vlan10
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0   net5vlan10 (net4vlan10)
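Optionally, the per-interface view confirms which VLAN link is active and which is the standby; a quick check (column layout varies slightly between Solaris 11 updates):
(root)# ipmpstat -i | grep vlan10
net5vlan10 should show yes in the ACTIVE column, while net4vlan10 should show no and carry the standby (s) flag.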
An IP address from the VLAN subnet can now be assigned to the IPMP group:
(root)# ipadm create-addr -T static -a 192.168.10.10/24 vlan10_ipmp0/v4
(root)# ipadm show-addr | grep vlan10
vlan10_ipmp0/v4   static   ok   192.168.10.10/24

Creating a VLAN in an IO Domain
IO Domains can be created at any time during the SuperCluster life cycle. They are created with the IO Domain Creation Tool (a.k.a. IODCT), which is part of SuperCluster software 2.0 or higher. At the time of writing, IODCT does not cover VLAN creation; the VLAN is created once the Domain itself exists. IO Domains run Solaris 11. An IO Domain that hosts an Oracle Database using the storage cells is a DB Domain, as opposed to an App IO Domain, which cannot access the storage cells over the InfiniBand interconnect. The following instructions apply to both DB and App Domains, whether the VLAN is created directly in the IO Domain or in a Solaris Zone running in the IO Domain. IO Domains are connected to the Client Access Network through SR-IOV Virtual Functions (VFs). These VFs must be modified to enable not only the VLAN ID but also the creation of VLAN interfaces on top of them in the IO Domain. Creating a VLAN in an IO Domain consists of the steps described below.
The steps performed inside the IO Domain itself (creating the VLAN interfaces, the IPMP group, and the IP address) are not required if the VLAN is to be created in a Solaris Zone inside the IO Domain.

Identify the Primary Domain hosting the IO Domain:
(root)# virtinfo -a
Domain role: LDoms guest I/O
Domain name: ssccn1-io-dom01
Domain UUID: 074846e8-246e-4f16-8e21-e4ae3183f6bd
Control domain: primdom01
Chassis serial#: AK88888887
In this example, the Primary Domain is named primdom01. Note that this is a Domain name and may not match the hostname of the Primary Domain. If needed, refer to the SuperCluster configuration worksheet to find the Primary Domain hostname and connect to it.

Identify the two VFs that provide access to the Client Access Network
Connect to the Primary Domain and run the ldm list-io command. Search for the IO Domain name in the command's output:
(root@primdom01)# ldm list-io | grep ssccn1-io-dom01 | grep VNET
/SYS/RCSA/PCIE6/IOVNET.PF0.VF2   VF   pci_12   ssccn1-io-dom01
/SYS/RCSA/PCIE6/IOVNET.PF1.VF2   VF   pci_12   ssccn1-io-dom01
The IO Domain uses two VFs, both named VF2, each sitting on a different physical function (PF0 and PF1).

Stop the IO Domain and modify both VFs, setting the VLAN ID and reserving additional MAC address slots, then restart the Domain:
(root@primdom01)# ldm stop ssccn1-io-dom01
(root@primdom01)# ldm set-io vid=10 alt-mac-addrs=auto,auto,auto /SYS/RCSA/PCIE6/IOVNET.PF0.VF2
(root@primdom01)# ldm set-io vid=10 alt-mac-addrs=auto,auto,auto /SYS/RCSA/PCIE6/IOVNET.PF1.VF2
(root@primdom01)# ldm start ssccn1-io-dom01
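Optionally, the new VF settings can be reviewed from the Primary Domain; a sketch, assuming this LDoms release accepts the VF name as an argument to ldm list-io (the detailed output should reflect the VLAN ID and the reserved MAC address slots):
(root@primdom01)# ldm list-io -l /SYS/RCSA/PCIE6/IOVNET.PF0.VF2
(root@primdom01)# ldm list-io -l /SYS/RCSA/PCIE6/IOVNET.PF1.VF2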
Note that both the vid and the alt-mac-addrs attributes are set on the VFs. The latter enables the creation of VLANs or VNICs on top of the VF by reserving slots for additional MAC addresses: one slot is reserved for each 'auto' keyword. In this example, three slots are reserved. Each slot can be used to create a VLAN in the IO Domain, or a VLAN in a Solaris Zone hosted by the IO Domain, so up to three Solaris Zones connected to VLAN(s) could be created. From this point on, the instructions are similar to the ones used to create a VLAN in a Dedicated Domain.

Identify the interfaces to the Client Access Network
On SuperCluster, network interfaces are paired in IP Multipathing (IPMP) groups that provide network redundancy. For the Client Access Network, these groups are named sc_ipmp0 in App Domains and bondeth0 in DB Domains. These groups can be used to identify the VFs. As an example, in an App IO Domain:
(root)# ipmpstat -g -o GROUP,INTERFACES | grep sc_ipmp0
sc_ipmp0     net0 (net1)
The sc_ipmp0 group uses interfaces net0 (active) and net1 (standby). These are the VFs that have just been modified. Create VLAN 10 on top of both VFs and check the result:
(root)# dladm create-vlan -l net0 -v 10 net0vlan10
(root)# dladm create-vlan -l net1 -v 10 net1vlan10
(root)# dladm show-vlan
That completes network layer 2 (the Ethernet VLAN). Next comes layer 3 (IP and IPMP). Create an IP interface on each VLAN link:
(root)# ipadm create-ip net0vlan10
(root)# ipadm create-ip net1vlan10
(root)# ipadm show-if
IFNAME       CLASS   STATE  ACTIVE  OVER
...
net0vlan10   ip      down   no      --
net1vlan10   ip      down   no      --
Create a vlan10_ipmp0 IPMP group and add the IP interfaces to it:
(root)# ipadm create-ipmp vlan10_ipmp0
(root)# ipadm add-ipmp -i net0vlan10,net1vlan10 vlan10_ipmp0
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0   net0vlan10 (net1vlan10)
Make net1vlan10 the active interface and net0vlan10 the standby one:
(root)# ipadm set-ifprop -p standby=off -m ip net1vlan10
(root)# ipadm set-ifprop -p standby=on -m ip net0vlan10
(root)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0   net1vlan10 (net0vlan10)
An IP address from the VLAN subnet can now be assigned to the IPMP group:
(root)# ipadm create-addr -T static -a 192.168.10.10/24 vlan10_ipmp0/v4
(root)# ipadm show-addr | grep vlan10
vlan10_ipmp0/v4   static   ok   192.168.10.10/24

Creating a VLAN in a Solaris 11 Zone
Creating an additional VLAN in a Solaris 11 Zone consists of the steps described below.
If the Solaris Zone is created in an IO Domain, make sure that the VLAN is enabled in the IO Domain first; refer to the section Creating a VLAN in an IO Domain above. On SuperCluster, network interfaces are paired in IP Multipathing (IPMP) groups that provide network redundancy. For the Client Access Network, they are named sc_ipmp0 in App Domains and bondeth0 in DB Domains. Identify the 10GbE physical interfaces used by this group. As an example, in an App Domain:
(root)# ipmpstat -g -o GROUP,INTERFACES | grep sc_ipmp0
sc_ipmp0     net0 (net1)
The sc_ipmp0 group uses interfaces net0 and net1. The first is active while the second is standby. To balance the load between the two interfaces, the VLAN IPMP group in the Zone can be configured the opposite way, with the active link on net1 and the standby on net0. Using anet sections, declare two Virtual NICs (VNICs) in the Zone configuration, namely net0vlan10 and net1vlan10. The physical interface on top of which a VNIC sits is declared with the lower-link attribute. Ensure that the VNICs are actually VLAN-tagged by setting their vlan-id attribute. In this example the Zone is named pg1Zone1:
(root)# zonecfg -z pg1Zone1
zonecfg:pg1Zone1> add anet
zonecfg:pg1Zone1:anet> set linkname=net0vlan10
zonecfg:pg1Zone1:anet> set lower-link=net0
zonecfg:pg1Zone1:anet> set vlan-id=10
zonecfg:pg1Zone1:anet> end
zonecfg:pg1Zone1> add anet
zonecfg:pg1Zone1:anet> set linkname=net1vlan10
zonecfg:pg1Zone1:anet> set lower-link=net1
zonecfg:pg1Zone1:anet> set vlan-id=10
zonecfg:pg1Zone1:anet> end
zonecfg:pg1Zone1> exit
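Before rebooting the Zone, the new anet resources can be reviewed to catch typos; a quick check using the same zone name:
(root)# zonecfg -z pg1Zone1 info anet
Both anet entries should show the expected lower-link and vlan-id=10.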
If the Zone is a DB Zone with Clusterware already running, log in to it with zlogin before rebooting it, to stop the Clusterware and disable its automatic restart:
(root)# zlogin pg1Zone1
[Connected to zone 'pg1Zone1' pts/4]
Oracle Corporation      SunOS 5.11      11.1    April 2014
(root@pg1Zone1)# crsctl stop crs -f
(root@pg1Zone1)# crsctl disable crs
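Optionally, confirm the stack is down before rebooting; a quick check (the exact messages vary by Clusterware version):
(root@pg1Zone1)# crsctl check crs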
Reboot the Zone with zoneadm and log in to it with zlogin:
(root)# zoneadm -z pg1Zone1 reboot
(root)# zlogin pg1Zone1
[Connected to zone 'pg1Zone1' pts/4]
Oracle Corporation      SunOS 5.11      11.1    April 2014
(root@pg1Zone1)# dladm show-link
net0vlan10   vnic   1500   up     ?
net1vlan10   vnic   1500   down   ?
The two VNICs are now visible in the Zone. Still in the Zone as root, create a vlan10_ipmp0 IPMP group and add the VNICs to it:
(root@pg1Zone1)# ipadm create-ipmp vlan10_ipmp0
(root@pg1Zone1)# ipadm create-ip net0vlan10
(root@pg1Zone1)# ipadm create-ip net1vlan10
(root@pg1Zone1)# ipadm add-ipmp -i net0vlan10,net1vlan10 vlan10_ipmp0
(root@pg1Zone1)# ipadm set-ifprop -p standby=on -m ip net0vlan10
(root@pg1Zone1)# ipmpstat -g -o GROUP,INTERFACES | grep vlan10_ipmp0
vlan10_ipmp0   net1vlan10 [net0vlan10]
An IP address from the VLAN subnet can now be assigned to the IPMP group:
(root@pg1Zone1)# ipadm create-addr -T static -a 192.168.10.10/24 vlan10_ipmp0/v4
(root@pg1Zone1)# ipadm show-addr
ADDROBJ           TYPE     STATE   ADDR
...
vlan10_ipmp0/v4   static   ok      192.168.10.10/24
If the Zone is a DB Zone, re-enable and restart the Clusterware:
(root@pg1Zone1)# crsctl enable crs
(root@pg1Zone1)# crsctl start crs

Creating a VLAN in a Solaris 10 Dedicated Domain
Creating an additional VLAN in a Solaris 10 Dedicated Domain consists of the steps described below.
With Solaris 10, the two physical interfaces for the Client Access Network are identified as follows:
(root)# dladm show-link | grep ixgbe
ixgbe0       type: non-vlan   mtu: 1500   device: ixgbe0
ixgbe1       type: non-vlan   mtu: 1500   device: ixgbe1
ixgbe is the name of the 10GbE device driver. In this example the two physical interfaces are listed in column one: ixgbe0 and ixgbe1. If more than two interfaces are listed, check which ones belong to the sc_ipmp0 IPMP group. This is the IPMP group that provides redundancy for the Client Access Network in App Domains:
(root)# ifconfig ixgbe0
(root)# ifconfig ixgbe1
In this example both interfaces have sc_ipmp0 as their groupname, as expected. Also, ixgbe1 is in standby mode while ixgbe0 is active. To balance the load between the two interfaces, the VLAN IPMP group can be configured the opposite way, with the active link on ixgbe1 and the standby on ixgbe0. The next step is to create VLAN interfaces on both 10GbE links and include them in a vlan10_ipmp0 IPMP group. On Solaris 10, a tagged VLAN interface is named after the driver with an instance number of (VLAN ID x 1000) + device instance: for VLAN 10, ixgbe10000 is the tagged interface over ixgbe0 and ixgbe10001 the one over ixgbe1.
(root)# ifconfig ixgbe10001 plumb
(root)# ifconfig ixgbe10000 plumb
(root)# ifconfig ixgbe10001 netmask + broadcast + group vlan10_ipmp0
(root)# ifconfig ixgbe10000 standby group vlan10_ipmp0
(root)# dladm show-link | grep ixgbe
ixgbe0       type: non-vlan   mtu: 1500   device: ixgbe0
ixgbe10000   type: vlan 10    mtu: 1500   device: ixgbe0
ixgbe1       type: non-vlan   mtu: 1500   device: ixgbe1
ixgbe10001   type: vlan 10    mtu: 1500   device: ixgbe1
Make the configuration persistent across reboots by creating the matching files in /etc, namely /etc/hostname.ixgbe10001 and /etc/hostname.ixgbe10000.
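The exact contents of these files depend on the subnet in use; a minimal sketch, assuming the example address 192.168.10.10/24 and the vlan10_ipmp0 group used above (standard /etc/hostname.<interface> keywords):
(root)# cat /etc/hostname.ixgbe10001
192.168.10.10 netmask 255.255.255.0 broadcast + group vlan10_ipmp0 up
(root)# cat /etc/hostname.ixgbe10000
group vlan10_ipmp0 standby up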
An IP address from the VLAN subnet can now be assigned to the active interface:
(root)# ifconfig ixgbe10001 192.168.10.10/24 up
(root)# ifconfig ixgbe10000
The IPMP failover mechanism can be checked with if_mpadm: detach the active interface and check the result:
(root)# if_mpadm -d ixgbe10001
The IP address initially assigned to ixgbe10001 has failed over to ixgbe10000 through the logical instance ixgbe10000:1. To get back to the initial state, reattach ixgbe10001:
(root)# if_mpadm -r ixgbe10001
The IP address is back on ixgbe10001 and the logical instance on the standby link, ixgbe10000:1, has been removed.

Configuring Clusterware, Listener And Database With The VLAN
If an additional VLAN is configured in a DB Domain or Zone, some additional work is required for the database to receive requests on this VLAN. The section below covers these steps for a 2-node RAC database (i.e. two DB Domains or two DB Zones). The examples assume VLAN ID 10 with subnet 192.168.10.0/24, cluster nodes node01 and node02 whose hostnames on the VLAN are client01-vlan10 and client02-vlan10, and a database named dbm01.
Ensure that the node names and IP addresses of the two nodes on the VLAN are registered in Name Services or in /etc/hosts:
(root)# nslookup client01-vlan10
Server:  w.x.y.z
Address: w.x.y.z#53
** server can't find client01-vlan10: NXDOMAIN
(root)# getent hosts client01-vlan10
192.168.10.11   client01-vlan10 client01-vlan10.us.oracle.com
In this example the name of the first node is not in DNS but is registered in /etc/hosts. Repeat this check for the second node.

Register The VLAN Subnet In The Clusterware
Connect to the first RAC node as oracleGI and ensure that the IPMP group set up previously is visible to the Clusterware. Run the command(s) from the bin directory of the Grid home (the usual Grid home is /u01/app/11.2.x.y/grid/bin):
(oracleGI)$ oifcfg iflist
bondeth0      10.x.y.0
bondmgt0      10.x.z.0
bondib0       192.168.8.0
stor_ipmp0    192.168.28.0
vlan10_ipmp0  192.168.10.0
vlan10_ipmp0 is visible to the Clusterware. Good. Check the network(s) already configured in the Clusterware. Only one network (network number 1) exists for most default installations:
(oracleGI)$ srvctl config network
Network exists: 1/10.129.112.0/255.255.240.0/bondeth0, type static
For the next few steps, become root or you will see the error message PRCN-2018: Current user oracle is not a privileged user. Register the VLAN subnet in the Clusterware using commands from the Grid home:
(root)# srvctl add network -k 10 -S 192.168.10.0/255.255.255.0/vlan10_ipmp0 -w static -v
(root)# crsctl start res ora.net10.network
(root)# srvctl config network
Network exists: 1/10.129.112.0/255.255.240.0/bondeth0, type static
Network exists: 10/192.168.10.0/255.255.255.0/vlan10_ipmp0, type static
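Optionally, confirm that the new network resource is online on both nodes; a quick check, assuming the ora.net10.network resource name used above:
(root)# crsctl stat res ora.net10.network -t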
Each RAC node uses a single Virtual IP (VIP) on the VLAN, for a total of two VIPs. In this example the VIPs are named client01-vlan10-vip and client02-vlan10-vip, using IP addresses 192.168.10.13 and 192.168.10.14. As root on both RAC nodes, declare the VIPs in /etc/hosts:
192.168.10.11   client01-vlan10 client01-vlan10.us.oracle.com
192.168.10.12   client02-vlan10 client02-vlan10.us.oracle.com
192.168.10.13   client01-vlan10-vip
192.168.10.14   client02-vlan10-vip
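Optionally, verify that the new VIP names resolve on both nodes; a quick check:
(root)# getent hosts client01-vlan10-vip
(root)# getent hosts client02-vlan10-vip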
Declare the VIPs in the Clusterware (using commands from the Grid home):
(root)# srvctl add vip -n node01 -A client01-vlan10-vip/255.255.255.0/vlan10_ipmp0 -k 10
(root)# srvctl add vip -n node02 -A client02-vlan10-vip/255.255.255.0/vlan10_ipmp0 -k 10
(root)# srvctl config vip -n node01
VIP exists: /client01-vlan10-vip/192.168.10.13/192.168.10.0/255.255.255.0/vlan10_ipmp0, hosting node node01
VIP exists: /node01-vip/10.x.y.z/10.x.y.0/255.255.255.0/bondeth0, hosting node node01
Then, as oracleGI, start the VIPs. It is a good practice to start the VIPs with a non-root user:
(oracleGI)$ srvctl start vip -i client01-vlan10-vip
(oracleGI)$ srvctl start vip -i client02-vlan10-vip
(oracleGI)$ srvctl status vip -n node01
VIP client01-vlan10-vip is enabled
VIP client01-vlan10-vip is running on node: node01
VIP node01-vip is enabled
VIP node01-vip is running on node: node01
(oracleGI)$ srvctl status vip -n node02
VIP client02-vlan10-vip is enabled
VIP client02-vlan10-vip is running on node: node02
VIP node02-vip is enabled
VIP node02-vip is running on node: node02
As oracleGI, create and start a listener on the VLAN network (network number 10):
(oracleGI)$ srvctl add listener -l LIST_VLAN10 -p 1521 -k 10 -s
(oracleGI)$ srvctl start listener -l LIST_VLAN10
(oracleGI)$ srvctl status listener -l LIST_VLAN10
Listener LIST_VLAN10 is enabled
Listener LIST_VLAN10 is running on node(s): node01,node02
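To double-check which network number and port the new listener is bound to, its Clusterware configuration can be displayed; a quick check:
(oracleGI)$ srvctl config listener -l LIST_VLAN10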
As oracleDB, check that the listener is properly registered by running the following command from the Grid home:
(oracleDB)$ lsnrctl status LIST_VLAN10
LSNRCTL for Solaris: Version 11.2.0.4.0 - Production on 05-OCT-2015 23:59:09
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST_VLAN10)))
STATUS of the LISTENER
------------------------
Alias                     LIST_VLAN10
Version                   TNSLSNR for Solaris: Version 11.2.0.4.0 - Production
Start Date                30-SEP-2015 06:59:02
Uptime                    5 days 17 hr. 0 min. 7 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File         /u01/app/11.2.0.4/grid/log/diag/tnslsnr/node01/list_vlan10/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LIST_VLAN10)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.13)(PORT=1521)))
The listener supports no services
The command completed successfully
The above output confirms that the listener is running, but "The listener supports no services" means that the database instance(s) still have to be configured to use this listener.

Configure The Database To Use The VLAN Listener
Get the database name with the srvctl command; this value is assigned to SERVICE_NAME in tnsnames.ora:
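A minimal sketch of this check, assuming the database used in this example is registered as dbm01:
(oracleDB)$ srvctl config database
dbm01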
As oracleDB on node01, edit tnsnames.ora located in the network/admin directory of the Grid home and add the following lines. Note: If the VLAN listener is only used with a specific standalone database, then the tnsnames.ora from the Oracle home must be modified instead. Use the vi 'set list' command to view invisible characters, which can cause problems.
## BEGIN
DBM01_VLAN =
  (DESCRIPTION =
    (LOAD_BALANCE=on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = client01-vlan10-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = client02-vlan10-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dbm01)
    )
  )

LIST_VLAN10_REMOTE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = client02-vlan10-vip)(PORT = 1521))
  )

LIST_VLAN10_LOCAL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = client01-vlan10-vip)(PORT = 1521))
  )
## END
As oracleDB on node02, edit tnsnames.ora located in the network/admin directory of the Grid home. Add the following lines:
## BEGIN
DBM01_VLAN =
  (DESCRIPTION =
    (LOAD_BALANCE=on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = client01-vlan10-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = client02-vlan10-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dbm01)
    )
  )

LIST_VLAN10_REMOTE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = client01-vlan10-vip)(PORT = 1521))
  )

LIST_VLAN10_LOCAL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = client02-vlan10-vip)(PORT = 1521))
  )
## END
As oracleDB on both nodes, modify the database to use the VLAN listener. Make sure to set ORACLE_SID to the proper value before running sqlplus:
(oracleDB@node01)$ export ORACLE_SID=dbm011
(oracleDB@node01)$ sqlplus / as sysdba
SQL> alter system set listener_networks='((NAME=network10)(LOCAL_LISTENER=LIST_VLAN10_LOCAL)(REMOTE_LISTENER=LIST_VLAN10_REMOTE))' scope=both;

(oracleDB@node02)$ export ORACLE_SID=dbm012
(oracleDB@node02)$ sqlplus / as sysdba
SQL> alter system set listener_networks='((NAME=network10)(LOCAL_LISTENER=LIST_VLAN10_LOCAL)(REMOTE_LISTENER=LIST_VLAN10_REMOTE))' scope=both;
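Once the parameter is set on both instances, the path can be verified end to end. A minimal sketch using the DBM01_VLAN alias defined above (service registration is dynamic, so it may take a moment; it can be forced from each instance with ALTER SYSTEM REGISTER):
(oracleDB)$ lsnrctl status LIST_VLAN10
(oracleDB)$ tnsping DBM01_VLAN
(oracleDB)$ sqlplus system@DBM01_VLAN
The lsnrctl output should now report the dbm01 service instead of "The listener supports no services", and the sqlplus connection should come in through one of the VLAN VIPs.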
References
<NOTE:1955833.1> - SuperCluster: Creating a DB listener on Infiniband