Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-1536860.1
Update Date: 2018-01-07

Solution Type: Technical Instruction

Solution 1536860.1: How to set up probe-based failure detection in IPMP on Oracle Solaris Systems


Related Items
  • Exalogic Elastic Cloud X3-2 Full Rack
  • Oracle Exalogic Elastic Cloud X2-2 Qtr Rack
  • Oracle Exalogic Elastic Cloud X2-2 Half Rack
  • Oracle Exalogic Elastic Cloud X2-2 Full Rack
  • Exalogic Elastic Cloud X3-2 Half Rack
  • Exalogic Elastic Cloud X3-2 Quarter Rack
Related Categories
  • PLA-Support>Sun Systems>SAND>Network>SN-SND: Sun Network Protocols and Routing


Steps to configure probe-based IPMP on Oracle Solaris Engineered Systems.

In this Document
Goal
Solution


Applies to:

Oracle Exalogic Elastic Cloud X2-2 Qtr Rack - Version All Versions to All Versions [Release All Releases]
Exalogic Elastic Cloud X3-2 Quarter Rack - Version All Versions to All Versions [Release All Releases]
Exalogic Elastic Cloud X3-2 Half Rack - Version All Versions to All Versions [Release All Releases]
Exalogic Elastic Cloud X3-2 Full Rack - Version All Versions to All Versions [Release All Releases]
Oracle Exalogic Elastic Cloud X2-2 Half Rack - Version All Versions to All Versions [Release All Releases]
Oracle Solaris on x86-64 (64-bit)
Oracle Solaris on x86 (32-bit)

Goal

 Steps to configure probe-based failure detection in IPMP on Oracle systems running Oracle Solaris 11 Express and above.

Solution

To ensure continuous availability of the network to send or receive traffic, IPMP performs failure detection on the IPMP group's underlying IP interfaces. Failed interfaces remain unusable until they are repaired. The remaining active interfaces continue to function, and any configured standby interfaces are deployed as needed.

A group failure occurs when all interfaces in an IPMP group appear to fail at the same time. In this case, no underlying interface is usable. Also, when all the target systems fail at the same time and probe-based failure detection is enabled, the in.mpathd daemon flushes all of its current target systems and probes for new target systems.

Types of Failure Detection in IPMP

The in.mpathd daemon handles the following types of failure detection:

  •     Link-based failure detection, if supported by the NIC driver
  •     Probe-based failure detection, when test addresses are configured
  •     Detection of interfaces that were missing at boot time
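
Whether these detection types are in effect on a running system can be checked with the 'ipmpstat' command; a quick sketch (full example output appears in step e below):

# ipmpstat -i
# ipmpstat -t

In the '-i' output, the LINK column reports link-based state and the PROBE column reports probe-based state; in the '-t' output, the MODE column shows how probe targets are selected (routes, mcast, or disabled).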

Below are the steps to configure probe-based failure detection in IPMP. More details can be found in the Oracle Solaris documentation:

Solaris 11 Express - https://docs.oracle.com/cd/E19963-01/html/821-1458/gfieq.html
Solaris 11 11/11 - https://docs.oracle.com/cd/E23824_01/html/821-1458/gfieq.html
Solaris 11.1 - https://docs.oracle.com/cd/E26502_01/html/E28993/gfieq.html

a. Configure the underlying interfaces for IPMP as follows:

Create 'hostname.bond0_0' and 'hostname.bond0_1' under /etc with the following details:
- assign a test IP address to each interface with the '-failover' flag, which marks the address NOFAILOVER
- add the interfaces to the IPMP group

Example:

# cat /etc/hostname.bond0_0
addif 192.168.10.215/24 -failover up
group ibipmp

# cat /etc/hostname.bond0_1
addif 192.168.10.216/24 -failover up
group ibipmp

b. Add the data address to the IPMP interface as follows:

Create 'hostname.ipmp0' under /etc with the following details:
- assign the data address to the interface and bring the interface up

Example:

# cat /etc/hostname.ipmp0
ipmp group ibipmp 192.168.10.8 up
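
Note: On Oracle Solaris 11, where persistent network configuration is managed with 'ipadm' rather than 'hostname.*' files, an equivalent setup can be created as follows. This is a minimal sketch assuming the same interface names and addresses as above; with 'ipadm' the group name is the IPMP interface name (ipmp0) rather than a separate label like 'ibipmp', and on Exalogic the platform tooling may manage network configuration, so verify before changing it by hand.

# ipadm create-ip bond0_0
# ipadm create-ip bond0_1
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i bond0_0 -i bond0_1 ipmp0
# ipadm create-addr -T static -a 192.168.10.8/24 ipmp0/data
# ipadm create-addr -T static -a 192.168.10.215/24 bond0_0/test
# ipadm create-addr -T static -a 192.168.10.216/24 bond0_1/test

Addresses placed on the underlying interfaces of an IPMP group are automatically treated as NOFAILOVER test addresses.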

c. Add a host route to the probe target; repeat for all target hosts. When explicit host routes are configured, in.mpathd uses them to select its probe targets.

Example:

# route add -host 192.168.10.19 192.168.10.19 -static
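
The route above does not persist across reboots, and the configuration is rebooted in step d. To make the routes persistent, the '-p' option can be used instead; a sketch, with 192.168.10.20 as a hypothetical second target:

# route -p add -host 192.168.10.19 192.168.10.19 -static
# route -p add -host 192.168.10.20 192.168.10.20 -static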

 
d. Modify /etc/default/mpathd as described in 'How to Configure the Behavior of the IPMP Daemon' in the Oracle Solaris documentation, then reboot for the changes to take effect.
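
For reference, the shipped /etc/default/mpathd contains the following tunables (the values shown are the defaults; the FDT of 100.00s in the 'ipmpstat -g' output later in this document suggests FAILURE_DETECTION_TIME was raised to 100000 in this example environment):

# cat /etc/default/mpathd
# Time taken by in.mpathd to detect an interface failure, in ms
FAILURE_DETECTION_TIME=10000
# Fail back to a repaired interface automatically: yes or no
FAILBACK=yes
# Track only interfaces that are configured in an IPMP group: yes or no
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes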

# reboot

 
e. After the reboot, log in and verify the IPMP status using the 'ifconfig' and 'ipmpstat' commands:

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 65520 index 2
        inet 192.168.10.8 netmask ffffff00 broadcast 192.168.10.255
        groupname ibipmp
bond0_0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 65520 index 3
        inet 0.0.0.0 netmask 0
        groupname ibipmp
        ipib 80:0:0:4a:fe:80:0:0:0:0:0:0:0:21:28:0:1:ce:dd:83
bond0_0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 65520 index 3
        inet 192.168.10.215 netmask ffffff00 broadcast 192.168.10.255
bond0_1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 65520 index 4
        inet 0.0.0.0 netmask 0
        groupname ibipmp
        ipib 80:0:0:4b:fe:80:0:0:0:0:0:0:0:21:28:0:1:ce:dd:84
bond0_1:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 65520 index 4
        inet 192.168.10.216 netmask ffffff00 broadcast 192.168.10.255
igb0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
        inet 10.133.41.45 netmask fffff800 broadcast 10.133.47.255
        ether 0:21:28:ef:0:82
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
igb0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 5
        inet6 ::/0
        ether 0:21:28:ef:0:82
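
In this output, the data address 192.168.10.8 resides on the ipmp0 IPMP interface, while the test addresses 192.168.10.215 and 192.168.10.216 sit on the logical interfaces bond0_0:1 and bond0_1:1 with the DEPRECATED and NOFAILOVER flags, meaning they are used for probe traffic only and never fail over.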

Example:

# ipmpstat -pn
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
11.76s    bond0_0     38     0.26ms    0.33ms    0.43ms    192.168.10.19
13.64s    bond0_1     39     0.23ms    0.39ms    0.51ms    192.168.10.19
28.12s    bond0_0     39     0.27ms    0.35ms    0.42ms    192.168.10.19
28.60s    bond0_1     40     0.32ms    0.37ms    0.49ms    192.168.10.19

# ipmpstat -tn
INTERFACE   MODE      TESTADDR            TARGETS
bond0_1     routes    192.168.10.216      192.168.10.19
bond0_0     routes    192.168.10.215      192.168.10.19
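
The MODE column reports 'routes', confirming that in.mpathd selected its probe targets from the host routes added in step c rather than by multicast discovery.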

# ipmpstat -in
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bond0_1     yes     ipmp1       -------   up        ok        ok
bond0_0     yes     ipmp1       --mb---   up        ok        ok

 
f. To verify failover, detach (offline) one interface with 'if_mpadm -d':

# if_mpadm -d bond0_1

# ipmpstat -gn
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp1       ibipmp      degraded  100.00s   bond0_0 [bond0_1]
ipmp0       ipmp0       failed    --        --
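
Here bond0_1 is shown in square brackets, marking it as unusable (offlined), and the group state is 'degraded' because only one interface remains active; FDT is the configured failure detection time.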

# ipmpstat -pn
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
4.80s     bond0_0     49     0.20ms    0.28ms    0.43ms    192.168.10.19
22.24s    bond0_0     50     0.29ms    0.37ms    0.42ms    192.168.10.19
35.00s    bond0_0     51     0.22ms    0.30ms    0.41ms    192.168.10.19
49.71s    bond0_0     52     0.23ms    0.31ms    0.39ms    192.168.10.19

 
g. Restore the interface with 'if_mpadm -r':

# if_mpadm -r bond0_1

# ipmpstat -gn
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp1       ibipmp      ok        100.00s   bond0_1 bond0_0
ipmp0       ipmp0       failed    --        --
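
Because FAILBACK defaults to 'yes' in /etc/default/mpathd, the repaired interface automatically resumes service and the group state returns to 'ok' with both interfaces active.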

# ipmpstat -pn
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
0.35s     bond0_1     50     0.27ms    0.37ms    0.44ms    192.168.10.19
2.40s     bond0_0     56     0.22ms    0.29ms    0.39ms    192.168.10.19
11.96s    bond0_1     51     0.28ms    0.70ms    0.47ms    192.168.10.19
21.33s    bond0_0     57     0.21ms    0.28ms    0.38ms    192.168.10.19

  Copyright © 2018 Oracle, Inc.  All rights reserved.