Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Solution Type: Technical Instruction
Sure Solution 1953915.1: Elastic Configuration on Exadata
Applies to:
Oracle Exadata Storage Server Software - Version 12.1.2.1.0 and later
Oracle Server X5-2L - Version All Versions and later
Exadata X4-8 Hardware - Version All Versions and later
Oracle Server X5-2 - Version All Versions and later
Linux x86-64

Goal
This document provides details on configuring Exadata DB nodes and cells using the elastic configuration process.

Solution

Introduction
This document provides an overview of the elastic configuration process. Elastic configuration is new in Exadata version 12.1.2.1.0 and replaces the applyconfig process used in previous versions. Elastic configuration applies to all rack configurations: racks ordered with a standard number of DB nodes and cells (such as a traditional quarter or half rack) as well as rack configurations that feature additional DB nodes and cells. It is now the standard methodology for all new deployments and applies to X5, X5-2L, and X4-8b (with X5 storage cells) servers. The same process is also used to add additional DB servers or cells to an existing configuration.

Overview of the Elastic Configuration Process
The elastic configuration process allows initial IP addresses to be assigned to database servers and cells, regardless of the exact customer configuration ordered. The customer-specific configuration can then be applied to the nodes.

Every Exadata system has a pre-defined method for cabling nodes to IB switch ports, so there is a fixed mapping from each node's location in the rack to the ports of the IB switches. Assuming the rack is always populated following this map, a node's rack unit location can be identified by querying the IB fabric to determine which IB switch port the node is connected to. Using this information, nodes can be allocated IP addresses based on their rack unit location, with nodes lower in the rack receiving lower IP addresses.
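As a rough illustration of the allocation rule just described, the sketch below sorts nodes by rack unit and hands out sequential addresses from a 172.16.2.x pool. The function name, the input format, and the base offset of 36 are invented for this example; the real logic lives inside the elastic configuration scripts.

```shell
# Sketch only: assign sequential IPs so that nodes lower in the rack get
# lower addresses. Input lines are "<name> <rack_unit>"; the base offset 36
# and the 172.16.2.x subnet mirror the example output later in this note.
assign_ips() {
  local base=36 i=1
  sort -k2,2n | while read -r name ru; do
    echo "$name 172.16.2.$((base + i))"
    i=$((i + 1))
  done
}

# Nodes arrive in arbitrary order; addresses come out in rack-unit order.
printf 'cellA 2\ndbB 17\ncellC 8\n' | assign_ips
```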
Below is a detailed look at each of the steps. The specific commands can be found in the "Detailed Steps" section further down.

Detailed Explanation of Steps
Port  RU_Loc  NodeType
10    17      db elastic
2     2       cell elastic
3     8       cell elastic
4     6       cell elastic
8     16      db elastic
9     18      db elastic
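In script form, the fixed map above is just a lookup table. The sketch below hard-codes the example rows for illustration; a real deployment consults the model-specific map inside the elastic configuration scripts, not a hand-maintained table.

```shell
# Hypothetical lookup built from the example table above (bash 4+ is
# assumed for associative arrays). Keys are IB switch ports; values are
# the rack unit location and node type.
declare -A port_to_ru=(   [10]=17 [2]=2    [3]=8    [4]=6    [8]=16 [9]=18 )
declare -A port_to_type=( [10]=db [2]=cell [3]=cell [4]=cell [8]=db [9]=db )

port=3
echo "IB port $port -> RU ${port_to_ru[$port]}, ${port_to_type[$port]} elastic"
```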
Detailed Example for a New Rack
[root@node1 ~]# ibhosts
Ca : 0x0010e00001486fb8 ports 2 "node10 elasticNode 172.16.2.46,172.16.2.46 ETH0"
Ca : 0x0010e00001491228 ports 2 "node9 elasticNode 172.16.2.45,172.16.2.45 ETH0"
Ca : 0x0010e000014844f8 ports 2 "node8 elasticNode 172.16.2.44,172.16.2.44 ETH0"
Ca : 0x0010e00001488218 ports 2 "node4 elasticNode 172.16.2.40,172.16.2.40 ETH0"
Ca : 0x0010e000014908b8 ports 2 "node2 elasticNode 172.16.2.38,172.16.2.38 ETH0"
Ca : 0x0010e0000148ca68 ports 2 "node1 elasticNode 172.16.2.37,172.16.2.37 ETH0"
Ca : 0x0010e00001485fd8 ports 2 "node3 elasticNode 172.16.2.39,172.16.2.39 ETH0"

All nodes should appear in ibhosts with a node description of "hostname elasticNode IP address ETH0". A further check can be made using "ip addr" to ensure a 172.16 IP address has been assigned to eth0.

Note: ETH0 appears in the "ibhosts" output only during initial elastic configuration. Once the elastic configuration process has completed, "HCA-1" is shown instead.

6. (all dbnodes, only if OVM) switch_to_ovm.sh (systems will reboot)
# /opt/oracle.SupportTools/switch_to_ovm.sh
2014-12-07 11:58:36 -0800 [INFO] Switch to DOM0 system partition /dev/VGExaDb/LVDbSys3 (/dev/mapper/VGExaDb-LVDbSys3)
2014-12-07 11:58:36 -0800 [INFO] Active system device: /dev/mapper/VGExaDb-LVDbSys1
2014-12-07 11:58:36 -0800 [INFO] Active system device in boot area: /dev/mapper/VGExaDb-LVDbSys1
2014-12-07 11:58:36 -0800 [INFO] Set active systen device to /dev/VGExaDb/LVDbSys3 in /boot/I_am_hd_boot
2014-12-07 11:58:36 -0800 [INFO] Reboot has been initiated to switch to the DOM0 system partition
After switching to OVM, the 172.16 IP addresses will be on vmeth0 (rather than eth0 as before).

[root@node8 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 90:e2:ba:7a:0a:80 brd ff:ff:ff:ff:ff:ff
3: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 90:e2:ba:7a:0a:81 brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmeth0 state UP qlen 1000
    link/ether 00:10:e0:62:c0:2c brd ff:ff:ff:ff:ff:ff
5: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:10:e0:62:c0:2d brd ff:ff:ff:ff:ff:ff
6: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:10:e0:62:c0:2e brd ff:ff:ff:ff:ff:ff
7: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:10:e0:62:c0:2f brd ff:ff:ff:ff:ff:ff
8: ib0: <BROADCAST,MULTICAST> mtu 2044 qdisc noop state DOWN qlen 1024
    link/infiniband 80:00:05:48:fe:80:00:00:00:00:00:00:00:10:e0:00:01:48:44:f9 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
9: ib1: <BROADCAST,MULTICAST> mtu 2044 qdisc noop state DOWN qlen 1024
    link/infiniband 80:00:05:49:fe:80:00:00:00:00:00:00:00:10:e0:00:01:48:44:fa brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
10: vmeth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:10:e0:62:c0:2c brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.44/21 brd 172.16.7.255 scope global vmeth0
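The two spot checks described above, an elasticNode description for every ibhosts entry, and a 172.16 address on eth0 (or vmeth0 after the switch to OVM), can be sketched as small greps. The function names and sample strings below are illustrative only.

```shell
# Succeeds only if every line carries the "elasticNode ... ETH0" description
# expected during initial elastic configuration (pipe ibhosts output in).
elastic_descs_ok() {
  ! grep -qv 'elasticNode 172\.16\..* ETH0'
}

# Succeeds if the given "ip addr" output shows a 172.16 address.
has_priv_ip() {
  grep -q 'inet 172\.16\.'
}

# Sample lines taken from the example output above.
sample_ib='Ca : 0x0010e0000148ca68 ports 2 "node1 elasticNode 172.16.2.37,172.16.2.37 ETH0"'
sample_ip='inet 172.16.1.44/21 brd 172.16.7.255 scope global vmeth0'

printf '%s\n' "$sample_ib" | elastic_descs_ok && echo "ibhosts check passed"
printf '%s\n' "$sample_ip" | has_priv_ip && echo "ip addr check passed"
```

On a live system the same functions would be fed from `ibhosts` and `ip addr` directly.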
7. (all dbnodes) reclaimdisks.sh -free -reclaim

# /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim
Model is ORACLE SERVER X5-2
Number of LSI controllers: 1
Physical disks found: 4 (252:0 252:1 252:2 252:3)
Logical drives found: 1
Linux logical drive: 0
RAID Level for the Linux logical drive: 5
Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
Dedicated Hot Spares for the Linux logical drive: 0
Global Hot Spares: 0
[INFO ] Check for DOM0 system disk
[INFO ] Check for DOM0 with inactive Linux system disk
[INFO ] Valid DOM0 with inactive Linux system disk is detected
[INFO ] Number of partitions on the system device /dev/sda: 4
[INFO ] Higher partition number on the system device /dev/sda: 4
[INFO ] Last sector on the system device /dev/sda: 3509759999
[INFO ] End sector of the last partition on the system device /dev/sda: 3509759000
[INFO ] Unmount /EXAVMIMAGES from ocfs2 partition on /dev/sda3
[INFO ] Mount ocfs2 partition /dev/sda3 to /EXAVMIMAGES
[INFO ] Remove inactive system logical volume /dev/VGExaDb/LVDbSys1
[INFO ] Remove logical volume /dev/VGExaDbOra/LVDbOra1
[INFO ] Remove volume group VGExaDbOra
[INFO ] Remove physical volume /dev/sda4
[INFO ] Remove partition /dev/sda4
[INFO ] Re-calculate end sector of the last partition after removing of /dev/sda4 partition
[INFO ] End sector of the last partition on the system device /dev/sda: 3300035608
[INFO ] Check for existing first boot system image /EXAVMIMAGES/System.first.boot.12.1.2.1.0.141205.2.img
[INFO ] Saving /EXAVMIMAGES/System.first.boot.12.1.2.1.0.141205.2.img in /var/log/exadatatmp ...
[INFO ] First boot system image saved in /var/log/exadatatmp/System.first.boot.12.1.2.1.0.141205.2.img
[INFO ] Unmount /EXAVMIMAGES from /dev/sda3
[INFO ] Remove partition /dev/sda3
[INFO ] Re-calculate end sector of the last partition after removing of /dev/sda3 partition
[INFO ] End sector of the last partition on the system device /dev/sda: 240132159
[INFO ] Create primary ocfs2 partition 3 using 240132160 3509758999
[INFO ] Create ocfs2 partition on /dev/sda3
[INFO ] Mount ocfs2 partition on /dev/sda3 to /EXAVMIMAGES
[INFO ] Restoring /var/log/exadatatmp/System.first.boot.12.1.2.1.0.141205.2.img into /EXAVMIMAGES ...
[INFO ] Logical volume LVDbSys2 exists in volume group VGExaDb
[INFO ] Grub version in /boot/grub/grub.stage.version: 0.97-81.0.1.el6
[INFO ] Grub rpm version: 0.97-13.10.0.1.el5
[INFO ] Copying /usr/share/grub/x86_64-redhat/* to /boot/grub ...
[INFO ] Create filesystem on device /dev/sda1
[INFO ] Tune filesystem on device /dev/sda1

GNU GRUB version 0.97 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename.]

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... failed (this is not fatal)
 Running "embed /grub/e2fs_stage1_5 (hd0,0)"... failed (this is not fatal)
 Running "install /grub/stage1 (hd0) /grub/stage2 p /grub/grub.conf "... succeeded
Done.
grub> quit

9. (first dbnode) cd /root/install/linux-x64
Applying Elastic Config...
Applying Elastic configuration...
Searching Subnet 172.16.2.x..............
8 live IPs in 172.16.2.x..........................................
Exadata node found 172.16.2.44..
Configuring node : 172.16.2.46..........................................
Done Configuring node : 172.16.2.46
Configuring node : 172.16.2.45..........................................
Done Configuring node : 172.16.2.45
Configuring node : 172.16.2.40..........................................
Done Configuring node : 172.16.2.40
Configuring node : 172.16.2.39..........................................
Done Configuring node : 172.16.2.39
Configuring node : 172.16.2.38..........................................
Done Configuring node : 172.16.2.38
Configuring node : 172.16.2.37..........................................
Serial console stopped.
Connection to scas08adm01-c closed.
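A quick way to confirm from this output that every node completed is to count the completion lines. This is only a sketch; the grep pattern simply matches the "Done Configuring node" lines shown above, applied to a saved copy of the output.

```shell
# Count completed nodes in captured applyElasticConfig output; compare the
# result against the number of nodes you expect in the rack.
count_configured() {
  grep -c 'Done Configuring node'
}

sample='Configuring node : 172.16.2.46
Done Configuring node : 172.16.2.46
Configuring node : 172.16.2.45
Done Configuring node : 172.16.2.45'

printf '%s\n' "$sample" | count_configured
```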
11. Connect cables for management and client networks
12. (all dbnodes and cells) reboot
13. (first dbnode) Run ibhosts and verify that all nodes show the correct IP addresses and hostnames. There should also be no remaining "elasticNode" entries.
14. (first dbnode) run OEDA installation tool to deploy
$ ./install.sh -cf customer_name-configFile.xml -l
1. Validate Configuration File
2. Create Virtual Machine
3. Create Users
4. Setup Cell Connectivity
5. Create Cell Disks
6. Create Grid Disks
7. Configure Cell Alerting
8. Install Cluster Software
9. Initialize Cluster Software
10. Install Database Software
11. Relink Database with RDS
12. Create ASM Diskgroups
13. Create Databases
14. Apply Security Fixes
15. Install Exachk
16. Create Installation Summary
17. Resecure Machine
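Besides `-l` (list steps), OEDA's install.sh is commonly run one step or one range of steps at a time. The switches below reflect common usage but should be verified with `./install.sh -h` for your OEDA build; the commands here are echoed rather than executed.

```shell
# Illustrative invocations only (confirm switches with "./install.sh -h").
cfg=customer_name-configFile.xml
echo "./install.sh -cf $cfg -l"      # list the deployment steps, as above
echo "./install.sh -cf $cfg -s 1"    # run a single step (1 = validate)
echo "./install.sh -cf $cfg -r 1-17" # run a range of steps
```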
Procedure if Re-imaging (rather than using applyConfig) for a New Rack
Adding an X5 Node into an Existing X5 Rack
Adding an X5 Node into a pre-X5 Older Rack
The minimum software version for an X5 server is 12.1.2.1.0. The elastic configuration procedure outlined above was not designed specifically to add X5 servers to older racks. The suggested method is instead to reimage the servers using a preconf file populated with MAC addresses. ElasticConfig may nevertheless work, as long as the port mappings are the same.
Scripts and Log Information
The ./applyElasticConfig.sh script bundled with OEDA calls /opt/oracle.SupportTools/firstconf/elasticConfig.sh. Log file information is available under the OEDA directory at /root/install/linux-x64/log.

Additional log file information is available in /var/log/cellos/elasticConfig.log and /var/log/cellos/elasticConfig.trc.
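After a run, the logs listed above can be scanned for obvious failures. The sketch below uses a deliberately simple grep pattern as a starting point, not an exhaustive error filter.

```shell
# Scan the elastic configuration logs for error/failure lines; prints a
# marker line when nothing matches (or when the files are absent).
scan_logs() {
  grep -ihE 'error|fail' "$@" 2>/dev/null || echo "no errors found"
}

scan_logs /var/log/cellos/elasticConfig.log /var/log/cellos/elasticConfig.trc
```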
Attachments
This solution has no attachment