Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-79-1682581.1
Update Date: 2017-01-21

Solution Type  Predictive Self-Healing Sure

Solution  1682581.1 :   Oracle Big Data Appliance Software / Service Roles Layout on V3.0.0 / V3.0.1  


Related Items
  • Big Data Appliance X3-2 In-Rack Expansion
  • Big Data Appliance X5-2 Starter Rack
  • Big Data Appliance X4-2 Starter Rack
  • Big Data Appliance X5-2 Hardware
  • Big Data Appliance X3-2 Full Rack
  • Big Data Appliance Integrated Software
  • Big Data Appliance X5-2 In-Rack Expansion
  • Big Data Appliance X4-2 In-Rack Expansion
  • Big Data Appliance X3-2 Hardware
  • Big Data Appliance X4-2 Full Rack
  • Big Data Appliance X4-2 Hardware
  • Big Data Appliance X3-2 Starter Rack
  • Big Data Appliance X5-2 Full Rack
Related Categories
  • PLA-Support>Eng Systems>BDA>Big Data Appliance>DB: BDA_EST




In this Document
Purpose
Scope
Details
 Overview
 Single Rack Cluster
 Multirack Clusters
 In-Rack Expansions Software Layout
 Oracle NoSQL Database Installation


Applies to:

Big Data Appliance X4-2 In-Rack Expansion - Version All Versions to All Versions [Release All Releases]
Big Data Appliance X4-2 Starter Rack - Version All Versions to All Versions [Release All Releases]
Big Data Appliance X3-2 Starter Rack - Version All Versions to All Versions [Release All Releases]
Big Data Appliance X3-2 Full Rack - Version All Versions to All Versions [Release All Releases]
Big Data Appliance X3-2 In-Rack Expansion - Version All Versions to All Versions [Release All Releases]
Linux x86-64

Purpose

This information details the software layout for CDH5 services on Oracle Big Data Appliance (Sun Fire X4270 M2, Oracle Big Data Appliance X3-2, Oracle Big Data Appliance X4-2, and Oracle Big Data Appliance X5-2) Starter Rack, Full Rack, and Multirack installations when using Mammoth versions 3.0.0 /3.0.1.

Scope

 This information is intended for the BDA Administrator as well as for developers.

Details

Overview

Individual services run only on designated nodes of a Hadoop cluster; services that run on all nodes run on every rack of a multirack installation. The tables below detail where CDH services run in single-rack and multirack clusters. In addition, the tables show the service layout for six-node Starter Rack installations.

Single Rack Cluster

The table below shows the service type, service role, node name, and where each service runs on the primary rack for the servers in the Big Data cluster.

Note: Node01 is the first server in the cluster (server 1, 7, 10, or 13), and Nodenn is the last server in the cluster (server 6, 9, 12, or 18). Starter Rack clusters contain six nodes. The next smallest supported cluster is nine nodes, and all clusters of nine or more nodes use the standard 18-node layout shown in the Oracle Big Data Appliance Full Rack column. In clusters of nine or more nodes, NodeManagers do not reside on Node01 and Node02; in Starter Rack clusters, NodeManagers also run on Node01 and Node02.


Service Locations for One or More CDH Clusters in a Single Rack

Service Type | Service Role | Node Name | Oracle Big Data Appliance Starter Rack | Oracle Big Data Appliance Full Rack
Cloudera Management Services | Cloudera Manager Agents | All nodes | Node01 to Node06 | Node01 to Nodenn
Cloudera Management Services | Cloudera Manager Server | Node03 | Node03 | Node03
HDFS | Balancer | First NameNode | Node01 | Node01
HDFS | DataNode | All nodes | Node01 to Node06 | Node01 to Nodenn
HDFS | NameNode Failover Controller | First NameNode | Node01 | Node01
HDFS | NameNode Failover Controller | Second NameNode | Node02 | Node02
HDFS | First NameNode | First NameNode | Node01 | Node01
HDFS | Second NameNode | Second NameNode | Node02 | Node02
HDFS | JournalNode | First NameNode | Node01 | Node01
HDFS | JournalNode | Second NameNode | Node02 | Node02
HDFS | JournalNode | Node03 | Node03 | Node03
Hive | Hive Server | Node04 | Node04 | Node04
Hue | Beeswax Server | Node04 | Node04 | Node04
Hue | Hue Server | Node04 | Node04 | Node04
Yarn | JobHistory | Node03 | Node03 | Node03
Yarn | ResourceManager | Node03, Node04 | Node03, Node04 | Node03, Node04
Yarn | NodeManager | All nodes (Starter Rack only); Node03 to Nodenn | Node01 to Node06 | Node03 to Nodenn
Solr | | Node04 | Node04 | Node04
MySQL | MySQL Backup Server | Second NameNode | Node02 | Node02
MySQL | MySQL Master Server | Node03 | Node03 | Node03
ODI Agent (if selected to be installed and Big Data Connectors are licensed) | Oracle Data Integrator Agent | Node04 | Node04 | Node04
Oozie | Oozie Server | Node04 | Node04 | Node04
Puppet | Puppet Agents | All nodes | Node01 to Node06 | Node01 to Nodenn
Puppet | Puppet Master | First NameNode | Node01 | Node01
ZooKeeper | ZooKeeper Server | First NameNode, Second NameNode, Node03 | Node01 to Node03 | Node01 to Node03
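The NodeManager placement rule in the note above can be sketched as a small helper (the function name is ours for illustration; the layout facts come from the table):

```python
def nodemanager_nodes(cluster_size):
    """Return the node names that run a YARN NodeManager.

    Starter Rack clusters (6 nodes) run NodeManagers on every node,
    including Node01 and Node02. Clusters of nine or more nodes run
    NodeManagers only on Node03 through the last node.
    """
    if cluster_size == 6:        # Starter Rack layout
        first = 1
    elif cluster_size >= 9:      # standard (Full Rack) layout
        first = 3
    else:
        raise ValueError("unsupported cluster size: %d" % cluster_size)
    return ["Node%02d" % n for n in range(first, cluster_size + 1)]
```

For example, a six-node Starter Rack yields NodeManagers on Node01 through Node06, while a nine-node cluster yields seven NodeManagers on Node03 through Node09.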

 

Multirack Clusters

When multiple racks are configured as a single CDH cluster, some critical services are moved to the first server of the second rack.

Critical services on the first server of the second rack:

  •     Balancer
  •     Failover Controller
  •     Journal Node
  •     NameNode 1
  •     ZooKeeper Server

The DataNode, Cloudera Manager agent, and Puppet services also run on this server.

Service Locations in the First Rack of a Multirack CDH Cluster

Service Type | Service Role | Node Name | Oracle Big Data Appliance Full Rack
Cloudera Management Services | Cloudera Manager Agents | All nodes | Node01 to Nodenn
Cloudera Management Services | Cloudera Manager Server | Node03 | Node03
HDFS | DataNode | All nodes | Node01 to Nodenn
HDFS | NameNode Failover Controller | Second NameNode | Node02
HDFS | JournalNode | Second NameNode | Node02
HDFS | JournalNode | Node03 | Node03
HDFS | Second NameNode | Second NameNode | Node02
Hive | Hive Server | Node04 | Node04
Hue | Beeswax Server | Node04 | Node04
Hue | Hue Server | Node04 | Node04
MySQL | MySQL Backup Server | Second NameNode | Node02
MySQL | MySQL Master Server | Node04 | Node03
ODI Agent (if selected to be installed and Big Data Connectors are licensed) | Oracle Data Integrator Agent | Node04 | Node04
Oozie | Oozie Server | Node04 | Node04
Puppet | Puppet Agents | All nodes | Node01 to Nodenn
Puppet | Puppet Master | First NameNode | Node01
Solr | | Node04 | Node04
Yarn | NodeManager | Node01, Node03 to Nodenn | Node01, Node03 to Nodenn
Yarn | ResourceManager | Node03, Node04 | Node03, Node04
ZooKeeper | ZooKeeper Server | Second NameNode | Node02
ZooKeeper | ZooKeeper Server | Node03 | Node03

* Nodenn includes the servers in additional racks.

 

In-Rack Expansions Software Layout

An In-Rack Expansion provides the flexibility to configure the cluster in different ways. For In-Rack Expansions, the following installation options are available:

1. With a Starter Rack and one In-Rack Expansion

a. Create a new cluster on the new six nodes. In this installation the new 6-node cluster will have the standard 6-node layout as shown in the Oracle Big Data Appliance Starter Rack Column above.

b. Extend the existing 6-node cluster onto the new servers to create a 12-node cluster. In this installation the new 12-node cluster will have the same layout as an Oracle Big Data Appliance Full Rack cluster. Note that with this option there will be no NodeManagers on nodes 1 and 2.

 

2. With a Starter Rack and two In-Rack Expansions the following options are available:

a. Create three 6-node clusters. In this case the software layout / service roles will be the same as in the Oracle Big Data Appliance Starter Rack column.

b. Create two 9-node clusters. In this case the software layout / service roles will be the same as in the Oracle Big Data Appliance Full Rack column.

c. Create a 6-node cluster and a 12-node cluster. In this case the software layout / service roles will be the same as in the Oracle Big Data Appliance Starter Rack column for the 6-node cluster. The software layout / service roles will be the same as in the Oracle Big Data Appliance Full Rack column for the 12-node cluster.

d. Create a full 18-node cluster. In this case the software layout / service roles will be the same as in the Oracle Big Data Appliance Full Rack column.
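Options 2a through 2d are exactly the ways 18 nodes can be partitioned into supported cluster sizes (6, 9, 12, or 18 nodes). A minimal sketch, assuming only those sizes are valid (the helper is hypothetical, not an Oracle utility):

```python
def cluster_layouts(total=18, sizes=(6, 9, 12, 18)):
    """Enumerate the ways to split `total` nodes into supported cluster sizes.

    Parts are generated in non-decreasing order so each combination
    appears exactly once.
    """
    def split(remaining, minimum):
        if remaining == 0:
            yield []
            return
        for s in sizes:
            if minimum <= s <= remaining:
                for rest in split(remaining - s, s):
                    yield [s] + rest
    return list(split(total, min(sizes)))
```

For 18 nodes this yields [6, 6, 6], [6, 12], [9, 9], and [18], matching options a through d above.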

 

3. With a Starter Rack and a three-node extension:

It is possible to extend the cluster by only three nodes. Once the cluster becomes a 9-node cluster, the 6-node Starter Rack software layout / service roles become the same as the Oracle Big Data Appliance Full Rack layout: the NodeManagers on Node01 and Node02 are removed, leaving seven NodeManagers instead of six. This also increases stability and suitability for production, since NodeManagers no longer run on the NameNodes.

 

Oracle NoSQL Database Installation

As of Mammoth 2.2.0, installing a cluster with both Hadoop and NoSQL Database is no longer supported. Instead, Oracle Big Data Appliance hardware supports either a NoSQL Database cluster or a Hadoop cluster. A NoSQL Database cluster can be 3, 6, 9, 12, 15, or 18 servers in size, and it currently cannot be extended after installation. Note that the configuration utility does not currently handle a 3-node NoSQL cluster directly; follow up in an SR with Oracle Support to achieve that configuration.
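The NoSQL Database sizing rules above can be sketched as a small validation check (the helper name and return strings are ours for illustration, not part of any Oracle tool):

```python
# Cluster sizes supported for a NoSQL Database installation, per the note above.
VALID_NOSQL_SIZES = {3, 6, 9, 12, 15, 18}

def check_nosql_cluster(size):
    """Validate a proposed NoSQL DB cluster size."""
    if size not in VALID_NOSQL_SIZES:
        raise ValueError("NoSQL DB clusters must be 3, 6, 9, 12, 15, or 18 servers")
    if size == 3:
        # The configuration utility cannot build a 3-node cluster directly;
        # the note above says to follow up in an SR with Oracle Support.
        return "requires an SR with Oracle Support"
    return "supported by the configuration utility"
```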

When upgrading an older cluster that has NoSQL DB installed in addition to Hadoop, the NoSQL DB installation must be removed before the upgrade. To do this, use the mammoth-reconfig remove nosql option on an Oracle Big Data Appliance version 2.1 or earlier cluster to remove the NoSQL DB installed alongside Hadoop. This option becomes available after extracting the Mammoth version 2.2.0 bundle, but it must be run before executing "mammoth -p" to perform the upgrade.


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.