Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-79-1985028.1
Update Date:2015-04-01
Keywords:

Solution Type  Predictive Self-Healing

Solution  1985028.1 :   Steps to Follow for Disk Replacement When On-Disk Encryption is Used on Oracle Big Data Appliance  


Related Items
  • Big Data Appliance X3-2 Hardware
Related Categories
  • PLA-Support>Eng Systems>BDA>Big Data Appliance>DB: BDA_EST




In this Document
Purpose
Scope
Details
 Steps to Follow Prior to Disk Replacement
 Configuring the Disk for a Disk with On-disk encryption
 Known Issues after replacing an encrypted disk
References


Created from <SR 3-10326268167>

Applies to:

Big Data Appliance X3-2 Hardware - Version All Versions and later
Linux x86-64

Purpose

Provide the configuration steps to use when a disk is replaced on an Oracle Big Data Appliance with on-disk encryption enabled.

Scope

Oracle Big Data Appliance System Administrator, Support, ACS.

Details

If one of the disks in the BDA cluster fails and needs replacement, additional steps are required when on-disk encryption is in use.

Steps to Follow Prior to Disk Replacement

Prior to replacing the disk perform the steps in the following document:
Steps for Replacing a Disk Drive and Determining its Function on the Oracle Big Data Appliance V2.2.*/V2.3.1/V2.4.0/V2.5.0/V3.0.0/V3.0.1/V3.1.0/V4.0.0/V4.1.0 (Doc ID 1581331.1)

In the section "Prerequisites for Replacing a Working / Failing Disk", replace Step 5 (umount mountpoint)
with the steps below:

1. To unmount an encrypted disk, use the umount command below. In the commands
below, replace nn with the number of the disk being replaced:

# umount `mount -l -t ecryptfs | cut -d' ' -f1 | grep unn`

  

Example for /u12:

# umount `mount -l -t ecryptfs | cut -d' ' -f1 | grep u12`


2. To verify that the disk has been unmounted, issue:

# mount -l -t ecryptfs
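
The unmount step above can be parameterized by disk number. A minimal sketch (the disk_label helper is illustrative, not part of the BDA tooling):

```shell
#!/bin/sh
# Hypothetical helper (not a BDA command): build the "unn" label used by
# the grep in the umount command above.
disk_label() {
  # zero-pad the disk number to two digits: 9 -> u09, 12 -> u12
  printf 'u%02d\n' "$1"
}

# Usage sketch, assuming disk 12 is being replaced:
#   umount "$(mount -l -t ecryptfs | cut -d' ' -f1 | grep "$(disk_label 12)")"
disk_label 12
```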

  

Configuring the Disk for a Disk with On-disk encryption


1. After the Field Engineer replaces the disk, continue configuring it at Step 5 of the "Replacing a Disk Drive" section in "Steps for Replacing a Disk Drive and Determining its Function on the Oracle Big Data Appliance V2.2.*/V2.3.1/V2.4.0/V2.5.0/V3.0.0/V3.0.1/V3.1.0/V4.0.0/V4.1.0 (Doc ID 1581331.1)", in case any steps were not completed by the Field Engineer.

2. At the end of the "Replacing a Disk Drive" section, reboot the server:

# reboot

  

3. Create the /unn/hadoop directory:

# mkdir /unn/hadoop


Example:

# mkdir /u09/hadoop

  

4. Issue the "mount_hadoop_dirs" command to encrypt the disk:

# mount_hadoop_dirs

  
Enter password when prompted.

Example output:

Enter password to mount disks: Enter password for Cloudera Manager admin account: Attempting to mount with
the following options:
  ecryptfs_unlink_sigs
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=1cce2abdb277d4fe
Mounted eCryptfs
[... the "Attempting to mount ... Mounted eCryptfs" block repeats once for each of the 12 encrypted /unn mount points ...]
Restarting DATANODE hdfs-DATANODE-701699ba82598221aab621dad063b74d

  

5. Complete the configuration of the disk as required, according to its type.

6. If this is an HDFS disk, follow:
"How to Configure a Server Disk After Replacement as an HDFS Disk or Oracle NoSQL Database Disk on Oracle Big Data Appliance V2.2.*/V2.3.1/V2.4.0/V2.5.0/V3.0.0/V3.0.1/V3.1.0/V4.0.0/V4.1.0 (Doc ID 1581583.1)."

Notes:
To mount the disk in the section "Mount HDFS or Oracle NoSQL Database Partition", just use the mount command with the mount point:

# mount /unn

Example:

# mount /u12

Verify with:

# mount -l

The output should look similar to the following:

# mount -l
/dev/md2 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /boot type ext4 (rw)
/dev/sda4 on /u01 type ext4 (rw,nodev,noatime) [/u01]
/dev/sdb4 on /u02 type ext4 (rw,nodev,noatime) [/u02]
/dev/sdc1 on /u03 type ext4 (rw,nodev,noatime) [/u03]
/dev/sdd1 on /u04 type ext4 (rw,nodev,noatime) [/u04]
/dev/sde1 on /u05 type ext4 (rw,nodev,noatime) [/u05]
/dev/sdf1 on /u06 type ext4 (rw,nodev,noatime) [/u06]
/dev/sdg1 on /u07 type ext4 (rw,nodev,noatime) [/u07]
/dev/sdh1 on /u08 type ext4 (rw,nodev,noatime) [/u08]
/dev/sdi1 on /u09 type ext4 (rw,nodev,noatime) [/u09]
/dev/sdj1 on /u10 type ext4 (rw,nodev,noatime) [/u10]
/dev/sdk1 on /u11 type ext4 (rw,nodev,noatime) [/u11]
/dev/sdl1 on /u12 type ext4 (rw,nodev,noatime) [/u12]
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
<node>:/opt/exportdir on /opt/shareddir type nfs
(rw,tcp,soft,intr,timeo=10,retrans=10,sloppy,vers=4,addr=1**.***.*.**,clientaddr=**.***.*.**)
cm_processes on /var/run/cloudera-scm-agent/process type tmpfs (rw,mode=0751)
cm_cgroups on /var/run/cloudera-scm-agent/cgroups/blkio type cgroup (rw,blkio)
cm_cgroups on /var/run/cloudera-scm-agent/cgroups/cpuacct type cgroup (rw,cpuacct)
cm_cgroups on /var/run/cloudera-scm-agent/cgroups/cpu type cgroup (rw,cpu)
cm_cgroups on /var/run/cloudera-scm-agent/cgroups/memory type cgroup (rw,memory)
/u01/hadoop on /u01/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u02/hadoop on /u02/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u03/hadoop on /u03/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u04/hadoop on /u04/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u05/hadoop on /u05/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u06/hadoop on /u06/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u07/hadoop on /u07/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u08/hadoop on /u08/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u09/hadoop on /u09/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u10/hadoop on /u10/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u11/hadoop on /u11/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)
/u12/hadoop on /u12/hadoop type ecryptfs
(rw,ecryptfs_sig=1cce2abdb277d4fe,ecryptfs_unlink_sigs,ecryptfs_cipher=aes,ecryptfs_key_bytes=16)

  

7. Per the steps in the document, add the disk directory in Cloudera Manager (CM):
Click the + sign and add the appropriate disk directory. Example: /u12/hadoop/dfs
Click Save Changes -> Restart DataNode

Note: Do not click on "Reset to inherited value" in CM.

8. After configuring, reboot the server and run mount_hadoop_dirs:

a. Reboot server

# reboot

b. Run command:

# mount_hadoop_dirs

 

Known Issues after replacing an encrypted disk

The DataNode may show bad health.

1. Check the following:

# cat /unn/hadoop/dfs/current/VERSION

  

Example output:

# cat /u12/hadoop/dfs/current/VERSION
The following may be seen:
cat: /u12/hadoop/dfs/current/VERSION: Input/output error


However, other disks will show output like the following:

# cat /u11/hadoop/dfs/current/VERSION
#Fri Feb 27 18:32:26 EST 2015
storageID=DS-b08c8998-09a2-468b-87aa-2e3dba2dee2e
clusterID=cluster1
cTime=0
datanodeUuid=299a9ea4-ad8c-457d-949d-8c9e31d23080
storageType=DATA_NODE
layoutVersion=-56
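
The per-disk check above can be scripted across all 12 data disks. A minimal sketch (the check_versions helper and the BASE prefix are illustrative assumptions, not part of the appliance tooling):

```shell
#!/bin/sh
# Hypothetical helper: flag any VERSION file that exists but cannot be
# read (the Input/output error symptom above). BASE is an illustrative
# prefix so the loop can be pointed at a test directory; on the
# appliance it would be empty.
BASE="${BASE:-}"
check_versions() {
  for n in $(seq -w 1 12); do
    f="$BASE/u$n/hadoop/dfs/current/VERSION"
    if [ -e "$f" ] && ! cat "$f" >/dev/null 2>&1; then
      echo "unreadable: $f"
    fi
  done
}
check_versions
```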


A stale in_use.lock file may also be seen, as in the following example:

# ls -la /u12/hadoop/dfs
total 24
drwx------ 3 hdfs hadoop 4096 Feb 27 16:49 .
drwxr-xr-x 4 root root   4096 Feb 27 16:28 ..
drwxr-xr-x 3 hdfs hadoop 4096 Feb 27 16:15 current
-rw-r--r-- 1 hdfs hadoop   37 Feb 27 16:49 in_use.lock

  

Restarting the datanode does not help.

To resolve:

1. Stop the datanode.
CM > hdfs > datanode (which is down) > Actions (upper right) > Stop this Datanode

2. Run the following command as root:

# mv /unn/hadoop/dfs /unn/hadoop/dfs-$(date +"%m_%d_%Y")

  

Example:

# mv /u12/hadoop/dfs /u12/hadoop/dfs-$(date +"%m_%d_%Y")

  

3. Restart the datanode: CM > hdfs > datanode (which is down) > Actions (upper right) > Start this Datanode

The datanode should now show good health. 
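
The rename in step 2 can be sketched as a small helper (backup_name and the DISK variable are illustrative assumptions, not BDA commands):

```shell
#!/bin/sh
# Sketch of the resolution step above: build the dated backup name for
# the stale dfs directory. DISK is illustrative; substitute the
# replaced disk's /unn mount point.
DISK="${DISK:-/u12}"
backup_name() {
  echo "$1/hadoop/dfs-$(date +"%m_%d_%Y")"
}
backup_name "$DISK"
# Actual move (run as root, with the DataNode stopped):
#   mv "$DISK/hadoop/dfs" "$(backup_name "$DISK")"
```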

References

<NOTE:1581583.1> - How to Configure a Server Disk After Replacement as an HDFS Disk or Oracle NoSQL Database Disk on Oracle Big Data Appliance V2.2.*/V2.3.1/V2.4.0/V2.5.0/V3.0.0/V3.0.1/V3.1.0/V4.0.0/V4.1.0
<NOTE:1581331.1> - Steps for Replacing a Disk Drive and Determining its Function on the Oracle Big Data Appliance V2.2.*/V2.3.1/V2.4.0/V2.5.0/V3.0.0/V3.0.1/V3.1.0/V4.0.0/V4.1.0

Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.