Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-72-2299136.1
Update Date: 2017-08-23
Keywords:

Solution Type: Problem Resolution Sure Solution

Solution  2299136.1 :   Running "mammoth -c" Command Prior to BDA 4.8 Upgrade Fails on Oozie Workflow and Hive Metastore Tests - "No such file or directory" and "Permissions Denied" Errors  


Related Items
  • Big Data Appliance X6-2 Hardware
Related Categories
  • PLA-Support>Eng Systems>BDA>Big Data Appliance>DB: BDA_EST




In this Document
Symptoms
Cause
Solution
References


Created from <SR 3-15522848771>

Applies to:

Big Data Appliance X6-2 Hardware - Version All Versions and later
Information in this document applies to any platform.

Symptoms

While running the pre-upgrade tasks described in Upgrading Oracle Big Data Appliance (BDA) CDH Cluster to V4.8.0 from V4.4, V4.5, V4.6, V4.7 Releases using Mammoth Software Upgrade Bundle for OL5 or OL6 (Document 2250769.1), the "mammoth -c" command reports errors such as the following:

# mammoth -c

Oozie Workflow Test
----------------------------------------------------------------------------------------
INFO: oozie job ID is: Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep
INFO:
INFO: help' for more information. Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep
INFO:
INFO: help' for more information. Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep
INFO: Test finished in 19 seconds. Details in ooziewf_test.out
ERROR: Oozie Workflow Test failed
cat: `/tmp/logs/oracle/logs/application_1502202642881_0619/*': No such file or directory

Spark Test
----------------------------------------------------------------------------------------
INFO: final status: SUCCEEDED
ERROR: SparkPi results not found
INFO: Test finished in 25 seconds. Details in spark_test.out
SUCCESS: Spark Test succeeded

Hive Metastore Test
----------------------------------------------------------------------------------------
INFO: Create Hive Metastore Table Failed
INFO: Test finished in 31 seconds. Details in metastore_test.out
ERROR: Hive Metastore Test failed

Running the failed tests standalone shows that the failures are due to "Permission denied" errors.

For example, running the Oozie health check standalone, as described in How to Run the Oozie Cluster Verification Tests Standalone on the BDA (Document 2018885.1), reports "Permission denied":

# cat ooziewf_test.out
INFO: Running oozie test workflow which includes a map reduce step, a sqoop step, a hive step, a streaming step and a pig step
mkdir: cannot create directory '/opt/oracle/ooziewf_test': Permission denied
local temp dir created
/opt/oracle/BDAMammoth/bdaconfig/ooziewf_test.sh: line 58: cd: /opt/oracle/ooziewf_test: No such file or directory
cp: cannot create regular file './ooziewf_test.tar.gz': Permission denied

Similarly, running the Hive Metastore health check standalone, as described in How to Run the Hive Metastore Cluster Verification Test Standalone on the BDA (Document 2089880.1), also reports "Permission denied" errors:

# cat /tmp/create_metastore_table.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
17/08/11 08:23:25 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.21/jars/hive-common-1.1.0-cdh5.9.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> create table metastore_test_tbl_1502457791( a string) ;
OK
Time taken: 1.821 seconds
hive> Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1012)
at jline.console.history.FileHistory.flush(FileHistory.java:82)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:788)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Cause

The root cause is that execute (search) permissions were missing on the /opt/oracle directory. Without the execute bit a directory cannot be traversed, so the Oozie and Hive health-check scripts failed when run by the "mammoth -c" command.
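The effect can be reproduced outside the appliance. Below is an illustrative sketch (not part of the original note) in which a throwaway temporary directory stands in for /opt/oracle: once the execute (search) bit is cleared, a non-root user can no longer traverse the directory or create anything beneath it, which is exactly the failure mode the health-check scripts hit.

```shell
#!/bin/sh
# Illustrative only -- a temp directory stands in for /opt/oracle.
parent=$(mktemp -d)

chmod 0644 "$parent"       # rw-r--r-- : read bit set, execute (search) bit cleared
ls -ld "$parent"           # mode now shows drw-r--r--
# For a non-root user, traversal is now blocked, so a command such as
#   mkdir "$parent/ooziewf_test"
# fails with "Permission denied" -- the same error seen in ooziewf_test.out.

chmod 0755 "$parent"       # restore the default rwxr-xr-x
mkdir "$parent/ooziewf_test" && echo "created"

rm -r "$parent"
```

Note that root bypasses these permission checks, which is why the breakage surfaces in the health-check steps that run as service users (oozie, hive) rather than in commands run directly as root.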

Solution

The solution is as below. Perform all steps as the 'root' user on Node 1 of the cluster unless specified otherwise.

1.  Verify the current permissions of /opt/oracle on all cluster nodes with the following command:

# dcli -C ls -ltrd /opt/oracle

 

NOTE: By default, the permissions should be "0755" and should return results as shown below:

# dcli -C ls -ltrd /opt/oracle

<private IP node 1>: drwxr-xr-x 15 root root 4096 Aug 3 11:14 /opt/oracle            -- <private IP> will show the actual internal IP address for each node in the cluster
...
<private IP node n>: drwxr-xr-x 13 root root 4096 Aug 3 11:14 /opt/oracle

2. If the permissions on any node are not the default "0755", correct them on the individual nodes as below. NOTE: You can also apply the change to all nodes at once with the 'dcli' command, e.g. "dcli -C chmod 0755 /opt/oracle".

# chmod 0755 /opt/oracle

3. Recheck to confirm that the permissions of /opt/oracle on all cluster nodes are now 0755 with:

# dcli -C ls -ltrd /opt/oracle 
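Steps 1-3 can be folded into a small check. The sketch below is not part of the note: it assumes GNU stat (as on the appliance's Oracle Linux), and the function name check_mode and the demo directory are illustrative. On the BDA the function would be called against /opt/oracle, or the script distributed to all nodes with dcli.

```shell
#!/bin/sh
# Sketch: report whether a directory has the expected default mode 0755.
# Assumes GNU stat (as on Oracle Linux); check_mode is a hypothetical helper.
check_mode() {
    mode=$(stat -c '%a' "$1") || return 1
    if [ "$mode" = "755" ]; then
        echo "$1: OK (mode $mode)"
    else
        echo "$1: mode is $mode, expected 755 -- fix with: chmod 0755 $1" >&2
        return 2
    fi
}

# Demo on a throwaway directory (on the BDA the argument would be /opt/oracle):
demo=$(mktemp -d)
chmod 0755 "$demo"
check_mode "$demo"
rm -r "$demo"
```

The non-zero return code on a mismatch makes the check easy to script around, mirroring the per-node output of step 1.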

4. Once the permissions are restored to the default:

a) Run the tests standalone to confirm they are successful.

b) Once the individual tests are confirmed successful, rerun "./mammoth -c":

# cd /opt/oracle/BDAMammoth

# ./mammoth -c

The "mammoth -c" checks should now complete successfully.

References

<NOTE:2089880.1> - How to Run the Hive Metastore Cluster Verification Test Standalone on the BDA
<NOTE:2250769.1> - Upgrading Oracle Big Data Appliance(BDA) CDH Cluster to V4.8.0 from V4.4, V4.5, V4.6, V4.7 Releases using Mammoth Software Upgrade Bundle for OL5 or OL6
<NOTE:2018885.1> - How to Run the Oozie Cluster Verification Tests Standalone on the BDA

Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.