michael_ahn (Learner I)

Spark is not working with Cloudera HA cluster

I configured a Radoop connection to an HA Cloudera cluster, and access to Hive and Impala works. But when I try to run a Spark operator, it fails as soon as the Spark Application Master starts:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
17/04/24 10:52:57 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
Unknown/unsupported param List(--executor-memory, 58880m, --executor-cores, 40, --properties-file, /mnt/disk1/yarn/nm/usercache/datameer/appcache/application_1492760101213_0043/container_e12_1492760101213_0043_01_000001/__spark_conf__/__spark_conf__.properties)

Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options]
  --jar JAR_PATH       Path to your application's JAR file
  --class CLASS_NAME   Name of your application's main class
  --primary-py-file    A main Python file
  --primary-r-file     A main R file
  --py-files PY_FILES  Comma-separated list of .zip, .egg, or .py files to
                       place on the PYTHONPATH for Python apps.
  --args ARGS          Arguments to be passed to your application's main class.
                       Multiple invocations are possible, each will be passed in order.
  --properties-file FILE Path to a custom Spark properties file.

I created the connection using the client configuration exported from Cloudera Manager.

I have also attached the XML configuration.

RM Staff

Re: Spark is not working with Cloudera HA cluster


The `Unknown/unsupported param` error usually indicates a Spark version mismatch between the Radoop connection and the cluster: a Spark 2.x ApplicationMaster no longer accepts command-line arguments like `--executor-memory` and `--executor-cores` that a Spark 1.x client passes (in Spark 2 these are read from the `__spark_conf__.properties` file instead), which is why they are missing from the usage message in your log. Please try switching the Spark Version setting in the Radoop connection to Spark 2.0.
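If you want to confirm which Spark version the cluster actually provides before changing the connection setting, you can check on a gateway node. This is a sketch only; the binary names assume a standard Cloudera parcel layout, where Spark 1.x and the separate Spark 2 (CDS) parcel install different launcher scripts:

```shell
# Spark 1.x parcels ship `spark-submit`; the Spark 2 (CDS) parcel ships
# `spark2-submit`. Whichever command exists and what version it reports
# tells you which Spark the Radoop connection should be set to.
spark-submit --version
spark2-submit --version
```

The Spark Version chosen in the Radoop connection must match the version these report, since the arguments Radoop sends to the ApplicationMaster differ between Spark 1.x and 2.x.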



michael_ahn (Learner I)

Re: Spark is not working with Cloudera HA cluster

I installed the Cloudera Spark 2.1 parcel and this works perfectly.


