"Spark is not working with Cloudera HA cluster"

michael_ahn Member Posts: 2 Contributor I
edited June 2019 in Help

I configured a Radoop connection to an HA Cloudera cluster, and access to Hive and Impala works. However, when I try to run a Spark operator, it fails as soon as the Spark Application Master starts:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
17/04/24 10:52:57 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
Unknown/unsupported param List(--executor-memory, 58880m, --executor-cores, 40, --properties-file, /mnt/disk1/yarn/nm/usercache/datameer/appcache/application_1492760101213_0043/container_e12_1492760101213_0043_01_000001/__spark_conf__/__spark_conf__.properties)

Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options]
Options:
--jar JAR_PATH Path to your application's JAR file
--class CLASS_NAME Name of your application's main class
--primary-py-file A main Python file
--primary-r-file A main R file
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to
place on the PYTHONPATH for Python apps.
--args ARGS Arguments to be passed to your application's main class.
Multiple invocations are possible, each will be passed in order.
--properties-file FILE Path to a custom Spark properties file.
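
If I read the usage output right, the flags being rejected (--executor-memory, --executor-cores) were removed from the YARN ApplicationMaster's argument list in Spark 2.0, so I suspect a mismatch between the Spark version selected in the connection and the Spark jars the job actually runs with. For reference, this is how I checked which Spark is installed on the cluster (the parcel directory is the usual Cloudera location; adjust to your layout):

# Check which Spark the cluster-side installation reports.
# The parcel directory below is the usual Cloudera location; adjust as needed.
spark-submit --version                     # version of the default Spark client
ls /opt/cloudera/parcels | grep -i spark   # is a separate SPARK2 parcel installed?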

I created the connection with the client configuration exported from Cloudera Manager:

config.PNG (screenshot of the connection configuration)

I have also attached the XML configuration.
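
Since this is an HA cluster, the failover part of the client configuration comes down to the standard HDFS nameservice properties. A quick grep over the exported hdfs-site.xml shows whether they made it into the zip (stock HDFS HA property names; the nameservice id itself is site-specific):

# Confirm the HA nameservice properties are present in the exported client
# configuration; -A1 also prints the <value> line that follows each <name>
# line in the Cloudera Manager export.
grep -A1 'dfs.nameservices' hdfs-site.xml
grep -A1 'dfs.ha.namenodes' hdfs-site.xml
grep -A1 'dfs.client.failover.proxy.provider' hdfs-site.xml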

Best Answer

  • ptoth Employee, Member Posts: 1 Contributor I
    Solution Accepted

    Hi,

    Please try switching the Spark Version in the Radoop connection to Spark 2.0.
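
    That error usually means the arguments the client passes (Spark 1.x style) do not match the Spark ApplicationMaster that actually runs them, so the Spark Version in the connection has to agree with the Spark installed on the cluster. Once a Spark 2 parcel is in place, you can smoke-test it on YARN outside of Radoop with something like the following (this assumes the Cloudera Spark 2 parcel, which installs a separate spark2-submit binary; the examples jar path is the usual parcel location and may differ on your install):

    # Run the bundled SparkPi example on YARN with the Spark 2 client.
    # spark2-submit ships with the Cloudera Spark 2 (CDS) parcel; the jar
    # path below is the usual parcel location and may differ on your install.
    spark2-submit --master yarn --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      /opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-examples_*.jar 10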

    Best,

    Peter

Answers

  • michael_ahn Member Posts: 2 Contributor I

    I installed the Cloudera Spark 2.1 parcel and this works perfectly.

    Thanks!
