Posts

fixing spark classpath issues on CDH5 accessing Accumulo 1.7.2

We experienced a strange NoSuchMethodError while migrating an Accumulo-based application from 1.6.0 to 1.7.2 running on CDH5. A couple of code changes were necessary when moving from 1.6.0 to 1.7.2, but these were pretty straightforward (member visibility changed, some getters were introduced). Everything compiled fine, but when we executed the Spark application on the cluster we got an exception that pointed directly to a line we had changed during the migration:

Exception in thread "main" java.lang.NoSuchMethodError: 
        com.syscrest.HelloworldDriver$Opts.getPrincipal()Ljava/lang/String;
        at com.syscrest.HelloworldDriver.main(HelloworldDriver.java:29)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

We double-checked that our fat jar bundled the right Accumulo version, and then we inspected the full classpath of the CDH5 Spark service.
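A quick way to perform that first check is to list the assembly contents; the jar name `helloworld-fat.jar` is a placeholder for your own artifact, and the `pom.properties` path assumes a Maven-built dependency:

```shell
# list the Accumulo classes bundled into the fat jar
unzip -l helloworld-fat.jar | grep -i accumulo

# the shaded-in pom.properties reveals the actual dependency version
unzip -p helloworld-fat.jar \
  META-INF/maven/org.apache.accumulo/accumulo-core/pom.properties
```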

the distributed spark classpath on CDH5

We noticed that CDH5 (CDH 5.8.2 in our case) already bundles four Accumulo 1.6.0 jars:

/opt/cloudera/parcels/CDH/jars/accumulo-core-1.6.0.jar
/opt/cloudera/parcels/CDH/jars/accumulo-fate-1.6.0.jar
/opt/cloudera/parcels/CDH/jars/accumulo-start-1.6.0.jar
/opt/cloudera/parcels/CDH/jars/accumulo-trace-1.6.0.jar

All jars in /opt/cloudera/parcels/CDH/jars are automatically added to the distributed Spark classpath, as they are listed in /etc/spark/conf/classpath.txt.
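You can confirm this on any node, assuming the default CDH parcel layout:

```shell
# the stale 1.6.0 jars show up in the distributed Spark classpath
grep -i accumulo /etc/spark/conf/classpath.txt
```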

spark.{driver,executor}.userClassPathFirst

Before tampering with the distributed Spark classpath, we tried to get it working using

spark-submit \
  --conf "spark.driver.userClassPathFirst=true" \
  --conf "spark.executor.userClassPathFirst=true" \
  ....

but this resulted in another library mismatch on the classpath and did not solve our initial problem:

org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
at org.xerial.snappy.SnappyNative.maxCompressedLength(Native Method)
at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:316)
at org.xerial.snappy.SnappyOutputStream.&lt;init&gt;(SnappyOutputStream.java:79)
at org.apache.spark.io.SnappyCompressionCodec.compressedOutputStream(CompressionCodec.scala:157)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$4.apply(TorrentBroadcast.scala:199)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$4.apply(TorrentBroadcast.scala:199)
at scala.Option.map(Option.scala:145)
at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:199)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:101)
at org.apache.spark.broadcast.TorrentBroadcast.&lt;init&gt;(TorrentBroadcast.scala:84)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
at org.apache.spark.rdd.NewHadoopRDD.&lt;init&gt;(NewHadoopRDD.scala:77)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:878)
at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopRDD(JavaSparkContext.scala:516)

So we tried to fix the distributed Spark classpath directly. Unfortunately, /etc/spark/conf/classpath.txt is not modifiable via Cloudera Manager, and we did not want to change it manually on all nodes.

But as /etc/spark/conf/classpath.txt is read in /etc/spark/conf/spark-env.sh

# Set distribution classpath. This is only used in CDH 5.3 and later.
export SPARK_DIST_CLASSPATH=$(paste -sd: "$SELF/classpath.txt")

we were able to use the "Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh" – Spark (Service-Wide) setting to tamper with SPARK_DIST_CLASSPATH.

In our case it was sufficient to add the correct accumulo jars (version 1.7.2 from the installed parcel) at the beginning of the list so they take precedence over the outdated ones:

SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-core.jar:$SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-fate.jar:$SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-start.jar:$SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-trace.jar:$SPARK_DIST_CLASSPATH
export SPARK_DIST_CLASSPATH
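After redeploying the client configuration and restarting the Spark service, a quick sanity check on a gateway node shows whether the parcel jars now lead the list (this assumes spark-env.sh can be sourced standalone, which was the case for us on CDH 5.8):

```shell
# spark-env.sh exports SPARK_DIST_CLASSPATH; the ACCUMULO parcel jars
# should now appear before the bundled accumulo-*-1.6.0.jar entries
source /etc/spark/conf/spark-env.sh
echo "$SPARK_DIST_CLASSPATH" | tr ':' '\n' | grep -i accumulo
```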

how to collect cloudera manager usage data with google analytics

Cloudera Manager is already capable of tracking usage data via Google Analytics, but that data is being sent to a Cloudera account. This blog post is about configuring Cloudera Manager and changing the tracking id so that these usage metrics are sent to your own account.

Setup Google Analytics

Log into your Google Analytics account and create a new tracking id (looks like UA-XXXXXXX-X).

Enable the tracking of usage data

Log into your Cloudera Manager instance and browse to “Administration” -> “Settings”

[screenshot: add_google_analytics_cloudera_manager_01]

and make sure that “Allow Usage Data Collection” is enabled:

[screenshot: add_google_analytics_cloudera_manager_02]

replace the tracking id

Now you need to edit a file on your Cloudera Manager server:

cd /usr/share/cmf/webapp/static/ext/google-analytics/
nano scmx.js

Replace the existing Google Analytics tracking id (UA-XXXXXXX-X) with your own id and save the file.
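Since the change has to be reapplied after every upgrade, it helps to script the edit. The following is a sketch; UA-YYYYYYY-Y stands in for your own tracking id:

```shell
cd /usr/share/cmf/webapp/static/ext/google-analytics/
cp scmx.js scmx.js.orig                      # keep a backup for diffing
sed -i 's/UA-[0-9]*-[0-9]*/UA-YYYYYYY-Y/g' scmx.js
grep -o 'UA-[0-9A-Z-]*' scmx.js              # confirm the replacement
```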

[screenshot: add_google_analytics_cloudera_manager_03]

Your change will be active immediately (no need to restart anything).

Be aware that this change will be overwritten when you update your Cloudera Manager instance, so you need to reapply it after every upgrade.

Patching Oozie in a parcel-based CDH 5.8.0 Installation

This blog post will guide you through the process of cloning, patching, building and deploying a custom version of the Oozie workflow engine based on the CDH 5.8.0 source code that is available on GitHub.
Read more

how to access a remote ha-enabled hdfs in a (oozie) distcp action

how to inject the configuration of a remote ha-hdfs into a distcp call without modifying the local cluster configuration.
Read more

Upgrading an existing Accumulo 1.6 CDH5 cluster to HDFS HA

These steps are necessary for Accumulo 1.6 after upgrading an existing CDH5 cluster to HDFS HA.
Read more

Moving a Cloudera Manager 5 Installation to a new host – don’t forget the repo folder

Having trouble provisioning new nodes after you have moved your Cloudera Manager installation to a new host?
Read more

oozie-graphite available for CDH5

Our open source project oozie-graphite is now available for CDH5.
Read more

Accumulo 1.6 (CDH5) not available for Ubuntu 14.04 (Trusty Tahr)

If you want to deploy a CDH5 Cluster including Accumulo 1.6 on Ubuntu, you will need to stick to Ubuntu 12.04 Precise Pangolin for the time being.
Read more

Increasing mapreduce.job.counters.max on CDH5 YARN (MR2)

How to increase mapreduce.job.counters.max on YARN (MR2) for HUE / HIVE / OOZIE.
Read more

Reinitializing an Accumulo cluster on CDH4 or CDH5

How to manually reinitialize an existing Accumulo cluster on CDH4 or CDH5
Read more