GDELT on SCDF: Bootstrapping Spring Cloud Data Flow 1.7.0 on Kubernetes using kubectl

In the first part of our planned blog post series (processing GDELT data with SCDF on Kubernetes) we go through the steps to deploy the latest Spring Cloud Data Flow (SCDF) release 1.7.0 on Kubernetes, including the latest versions of the starter apps that will be used in the examples.

We stick to the manual steps described in the official Spring Cloud Data Flow documentation and deploy all components into a dedicated namespace scdf-170 on our Kubernetes cluster to run the examples.

This installation will not be production-ready; it is about experimenting and ensuring compatibility, as we experienced some incompatibilities when mixing our own source/sink implementations based on Finchley.SR2 with the prepackaged starter apps based on Spring Boot 1.5 / Spring Cloud Stream 1.3.x.

Preparations

Clone the git repository to retrieve the necessary Kubernetes configuration files and check out the 1.7.0.RELEASE version:


git clone https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes
cd spring-cloud-dataflow-server-kubernetes
git checkout v1.7.0.RELEASE

Installation with kubectl

We want to use a dedicated namespace scdf-170 for our deployment, so we create it first:


echo '{ "kind": "Namespace", "apiVersion": "v1", "metadata": { "name": "scdf-170", "labels": { "name": "scdf-170" } } }' | kubectl create -f -

Afterwards we can deploy the dependencies (Kafka, MySQL, Redis) and the Spring Cloud Data Flow server itself:


kubectl create -n scdf-170 -f src/kubernetes/kafka/
kubectl create -n scdf-170 -f src/kubernetes/mysql/
kubectl create -n scdf-170 -f src/kubernetes/redis/
kubectl create -n scdf-170 -f src/kubernetes/metrics/metrics-svc.yaml
kubectl create -n scdf-170 -f src/kubernetes/server/server-roles.yaml
kubectl create -n scdf-170 -f src/kubernetes/server/server-rolebinding.yaml
kubectl create -n scdf-170 -f src/kubernetes/server/service-account.yaml
kubectl create -n scdf-170 -f src/kubernetes/server/server-config-kafka.yaml
kubectl create -n scdf-170 -f src/kubernetes/server/server-svc.yaml
kubectl create -n scdf-170 -f src/kubernetes/server/server-deployment.yaml
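
It can take a few minutes until all images are pulled and the pods are up; you can watch the progress with:

kubectl -n scdf-170 get pods -w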

Verify and enable access


kubectl -n scdf-170 get all

The output should look like this:

NAME                                READY     STATUS    RESTARTS   AGE
pod/kafka-broker-696786c8f7-fjp4p   1/1       Running   1          5m
pod/kafka-zk-5f9bff7d5-tmxzg        1/1       Running   0          5m
pod/mysql-f878678df-2t4d6           1/1       Running   0          5m
pod/redis-748db48b4f-8h75x          1/1       Running   0          5m
pod/scdf-server-757ccb576c-9fssd    1/1       Running   0          5m
 
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/kafka         ClusterIP      10.98.11.191     <none>        9092/TCP                     5m
service/kafka-zk      ClusterIP      10.111.45.25     <none>        2181/TCP,2888/TCP,3888/TCP   5m
service/metrics       ClusterIP      10.104.11.216    <none>        80/TCP                       5m
service/mysql         ClusterIP      10.99.141.105    <none>        3306/TCP                     5m
service/redis         ClusterIP      10.99.0.146      <none>        6379/TCP                     5m
service/scdf-server   LoadBalancer   10.101.180.212   <pending>     80:30884/TCP                 5m
 
NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kafka-broker   1         1         1            1           5m
deployment.apps/kafka-zk       1         1         1            1           5m
deployment.apps/mysql          1         1         1            1           5m
deployment.apps/redis          1         1         1            1           5m
deployment.apps/scdf-server    1         1         1            1           5m
 
NAME                                      DESIRED   CURRENT   READY     AGE
replicaset.apps/kafka-broker-696786c8f7   1         1         1         5m
replicaset.apps/kafka-zk-5f9bff7d5        1         1         1         5m
replicaset.apps/mysql-f878678df           1         1         1         5m
replicaset.apps/redis-748db48b4f          1         1         1         5m
replicaset.apps/scdf-server-757ccb576c    1         1         1         5m

As the SCDF server will automatically generate a random password for the user “user” on first startup, we need to grep the log output (using your individual scdf-server pod name, see the output of the previous command):


kubectl logs -n scdf-170 scdf-server-757ccb576c-9fssd | grep "Using default security password"

The output should look like this:

Using default security password: 8d6b58fa-bf1a-4a0a-a42b-c7b03ab15c60 with roles 'VIEW,CREATE,MANAGE'
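
If you prefer not to copy the pod name by hand, you can resolve it via its label (assuming the app=scdf-server label from the upstream deployment manifest) and grep the log in one go:

SCDF_POD=$(kubectl -n scdf-170 get pods -l app=scdf-server -o jsonpath='{.items[0].metadata.name}')
kubectl -n scdf-170 logs $SCDF_POD | grep "Using default security password"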

To access the UI (and the REST API used by the CLI) you can create a port-forward to the scdf-server pod (using your individual scdf-server pod name, see the output of “kubectl -n scdf-170 get all”):


kubectl -n scdf-170 port-forward scdf-server-757ccb576c-9fssd 2345:80

where 2345 is the local port on your machine through which you can now access the UI and the REST API.
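
As a quick check that the REST API is reachable through the port-forward, you can query the server's about endpoint, substituting the generated password from the log output:

curl -u "user:<generated-password>" http://localhost:2345/about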

Install the Data Flow CLI

You can download the jar file directly from Spring’s Maven repository:


wget https://repo.spring.io/release/org/springframework/cloud/spring-cloud-dataflow-shell/1.7.0.RELEASE/spring-cloud-dataflow-shell-1.7.0.RELEASE.jar

and start the CLI, supplying the generated password from the log output above:


java -jar spring-cloud-dataflow-shell-1.7.0.RELEASE.jar \
--dataflow.username=user \
--dataflow.password= \
--dataflow.uri=http://localhost:2345

The output should look like this:

   ...
 / ___| _ __  _ __(_)_ __   __ _   / ___| | ___  _   _  __| |
 \___ \| '_ \| '__| | '_ \ / _` | | |   | |/ _ \| | | |/ _` |
  ___) | |_) | |  | | | | | (_| | | |___| | (_) | |_| | (_| |
 |____/| .__/|_|  |_|_| |_|\__, |  \____|_|\___/ \__,_|\__,_|
  ____ |_|    _          __|___/                 __________
 |  _ \  __ _| |_ __ _  |  ___| | _____      __  \ \ \ \ \ \
 | | | |/ _` | __/ _` | | |_  | |/ _ \ \ /\ / /   \ \ \ \ \ \
 | |_| | (_| | || (_| | |  _| | | (_) \ V  V /    / / / / / /
 |____/ \__,_|\__\__,_| |_|   |_|\___/ \_/\_/    /_/_/_/_/_/
 
1.7.0.RELEASE
 
Welcome to the Spring Cloud Data Flow shell. For assistance hit TAB or type "help".
dataflow:>

Install starter apps

After the initial installation there are no applications (task/stream) registered. As you can see in the stream app starters documentation, you need to pick an explicit combination of packaging format (jar vs. Docker) and messaging technology (Kafka vs. RabbitMQ).

We will go for Docker images (because we are running on Kubernetes) and Kafka 0.10 as the messaging layer, based on the newer Spring Boot and Spring Cloud Stream versions (2.0.x + 2.0.x).

We can register all the available starter apps with a single cli command:

dataflow:>app import --uri http://bit.ly/Darwin-SR2-stream-applications-kafka-docker

Output:

Successfully registered .........
.................................
...... processor.pmml.metadata, sink.router.metadata, sink.mongodb]

and the Spring Cloud Task Starter Apps (based on Spring Boot 2.0.x + Spring Cloud Task 2.0.x):


dataflow:> app import --uri http://bit.ly/Dearborn-SR1-task-applications-docker

Successfully registered ..........
..................................
........ task.timestamp, task.timestamp.metadata]
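
If you later want to register a single custom application on top of the starters (for example your own source packaged as a Docker image), the shell's app register command takes the coordinates directly — the image name below is just a placeholder:

dataflow:>app register --name gdelt-source --type source --uri docker:your-registry/gdelt-source:0.0.1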

You can verify the installation by listing all available applications:


dataflow:>app list

The output should look like this:

╔═══╤══════════════╤═══════════════════════════╤══════════════════════════╤════════════════════╗
║app│    source    │         processor         │           sink           │        task        ║
╠═══╪══════════════╪═══════════════════════════╪══════════════════════════╪════════════════════╣
║   │file          │bridge                     │aggregate-counter         │composed-task-runner║
║   │ftp           │filter                     │counter                   │timestamp           ║
║   │gemfire       │groovy-filter              │field-value-counter       │timestamp-batch     ║
║   │gemfire-cq    │groovy-transform           │file                      │                    ║
║   │http          │grpc                       │ftp                       │                    ║
║   │jdbc          │header-enricher            │gemfire                   │                    ║
║   │jms           │httpclient                 │hdfs                      │                    ║
║   │load-generator│image-recognition          │jdbc                      │                    ║
║   │loggregator   │object-detection           │log                       │                    ║
║   │mail          │pmml                       │mongodb                   │                    ║
║   │mongodb       │python-http                │mqtt                      │                    ║
║   │mqtt          │python-jython              │pgcopy                    │                    ║
║   │rabbit        │scriptable-transform       │rabbit                    │                    ║
║   │s3            │splitter                   │redis-pubsub              │                    ║
║   │sftp          │tasklaunchrequest-transform│router                    │                    ║
║   │syslog        │tcp-client                 │s3                        │                    ║
║   │tcp           │tensorflow                 │sftp                      │                    ║
║   │tcp-client    │transform                  │task-launcher-cloudfoundry│                    ║
║   │time          │twitter-sentiment          │task-launcher-local       │                    ║
║   │trigger       │                           │task-launcher-yarn        │                    ║
║   │triggertask   │                           │tcp                       │                    ║
║   │twitterstream │                           │throughput                │                    ║
║   │              │                           │websocket                 │                    ║
╚═══╧══════════════╧═══════════════════════════╧══════════════════════════╧════════════════════╝
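
As a quick smoke test you can wire the time source to the log sink using the classic ticktock example; on Kubernetes every app of the stream is deployed as its own pod in the scdf-170 namespace:

dataflow:>stream create --name ticktock --definition "time | log" --deploy

Afterwards “kubectl -n scdf-170 get pods” should show a ticktock-time and a ticktock-log pod; “stream destroy --name ticktock” removes them again.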

The next blog post (implementing a Spring Cloud Stream application and using it on SCDF) will follow soon.

Blog post series: Processing gdeltproject.org feeds with Spring Cloud Data Flow on Kubernetes

We are starting a blog post series to dig deeper into the capabilities of Spring Cloud Data Flow (SCDF) running on Kubernetes. This blog post will be updated when new posts have been published.

List of blog posts for quick access:

About GDELT

Let’s have a look at the data we want to process:

Supported by Google Jigsaw, the GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, themes, sources, emotions, counts, quotes, images and events driving our global society every second of every day, creating a free open platform for computing on the entire world.

Besides raw data feeds there are powerful APIs to query and even visualize the GDELT datasets. This blog post is inspired by the “RSS feed for web archiving coverage about climate change” taken from a blog post on gdeltproject.org:

This searches for all articles published in the last hour mentioning
“climate change” or “global warming” and returns the first 200 articles,
ordered by date with the newest articles first and returned as an RSS feed
that includes the primary URL of each article as one item and, as a separate
item, the URL of the mobile/AMP edition of the page, if available.
This demonstrates how to use the API as a data source for web archiving.

We want to use this type of query to pull the latest articles for a configurable query from the GDELT project and do some more complex processing on Spring Cloud Data Flow (SCDF): we will download the articles, analyze their content, store the results and also visualize them.
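
To give an impression of what such a query looks like, here is a sketch of a GDELT DOC 2.0 API call along the lines of the example above (parameter names as used by the DOC 2.0 API; treat the exact query string as an illustration rather than a verified call):

curl "https://api.gdeltproject.org/api/v2/doc/doc?query=%22climate%20change%22%20OR%20%22global%20warming%22&mode=ArtList&format=rss&timespan=1h&maxrecords=200&sort=DateDesc"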

About Spring Cloud Data Flow (SCDF)

Taken from https://cloud.spring.io/spring-cloud-dataflow:

Spring Cloud Data Flow is a toolkit for building data integration
and real-time data processing pipelines.

Pipelines consist of Spring Boot apps, built using the Spring Cloud Stream
or Spring Cloud Task microservice frameworks. This makes Spring Cloud Data Flow
suitable for a range of data processing use cases, from import/export to
event streaming and predictive analytics.

Part 1 : Setting up the runtime environment

As there are multiple platform implementations available, Spring Cloud Data Flow can run your streams/tasks on a local server, on Cloud Foundry, on Kubernetes, on YARN and even on Mesos.

As these different implementations require different packaging (jar vs. Docker), we went for Kubernetes as the platform and Kafka as the messaging solution in our examples.

You can find instructions to install Spring Cloud Data Flow in our first blog post, GDELT on SCDF: Bootstrapping SCDF on Kubernetes using kubectl.

Part 2 : How to pull the GDELT data into SCDF

The second blog post on how to pull data from gdelt.org into SCDF will follow soon.

how to use dynamic allocation in an oozie spark action on CDH5

using spark’s dynamic allocation feature in an oozie spark action can be tricky.

enable dynamic allocation

First you need to make sure that dynamic allocation is actually available on your cluster. Navigate to your “Spark” service, then “Configuration” and search for “dynamic”.

(Screenshot: spark_oozie_action_dynamic_allocation_enabled)

Both settings (shuffle service + dynamic allocation) need to be enabled.

how to configure the oozie spark action

If you just omit --num-executors in your spark-opts definition, your job will fall back to the configuration defaults and will utilize only two executors:

(Screenshot: spark_oozie_action_no_num_executors)

If you go for --num-executors 0 your oozie workflow will fail with this strange error message:

Number of executors was 0, but must be at least 1 (or 0 if dynamic executor allocation is enabled). 

Usage: org.apache.spark.deploy.yarn.Client 

[options] Options: 
--jar JAR_PATH Path to your application's JAR file (required in yarn-cluster mode) 
--class CLASS_NAME Name of your application's main class (required) 
--primary-py-file A main Python file 
--primary-r-file A main R file 
--arg ARG Argument to be passed to

So even if you enabled the shuffle service on your cluster and set spark.dynamicAllocation.enabled to true, the spark action seems unaware of these settings.

adding missing configuration properties

To get this working you need to provide spark.dynamicAllocation.enabled=true and spark.shuffle.service.enabled=true together with --num-executors 0 in the spark-opts section:

<master>yarn-cluster</master>
<mode>cluster</mode>
<name>spark-job-name</name>
<class>com.syscrest.demo.CustomSparkOozieAction</class>
<jar>${sparkJarPath}</jar>
<spark-opts>
... your job specific options ...
--num-executors 0 
--conf spark.dynamicAllocation.enabled=true 
--conf spark.shuffle.service.enabled=true 
</spark-opts> 
<arg>-t</arg>
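
Depending on your workload you may also want to bound the allocation explicitly; Spark's standard dynamic allocation properties can be passed the same way (the values below are just placeholders):

--conf spark.dynamicAllocation.minExecutors=1
--conf spark.dynamicAllocation.initialExecutors=2
--conf spark.dynamicAllocation.maxExecutors=20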

spark oozie action jobs not showing up on spark history server

If you execute spark jobs within an oozie workflow using a <spark> action node on a Cloudera CDH5 cluster, your jobs may not show up on your spark history server. Even if you configured everything via Cloudera Manager, your history server may only list jobs started on the command line using spark-submit.

adding missing configuration properties

When using mode = cluster and master = yarn-cluster or yarn-client, you need to provide spark.yarn.historyServer.address and spark.eventLog.dir with the actual address of your spark history server and the fully qualified HDFS path of the applicationHistory directory, and set spark.eventLog.enabled to true.

<master>yarn-cluster</master>
<mode>cluster</mode>
<name>spark-job-name</name>
<class>com.syscrest.demo.CustomSparkOozieAction</class>
<jar>${sparkJarPath}</jar>
<spark-opts>
... your job specific options ...
--conf spark.yarn.historyServer.address=http://historyservernode:18088
--conf spark.eventLog.dir=hdfs://nameservicehost:8020/user/spark/applicationHistory 
--conf spark.eventLog.enabled=true
</spark-opts> 
<arg>-t</arg>

With these additional configuration properties these oozie-controlled spark jobs should also show up on your spark history server.

fixing spark classpath issues on CDH5 accessing Accumulo 1.7.2

We experienced a strange NoSuchMethodError while migrating an Accumulo-based application from 1.6.0 to 1.7.2 running on CDH5. A couple of code changes were necessary when moving from 1.6.0 to 1.7.2, but these were pretty straightforward (member visibility changed, some getters were introduced). Everything compiled fine, but when we executed the spark application on the cluster we got an exception pointing directly to a line we had changed during the migration:

Exception in thread "main" java.lang.NoSuchMethodError: 
        com.syscrest.HelloworldDriver$Opts.getPrincipal()Ljava/lang/String;
        at com.syscrest.HelloworldDriver.main(HelloworldDriver.java:29)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

We double-checked that our fat jar bundles the right version and then inspected the full classpath of the CDH5 spark service.

the distributed spark classpath on CDH5

We noticed that CDH5 (CDH 5.8.2 in our case) already bundles four accumulo jars:

/opt/cloudera/parcels/CDH/jars/accumulo-core-1.6.0.jar
/opt/cloudera/parcels/CDH/jars/accumulo-fate-1.6.0.jar
/opt/cloudera/parcels/CDH/jars/accumulo-start-1.6.0.jar
/opt/cloudera/parcels/CDH/jars/accumulo-trace-1.6.0.jar

And all jars in /opt/cloudera/parcels/CDH/jars are automatically added to the spark distributed classpath, as they are listed in /etc/spark/conf/classpath.txt.
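
You can quickly confirm which versions actually end up on the distributed classpath on any node, for example:

grep -i accumulo /etc/spark/conf/classpath.txt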

spark.{driver,executor}.userClassPathFirst

Before tampering with the spark distributed classpath we tried to get it working using

spark-submit \
  --conf "spark.driver.userClassPathFirst=true" \
  --conf "spark.executor.userClassPathFirst=true" \
  ....

but this resulted in another mismatch of libraries on the classpath and did not solve our initial problem:

org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
at org.xerial.snappy.SnappyNative.maxCompressedLength(Native Method)
at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:316)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)
at org.apache.spark.io.SnappyCompressionCodec.compressedOutputStream(CompressionCodec.scala:157)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$4.apply(TorrentBroadcast.scala:199)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$4.apply(TorrentBroadcast.scala:199)
at scala.Option.map(Option.scala:145)
at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:199)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:101)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:84)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
at org.apache.spark.rdd.NewHadoopRDD.<init>(NewHadoopRDD.scala:77)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:878)
at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopRDD(JavaSparkContext.scala:516)

So we tried to fix the distributed spark classpath directly. Unfortunately /etc/spark/conf/classpath.txt is not modifiable via cloudera manager, and we did not want to change it manually on all nodes.

But as /etc/spark/conf/classpath.txt is being read in /etc/spark/conf/spark-env.sh

# Set distribution classpath. This is only used in CDH 5.3 and later.
export SPARK_DIST_CLASSPATH=$(paste -sd: "$SELF/classpath.txt")

we were able to use Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh – Spark (Service-Wide) to tamper with SPARK_DIST_CLASSPATH.

In our case it was sufficient to add the correct accumulo jars (version 1.7.2 from the installed parcel) at the beginning of the list so they take precedence over the outdated ones:

SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-core.jar:$SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-fate.jar:$SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-start.jar:$SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/ACCUMULO/lib/accumulo/lib/accumulo-trace.jar:$SPARK_DIST_CLASSPATH
export SPARK_DIST_CLASSPATH
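
After saving the safety valve and redeploying the Spark client configuration, a quick check on a gateway node confirms that the snippet actually made it into spark-env.sh:

grep ACCUMULO /etc/spark/conf/spark-env.sh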

how to collect cloudera manager usage data with google analytics

Cloudera Manager is already capable of tracking usage data via Google Analytics, but that data is being sent to a Cloudera account. This blog post is about configuring Cloudera Manager and changing the tracking id so that these usage metrics are sent to your own account.

Setup Google Analytics

Log into your Google Analytics account and create a new tracking id (looks like UA-XXXXXXX-X).

Enable the tracking of usage data

Log into your Cloudera Manager instance and browse to “Administration” -> “Settings”

(Screenshot: add_google_analytics_cloudera_manager_01)

and make sure that “Allow Usage Data Collection” is enabled:

(Screenshot: add_google_analytics_cloudera_manager_02)

replace the tracking id

Now you need to edit a file on your Cloudera Manager server:

cd /usr/share/cmf/webapp/static/ext/google-analytics/
nano scmx.js

And replace the existing Google Analytics tracking id (UA-XXXXXXX-X) with your own id and save the file.

(Screenshot: add_google_analytics_cloudera_manager_03)
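
If you prefer doing this non-interactively, a grep/sed one-liner along these lines works as well (UA-YOUR-ID being a placeholder for your own tracking id):

grep -o "UA-[0-9]*-[0-9]*" scmx.js
sed -i "s/UA-[0-9]*-[0-9]*/UA-YOUR-ID/" scmx.js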

Your change will be immediately active (no need to restart anything).

Be aware that this change will be overwritten when you update your Cloudera Manager instance, so you need to reapply it after every upgrade.

Patching Oozie in a parcel-based CDH 5.8.0 Installation

This blog post will guide you through the process of cloning, patching, building and deploying a custom version of the Oozie workflow engine based on the CDH 5.8.0 source code that is available on GitHub.
Read more

how to access a remote ha-enabled hdfs in a (oozie) distcp action

how to inject the configuration of a remote ha-hdfs in a distcp call without modifying the local cluster configuration.
Read more

Best Practices using PigServer (embedded pig)

things you should be aware of when executing pig scripts within your own java application using PigServer.
Read more

Passing many parameters from Java action to Oozie workflow

oozie’s ‘capture-output’ is a powerful method to pass dynamic configuration properties from action to action, but you may hit the maximum size limit quite fast. Read more