Who we are

We are located in Hanover (Germany), and we embrace big data technologies. We help our international customers understand new technologies, select the right building blocks, and tailor them to their individual business cases.

What we do

We design systems for unstructured and structured data that scale. We build resilient connections between systems. We implement big data technologies and coach teams in using them.

Technologies

We scale your data processing with Hadoop, Spark, or streaming technologies like Apache Flink or Apache Storm. We create analytics tools based on Hue, Apache Pig, Presto, Hive, Cassandra, or HBase. We implement resilient, interconnected data processing with Oozie, Airflow, or Schedoscope. Read on…

Latest posts from our developer blog

GDELT on SCDF: Bootstrapping Spring Cloud Data Flow 1.7.0 on Kubernetes using kubectl

In the first part of our planned blog post series (processing GDELT data with SCDF on Kubernetes), we go through the steps to deploy the latest Spring Cloud Data Flow (SCDF) release 1.7.0 on Kubernetes, including the latest version of the starter apps…
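The bootstrap described in the teaser can be sketched roughly as follows. This is a minimal sketch, not the exact steps from the post: the repository paths and manifest directories are assumptions based on the layout of the spring-cloud-dataflow source tree, and the commands require a running Kubernetes cluster with kubectl configured against it.

```shell
# Fetch the SCDF source tree at the 1.7.0 release tag, which ships
# Kubernetes deployment manifests (directory layout is an assumption).
git clone https://github.com/spring-cloud/spring-cloud-dataflow.git
cd spring-cloud-dataflow
git checkout v1.7.0.RELEASE

# Deploy the backing services and the SCDF server with kubectl.
kubectl create -f src/kubernetes/mysql/
kubectl create -f src/kubernetes/rabbitmq/
kubectl create -f src/kubernetes/server/

# Watch the pods come up.
kubectl get pods -w
```

The blog post itself walks through these steps in detail, including the starter-app registration that follows once the server is reachable.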

Blog post series: Processing gdeltproject.org feeds with Spring Cloud Data Flow on Kubernetes

We are starting a blog post series to dig deeper into the capabilities of Spring Cloud Data Flow (SCDF) running on Kubernetes. This blog post will be updated as new posts are published. List of blog posts for quick access: This…

How to use dynamic allocation in an Oozie Spark action on CDH5

Using Spark's dynamic allocation feature in an Oozie Spark action can be tricky. First, you need to make sure that dynamic allocation is actually available on your cluster. Navigate to your "Spark" service, then "Configuration", and search…
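The kind of workflow action the post discusses can be sketched as below. This is a minimal, hypothetical example, not the configuration from the post: the workflow name, jar path, and class name are placeholders, and the exact executor bounds depend on your cluster.

```xml
<!-- Minimal sketch of an Oozie <spark> action enabling dynamic
     allocation via spark-opts; names and paths are placeholders. -->
<workflow-app name="spark-dynamic-allocation-demo" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-node"/>
    <action name="spark-node">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn-cluster</master>
            <name>DynamicAllocationExample</name>
            <class>com.example.SparkJob</class>
            <jar>${nameNode}/apps/spark/example-job.jar</jar>
            <!-- Dynamic allocation requires the external shuffle
                 service to be enabled on the cluster's NodeManagers. -->
            <spark-opts>
                --conf spark.dynamicAllocation.enabled=true
                --conf spark.shuffle.service.enabled=true
                --conf spark.dynamicAllocation.minExecutors=1
                --conf spark.dynamicAllocation.maxExecutors=10
            </spark-opts>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Spark action failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

The full post explains how to verify via the Spark service configuration that the shuffle service prerequisite is actually met on CDH5.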

Spark Oozie action jobs not showing up on the Spark history server

If you execute Spark jobs within an Oozie workflow using a <spark> action node on a Cloudera CDH5 cluster, your jobs may not show up on your Spark history server. Even if you configured all these things using Cloudera Manager, your…
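The usual suspects in this situation are Spark's event-log settings, which jobs submitted through an Oozie action do not necessarily inherit from the cluster defaults. A hedged sketch of what the relevant spark-opts might look like; the HDFS directory and history server host are placeholders, though port 18088 is the CDH default:

```xml
<!-- Event-log settings passed explicitly to an Oozie <spark> action
     so the job registers with the history server; host and directory
     below are hypothetical placeholders for your cluster's values. -->
<spark-opts>
    --conf spark.eventLog.enabled=true
    --conf spark.eventLog.dir=hdfs:///user/spark/applicationHistory
    --conf spark.yarn.historyServer.address=http://historyserver.example.com:18088
</spark-opts>
```

The post goes into why the Cloudera Manager configuration alone is not always sufficient for jobs launched this way.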