Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark.
Waimak aims to abstract the more complex parts of Spark application development (such as orchestration) away from the business logic, allowing users to get their business logic in a production-ready state much faster. By using a framework written by Data Engineers, the teams defining the business logic can write and own their production code.
Our metaphor to describe this framework is the braided river – it splits and rejoins to itself repeatedly on its journey. By describing a Spark application as a sequence of flow transformations, Waimak can execute independent branches of the flow in parallel making more efficient use of compute resources and greatly reducing the execution time of complex flows.
We developed Waimak to:

- abstract orchestration away from business logic, so teams can get that logic into a production-ready state faster;
- let the teams who define the business logic write and own their production code;
- execute independent branches of a flow in parallel, making efficient use of compute resources.
Importantly, Waimak is a framework for building Spark applications by describing a sequence of composed Spark transformations. To create those transformations Waimak exposes the complete Spark API, giving you the power of Apache Spark with added structure.
You can import Waimak into your Maven project using the following dependency details:
```xml
<dependency>
    <groupId>com.coxautodata</groupId>
    <artifactId>waimak-core_2.12</artifactId>
    <version>${waimak.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
```
Waimak marks the Spark dependency as optional so as not to depend on any specific release of Spark; you must therefore specify the version of Spark you wish to use as a dependency in your own project. Waimak should run on any version of Spark 2.2+; however, the list of officially tested versions is given below.
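If you build with sbt rather than Maven, the same coordinates can be declared roughly as follows. This is a sketch only: the version values are placeholders that you should replace with real releases.

```scala
// build.sbt sketch (versions are placeholders, not specific releases)
val waimakVersion = "x.y.z"    // latest Waimak release from Maven Central
val sparkVersion  = "3.3.0"    // any supported Spark version; see the tested versions below

libraryDependencies ++= Seq(
  "com.coxautodata"  %% "waimak-core" % waimakVersion,
  "org.apache.spark" %% "spark-core"  % sparkVersion % Provided
)
```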
The following code snippet demonstrates a basic Waimak example taken from the unit tests:
```scala
// Required imports
import com.coxautodata.waimak.dataflow.Waimak

// Initialise basic Waimak objects
val emptyFlow = Waimak.sparkFlow(spark)

// Add actions to the flow
val basicFlow = emptyFlow
  .openCSV(basePath)("csv_1", "csv_2")
  .alias("csv_1", "items")
  .alias("csv_2", "person")
  .writeParquet(baseDest)("items", "person")

// Run the flow
basicFlow.execute()
```
This example is very small, but in practice flow definitions can become very large depending on the number of inputs and outputs in a job.
The project wiki page provides best practices for structuring your project when dealing with large flows.
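Building on the example above, the sketch below shows how business logic written with the ordinary Spark Dataset API can be slotted into a flow. It assumes a single-input `transform` action on Spark flows and the hypothetical columns `cost` and `age`; check the wiki for the exact action signatures available.

```scala
import com.coxautodata.waimak.dataflow.Waimak

// Sketch only: assumes a `transform` action and example columns.
val emptyFlow = Waimak.sparkFlow(spark)

val reportFlow = emptyFlow
  .openCSV(basePath)("csv_1", "csv_2")
  .alias("csv_1", "items")
  .alias("csv_2", "person")
  // Business logic expressed with the plain Spark Dataset API.
  .transform("items")("cheap_items")(items => items.filter("cost < 10"))
  .transform("person")("adults")(person => person.filter("age >= 18"))
  .writeParquet(baseDest)("cheap_items", "adults")

// Run the flow
reportFlow.execute()
```

Because "cheap_items" and "adults" do not depend on each other, they form independent branches of the braided river, and Waimak is free to schedule them in parallel.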
Waimak currently consists of the following modules:
Artifact ID | Purpose | Maven Release |
---|---|---|
waimak-core | Core Waimak functionality and generic actions | Maven Central |
waimak-configuration-databricks | Databricks-specific configuration provider using secret scopes (Scala 2.12 only) | Maven Central |
waimak-impala | Impala implementation of the HadoopDBConnector used for committing labels to an Impala DB | Maven Central |
waimak-hive | Hive implementation of the HadoopDBConnector used for committing labels to a Hive Metastore | Maven Central |
waimak-rdbm-ingestion | Functionality to ingest inputs from a range of RDBM sources | Maven Central |
waimak-storage | Functionality for providing a hot/cold region-based ingestion storage layer | Maven Central |
waimak-app | Functionality providing Waimak application templates and orchestration | Maven Central |
waimak-experimental | Experimental features currently under development | Maven Central |
waimak-dataquality | Functionality for monitoring and alerting on data quality | Maven Central |
waimak-deequ | Amazon Deequ implementation of data quality monitoring (Scala 2.12 only) | Maven Central |
Waimak is tested against the following versions of Spark:
Package Maintainer | Spark Version | Scala Version |
---|---|---|
Apache Spark | 3.1.3 | 2.12 |
Apache Spark | 3.2.0 | 2.12 |
Apache Spark | 3.2.1 | 2.12 |
Apache Spark | 3.3.0 | 2.12 |
Apache Spark | 3.3.0 | 2.13 |
You can find the latest documentation for Waimak on the project wiki page. This README file contains basic setup instructions and general project information.
You can also find details of what's in the latest releases in the changelog.
Finally, you can also talk to the developers and other users directly at our Gitter room.
We welcome all users to contribute to the development of Waimak by opening pull requests. We kindly ask that you include suitable unit tests along with proposed changes.
Waimak is tested against several versions of Spark to ensure uniform compatibility. The versions of Spark tested by Waimak are given in the `<profiles>` section of the POM. You can activate a given profile by using the `-P` flag: `mvn clean package -P apache-3.2.1_2.12`
The integration tests of the RDBM ingestion module require Docker; therefore, the Docker service must be running and the current user must be able to access it.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Copyright 2018 Cox Automotive UK Limited