
FDP Sample Applications

DISCLAIMER: These sample applications are provided as-is, without warranty. They are intended to illustrate techniques for implementing various scenarios using Fast Data Platform, but they have not gone through a robust validation process, nor do they use all the techniques commonly employed for secure, highly-resilient, production applications. Please use them with appropriate caution.

WARNING: The develop branch works only with Fast Data Platform V2.X on OpenShift. To use these sample applications with Fast Data Platform V1.X on DC/OS, use the branch corresponding to your release (e.g., release/1.3.2), or develop-DCOS for the latest updates. For a particular OpenShift release, look for the corresponding branch, e.g., release/2.0.0-OpenShift.

This repository contains the sample applications for Lightbend Fast Data Platform, version 1.3.0 and later. For information about these applications, see the Fast Data Platform documentation, specifically the Sample Applications section. To run these applications as-is, you must install them in a Fast Data Platform cluster; the documentation provides more details.

Note: At the time of this release, some of the applications were not yet completely ported to OpenShift and Kubernetes from DC/OS. They are the bigdl, nwintrusion, and kstreams apps. Each app's README describes what is already working. A future release of Fast Data Platform for OpenShift and Kubernetes will remove this limitation.

The applications are organized into folders, each containing details of how to use and install the corresponding application in your environment. Here's the list of applications:

General Structure of this Repository

This repository is organized as follows:

General Structure for each Application

The directory structure for each application includes some or all of the following contents:

In a few cases, when an app contains several services (e.g., akka-kafka-streams-model-server), each service has its own directory instead of being nested under source/core. These projects also have build.sbt and project/ SBT files in the root directory, whereas the other apps keep them under source/core. In either case, the corresponding build.sh knows how to handle the layout.
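As a rough sketch, a typical application layout looks like the following (the names are inferred from the descriptions above and vary per app, so treat this as illustrative only):

apps/<app>/
  build.sh        # per-app build script; supports --help
  helm/           # Helm charts used to install the app (see below)
  source/core/    # sbt build (build.sbt, project/) and the application sources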

Installing the Prerequisites

Make sure the following prerequisites are installed first:

These supported components must be installed and running on the cluster in advance:

The Fast Data Platform installation documentation provides detailed instructions.

In addition, several certified components are needed by some of the applications:

(Additional optional certified components are described in each component's documentation.)

Use the Helm charts in supportingcharts to install these certified components; see that directory's README for details. (For DC/OS installations, see the appropriate branch of this repository, as discussed above.)
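As a sketch of what such an installation looks like (this assumes Helm 2 syntax, where helm install takes a --name flag; the actual chart paths, names, and any required values are documented in the supportingcharts README):

helm install --name <component> supportingcharts/<component>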

Installing the Applications

Each application contains detailed instructions on how to build and run the application locally or on a Fast Data Platform cluster on OpenShift or Kubernetes using Helm. See the individual READMEs for each application for details.

Helm is used to install the sample apps.

Each sample application directory has a helm subdirectory with the corresponding Helm charts. Edit the files as desired and run the following command (from this "root" directory):

helm install apps/<app>/helm
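For example, to install the network intrusion app under an explicit release name (assuming Helm 2 syntax, where --name sets the release name; with Helm 3 the release name is the first positional argument instead):

helm install --name nwintrusion apps/nwintrusion/helm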

Some of the applications (e.g., the deep learning-based anomaly detection application, anomaly-detection) have more complex installation steps. Consult each application's README.md for specific details.

Building the Applications Yourself

As discussed above, there is a top-level build.sh that will build all the applications:

./build.sh

This script builds the archives and Docker images, and updates the Helm charts with the current version set in version.sh. Pass the --help option to see the command-line options. For example, the version can be overridden with --version 1.2.3.

This script does global setup, like generating YAML files from templates with the correct version string, but you can also build each application individually, as all of them have their own build.sh and each one also has a --help option.

These per-app scripts, which drive the appropriate sbt tasks, are the easiest way to build individual apps. Note that each one accepts a --push-docker-images argument, which will push any Docker images to Docker Hub! Hence, if you use this option, you'll need to change the .../build.sbt file in each app to point to your Docker Hub account or another compatible registry. The app READMEs provide more specific details.
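For example, to build just the network intrusion app and push its Docker images (a sketch; it assumes the app lives under apps/nwintrusion, as in the Helm example above, and that you have already pointed its build.sbt at your own registry):

cd apps/nwintrusion
./build.sh --push-docker-images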

Note: bash is required for these scripts. If you're on Windows without access to bash, you can instead run the sbt and docker commands that these scripts invoke directly to build the artifacts.