USCDataScience / sparkler

Spark-Crawler: Apache Nutch-like crawler that runs on Apache Spark.
http://irds.usc.edu/sparkler/
Apache License 2.0
410 stars · 143 forks

Elasticsearch for Sparkler - Containerization Logic #212

Closed felixloesing closed 3 years ago

felixloesing commented 3 years ago

We are setting up an Elasticsearch backend for Sparkler. This will serve as another pipeline for data persistence parallel to the existing Apache Solr connector.

We want to make sure that we are adding the new services (Elasticsearch and Kibana) in a way that is easy to maintain and makes it possible to use either Solr or Elasticsearch without much overhead. During our team discussion, we were thinking about running three different containers when using Elasticsearch, namely a container running Elasticsearch, one running Kibana, and another container running Sparkler.

Currently, Solr is pulled into the Sparkler Docker image during the build, which raises the following questions for the Sparkler committers:

1. Does the three-container setup make sense for the project or do you prefer a different structure?
2. Should we reuse the existing Sparkler container and always pull in Solr (also in Elasticsearch environments)?
3. Would the Sparkler + Elasticsearch network be started with a new script made by us (which starts the three containers and configures the network) or should an existing script be updated to support an optional command line argument to select Elasticsearch?

We understand that this is an ongoing discussion, and we really value your feedback on this.

Thanks in advance!

thammegowda commented 3 years ago

Hey @felixloesing !

Thanks for bringing up this discussion and for contributing to Sparkler. Elasticsearch, Kibana, and Docker all sound good.

> Does the three-container setup make sense for the project or do you prefer a different structure?

I think the three-container setup is fine in theory, but it may make setups a bit more complicated. My suggestion: let's make all of these work in a single container first; that's step one. Then, after proper testing, move the services to separate containers; that's step two. If you are confident enough, you could skip step one and jump straight to step two; otherwise, let's go step by step. We definitely need a one-container setup for testing and debugging.

> Should we reuse the existing Sparkler container and always pull in Solr (also in Elasticsearch environments)?

No. Let's build a separate image for the Elasticsearch environment that does not include Solr. Let's make a copy of the current Dockerfile that has Solr and modify it with the Elasticsearch setup.
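A very rough sketch of what that Solr-free Dockerfile variant might look like; the base image, paths, and environment variable are placeholders, not taken from the current repo:

```dockerfile
# Hypothetical Elasticsearch variant of the Sparkler Dockerfile.
# Base image and paths are illustrative assumptions.
FROM openjdk:8-jdk

# Copy the Sparkler build into the image (path is a placeholder);
# note: no Solr download/extract step, unlike the current Dockerfile.
COPY build/ /data/sparkler/

# Point Sparkler at an external Elasticsearch container by hostname;
# the variable name here is illustrative.
ENV ELASTICSEARCH_HOSTS=http://elasticsearch:9200

WORKDIR /data/sparkler
```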

> Would the Sparkler + Elasticsearch network be started with a new script made by us (which starts the three containers and configures the network) or should an existing script be updated to support an optional command line argument to select Elasticsearch?

Please make a new script for the Elasticsearch setup.

We can obviously remove/retire old scripts and images as we make progress.
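For illustration, such a script could create a user-defined Docker network and start the three containers on it. Container names, image tags, ports, and the Sparkler image name below are all assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical network and image names -- adjust to the images actually built.
NETWORK=sparkler-elastic
docker network create "$NETWORK" 2>/dev/null || true

# Single-node Elasticsearch for development.
docker run -d --name elasticsearch --net "$NETWORK" \
  -p 9200:9200 -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.10.1

# Kibana, pointed at the Elasticsearch container by its network alias.
docker run -d --name kibana --net "$NETWORK" \
  -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" \
  docker.elastic.co/kibana/kibana:7.10.1

# Sparkler image without Solr (name is a placeholder).
docker run -it --name sparkler --net "$NETWORK" \
  sparkler-elasticsearch:latest
```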

felixloesing commented 3 years ago

Thank you for your detailed feedback, @thammegowda!

We will incorporate your suggestions here into the development process and I will use this issue to keep you posted about potential problems or changes to our approach, as necessary.

nhandyal commented 3 years ago

@thammegowda

> We definitely need a one-container setup for testing and debugging.

Can you expand on this? docker-compose makes it easy to bring up a service with multiple containers and to communicate between them. I don't think testing/debugging will be more difficult with a multi-container approach; you'd just need to ssh into each container as necessary. It'd be helpful to understand the use case for more context, though.

Our main concern with a single image is its size and the frequency of rebuilds (a change in any one service requires rebuilding the entire image).
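To make the proposal concrete, the three-service setup being discussed could be expressed as a docker-compose file along these lines; service names, image tags, ports, and the Sparkler build context are assumptions, not existing project files:

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.1
    environment:
      # Reaches Elasticsearch via compose's built-in DNS for service names.
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  sparkler:
    # Hypothetical: built from a Solr-free Dockerfile variant.
    build: .
    depends_on:
      - elasticsearch
```

With this, `docker-compose up -d` brings up the whole stack, and changing one service only requires rebuilding that service's image.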

thammegowda commented 3 years ago

@nhandyal I don't have much experience with docker-compose. If you think it's easy, then good; we shall use multiple containers and docker-compose for testing as well. Thanks.