big-data-europe / docker-hadoop-spark-workbench

[EXPERIMENTAL] This repo includes deployment instructions for running HDFS/Spark inside docker containers. Also includes spark-notebook and HDFS FileBrowser.

command: ["./wait-for-it.sh"] for swarm mode #39

Closed antonkulaga closed 6 years ago

antonkulaga commented 6 years ago

As I understood from the issue tracker, the Docker team does not want to add `depends_on` support to swarm mode. In that case it would make sense to add https://github.com/vishnubob/wait-for-it to ensure the datanode always starts after the namenode.
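A minimal sketch of how wait-for-it could be wired into a compose service, as the issue title suggests. The image names, the `8020` RPC port, and the `/run.sh` startup script are assumptions for illustration, not taken from this repo:

```yaml
version: "3"
services:
  namenode:
    image: bde2020/hadoop-namenode   # assumed image name
  datanode:
    image: bde2020/hadoop-datanode   # assumed image name
    # wait-for-it.sh blocks until namenode:8020 accepts TCP connections,
    # then hands off to the container's normal startup script
    command: ["./wait-for-it.sh", "namenode:8020", "--", "/run.sh"]
```

Unlike `depends_on`, this waits for the namenode to actually accept connections, not merely for its container to be created, which is why it also works under swarm.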

earthquakesan commented 6 years ago

@antonkulaga that's indeed the case; I have injected the wait-for-it script into hadoop on this branch. I will put an action item on myself to update docker-hadoop-spark-workbench; I have a very clear idea of how to run it with swarm without issues. Check out this distributed hbase setup for an example.

antonkulaga commented 6 years ago

I do not see a wait script there, but I do see SERVICE_PRECONDITION. How does it work? It does not look like a standard docker-compose instruction. Have you also checked that it works in swarm mode? (I see that some features are not supported in swarm mode, but no warnings are shown.)
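For context, `SERVICE_PRECONDITION` in that setup is declared as an ordinary environment variable on the dependent service, roughly like this (service names and ports here are assumptions):

```yaml
datanode:
  image: bde2020/hadoop-datanode   # assumed image name
  environment:
    # space-separated host:port pairs the entrypoint waits on before starting
    SERVICE_PRECONDITION: "namenode:50070"
```

Because it is plain `environment` configuration rather than a compose-level keyword like `depends_on`, swarm passes it through to the container unchanged.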

earthquakesan commented 6 years ago

@antonkulaga it's not a standard Docker feature, it's an ENV variable. The entrypoint bash script polls the services listed in that variable until they are reachable. See the entrypoint here.
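A hedged sketch of what such an entrypoint loop might look like. Only the variable name `SERVICE_PRECONDITION` comes from the thread; the `wait_for_service` helper, the use of `nc`, and the retry interval are assumptions:

```shell
#!/bin/bash
# Poll every host:port pair listed in SERVICE_PRECONDITION (space-separated),
# then exec the container's main command. Sketch only; not the repo's script.

wait_for_service() {
  local host="${1%%:*}"   # text before the first colon
  local port="${1##*:}"   # text after the last colon
  # retry until the TCP port accepts connections
  until nc -z "$host" "$port" >/dev/null 2>&1; do
    echo "Waiting for $host:$port ..."
    sleep 2
  done
  echo "$host:$port is available"
}

for svc in $SERVICE_PRECONDITION; do
  wait_for_service "$svc"
done

exec "$@"   # hand off to the real entrypoint command, if any
```

Since the loop only needs TCP reachability, it works the same whether the service was scheduled by plain compose or by a swarm manager.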