sopaoglu / Standalone-Spark-Cluster

Standalone Spark Cluster built using Docker Swarm

Worker: invalid mount config for type #1

Open Aalnafessah opened 7 years ago

Aalnafessah commented 7 years ago

Hi, thanks for the instructions. When I type "docker service ps SERVICEID" I get these errors: "invalid mount config for type ..." and "No such image: singularities/…"

When I do the same with the master, I get this error: "invalid mount config for type ..."

Can you help?

sopaoglu commented 7 years ago

Can you share your Docker version with me? To do that, run the following command:

docker --version

Also, can you send the result of the following command:

docker service ls

Aalnafessah commented 7 years ago

This is my Docker version: Docker version 17.06.0-ce, build 02c1d87

docker service ls:

[screenshot: output of docker service ls]

And this is the result of "docker stack ps spark_cluster":

[screenshot: output of docker stack ps spark_cluster]

sopaoglu commented 7 years ago

My Docker version is 17.09.0, so upgrading your Docker may solve your problem. However, you can try the following steps before upgrading.

First of all, run the following command on each worker node:

docker swarm leave

Secondly, run on the manager node:

docker service rm spark_cluster_worker
docker service rm spark_cluster_master

Then, run on the manager node:

sudo docker stack deploy --compose-file=docker-compose.yml spark

After running the command above, check the status of the installation. Once it is complete, add the worker nodes to the cluster again (you can find the necessary commands in the installation guideline).

Then,

sudo docker service scale spark_worker=<number of workers>
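For reference, the manager-side recovery steps above can be collected into one sketch of a shell function. This is only an illustration, not part of the original instructions: the worker count, the compose-file location, and the omission of sudo are assumptions, and each worker must still run "docker swarm leave" and re-join separately.

```shell
#!/bin/sh
# Hedged sketch of the recovery steps above, run on the manager node.
# Assumes docker-compose.yml is in the current directory; workers must
# run `docker swarm leave` and re-join the swarm before scaling.
redeploy_spark_cluster() {
    workers=${1:-2}  # desired number of workers (an assumption)

    # Remove the services left over from the broken deployment
    docker service rm spark_cluster_worker spark_cluster_master

    # Redeploy the stack under the new name "spark"
    docker stack deploy --compose-file=docker-compose.yml spark

    # After the workers have re-joined, scale out the worker service
    docker service scale "spark_worker=$workers"
}
```

Calling redeploy_spark_cluster 4, for example, would remove the old services, redeploy the stack, and scale the worker service to 4 replicas.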

I hope it will solve your problem.

Aalnafessah commented 7 years ago

Thanks for your response. After I typed "docker stack deploy --compose-file=docker-compose.yml spark" I got this message: "Ignoring unsupported options: links"

[screenshot: docker stack deploy output]

I am not sure why!

sopaoglu commented 7 years ago

OK, there is no problem; in swarm mode "docker stack deploy" simply ignores the legacy "links" option, since services on the same stack network can already reach each other by service name. Now, add the worker nodes to the cluster (I have mentioned how to do that in the installation guideline). After that, run the following command:

sudo docker service scale spark_worker=5

If you want to check the state of your cluster, run the following command:

docker service create \
  --name=viz \
  --publish=8081:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer

It provides a web interface for watching the state of your cluster.

Now, open a browser and check:

<manager-node-ip>:8081

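Before opening the browser, one could confirm that the visualizer is actually answering on its published port. This helper is a hypothetical sketch, not from the thread; the host argument is an assumption.

```shell
#!/bin/sh
# Hypothetical helper: probe the visualizer's published port (8081)
# on the given host and report whether it responds.
check_visualizer() {
    host=${1:-localhost}  # manager node address (an assumption)
    if curl -fsS "http://$host:8081/" >/dev/null 2>&1; then
        echo "visualizer up on $host:8081"
    else
        echo "visualizer not reachable on $host:8081"
    fi
}
```
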
Aalnafessah commented 7 years ago

I have done what you said. For each worker node: "docker swarm leave". On the master node: "docker service rm spark_cluster_worker" and "docker service rm spark_cluster_master".

This is what I get after the workers left:

[screenshot]

Then I added two worker nodes to the swarm:

[screenshot]

Then I ran this command to scale the workers to 4:

[screenshot]

This is the final result:

[screenshot]

I get the same error ("invalid mount config for type…"). Any suggestion to overcome the problem?