big-data-europe / docker-hadoop-spark-workbench

[EXPERIMENTAL] This repo includes deployment instructions for running HDFS/Spark inside docker containers. Also includes spark-notebook and HDFS FileBrowser.

Failed to connect to namenode:8020 #52

Open kkalugerov opened 6 years ago

kkalugerov commented 6 years ago

I used this repo to set up a Docker Swarm cluster, followed the instructions in the swarm directory step by step, and also modified the Makefile in the main directory as follows:

```makefile
get-example:
	if [ ! -f example/SparkWriteApplication.jar ]; then \
		wget -O example/SparkWriteApplication.jar https://www.dropbox.com/s/7dn0horm8ocbu0p/SparkWriteApplication.jar ; \
	fi

example: get-example
	docker run --rm -it --network workbench --env-file ./swarm/hadoop.env \
		-e SPARK_MASTER=spark://spark-master:7077 \
		--volume $(shell pwd)/example:/example \
		bde2020/spark-base:2.2.0-hadoop2.8-hive-java8 \
		/spark/bin/spark-submit --master spark://spark-master:7077 /example/SparkWriteApplication.jar
	docker exec -it namenode hadoop fs -cat /tmp/numbers-as-text/part-00000
```

When I execute `make example`, it reads the file and then throws an exception:

```
Failed to connect to server: namenode/10.0.0.102:8020: try once and fail.
java.net.ConnectException: Connection refused
```

I have opened all of the necessary ports and still cannot connect to the namenode. Any suggestions?

earthquakesan commented 6 years ago

Hi!

The application from the root Makefile is not meant for swarm. To test your setup, run one of the examples shipped with the Spark distribution instead.

Also, from what I can see in your output, the namenode exposes port 9000 by default, not 8020. Did you change the port?
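The port mismatch can be checked straight from the configuration the client is supposed to load. A minimal sketch, assuming `swarm/hadoop.env` follows the usual bde2020 convention of setting `fs.defaultFS` through a `CORE_CONF_fs_defaultFS` entry (the exact value below is an assumption, not copied from the repo):

```shell
# Assumption: swarm/hadoop.env contains a line such as
#   CORE_CONF_fs_defaultFS=hdfs://namenode:9000
# Here we hard-code that hypothetical value for illustration.
FS_DEFAULTFS="hdfs://namenode:9000"

# Strip everything up to the last ':' to recover the RPC port
# that HDFS clients must use.
port="${FS_DEFAULTFS##*:}"
echo "namenode RPC port: $port"
```

If this prints 9000 while the stack trace shows the client dialing 8020, the client container is not picking up `hadoop.env`, or the application has a hard-coded `fs.defaultFS`.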