Open zengqicheng opened 7 years ago
Actually, you've started it successfully; just wait a few minutes for Tomcat to start up, then you can follow the next steps. The failure to launch the Spark worker comes from the script that starts the Spark cluster manually: in this simple Docker image we have only one master node, without any worker nodes. To avoid this error we've modified the script inside the image and rebuilt it, so you can pull the image again and run it, and this failure will no longer appear. Thanks for your questions, and we hope you enjoy it.
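For context, a minimal sketch of what the fixed container startup could look like: instead of `start-all.sh` (which also tries to spawn workers via SSH to `localhost`), the script starts only the master. This assumes a Spark install at `/usr/local/spark`, which matches the log paths in this issue; the exact script contents in the image may differ.

```
#!/bin/sh
# Start only the Spark master; this single-node image has no workers,
# so start-all.sh would fail while trying to launch a worker.
/usr/local/spark/sbin/start-master.sh

# (Previously something like the following was used, which triggered
#  the "failed to launch org.apache.spark.deploy.worker.Worker" error:)
# /usr/local/spark/sbin/start-all.sh
```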
```
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark--org.apache.spark.deploy.master.Master-1-sandbox.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-sandbox.out
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost:   at java.lang.ClassLoader.loadClass(libgcj.so.10)
localhost:   at gnu.java.lang.MainThread.run(libgcj.so.10)
localhost: full log in /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-sandbox.out
about to fork child process, waiting until server is ready for connections.
```