lightbend / fdp-sample-applications

All sample applications for Fast Data Platform
Apache License 2.0

Unable to deploy killrweather app to DC/OS due to invalid JSON #13

Open laszlovandenhoek opened 5 years ago

laszlovandenhoek commented 5 years ago

I'm currently evaluating FDP. In the process, I have attempted to install the killrweather app as described here: https://developer.lightbend.com/docs/fast-data-platform/1.3.1/user-guide/sample-apps/index.html#fast-data-platform-killrweather

The documentation points to the README.MD in this repository: https://github.com/lightbend/fdp-sample-applications/blob/develop/apps/killrweather/README.md

That file instructs me to deploy the killrweather app using this command: `dcos marathon app add killrweather-app/src/main/resources/killrweatherAppDocker.json`. I'm assuming this refers to https://github.com/lightbend/fdp-sample-applications/blob/develop/apps/killrweather/source/core/killrweather-app/src/main/resources/killrweatherAppDocker.json. I think that's right, because installing the loader worked with a small modification (see below).

However, there are a couple of issues with this JSON file:

Indeed, killrweatherloaderDocker.json specifies a `containers` array with two extensive elements, whereas killrweatherAppDocker.json has only a rather simplistic `container` object.
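For context on that difference: a Marathon *app* definition (which is what killrweatherAppDocker.json looks like) carries a single `container` object, whereas a Marathon *pod* definition (which is what killrweatherloaderDocker.json appears to be, given its `containers` array) is a different schema and is deployed with `dcos marathon pod add` rather than `dcos marathon app add`. A trimmed, illustrative pod sketch with placeholder values (not taken from the repo):

```json
{
  "id": "/example-pod",
  "containers": [
    {
      "name": "main",
      "resources": { "cpus": 0.5, "mem": 256 },
      "image": { "kind": "DOCKER", "id": "example/image:1.0" }
    }
  ]
}
```

So the two JSON files in the repo are not interchangeable, and the error message about invalid structure may come from feeding one schema to the other command.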

I am unfamiliar with DC/OS, so I am unsure how to fix this.
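One DC/OS-independent check that would have caught the original problem is simply parsing the file as JSON before deploying it. A minimal sketch; the inline strings are hypothetical stand-ins for the real file contents, showing the kind of missing-comma error involved here:

```python
import json

# Hypothetical fragments mimicking the broken app definition: a missing
# comma between two keys makes the whole document invalid JSON.
broken = '{ "id": "/killrweatherapp" "cpus": 2 }'
fixed = '{ "id": "/killrweatherapp", "cpus": 2 }'

def is_valid_json(text):
    """Return True if text parses as JSON, False otherwise."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json(broken))  # False
print(is_valid_json(fixed))   # True
```

The same check works on the file itself with `python -m json.tool <file>`, which exits non-zero and prints the offending line and column when the JSON is malformed.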

blublinsky commented 5 years ago

László, thank you for your comment on the missing `","`. After fixing it, the following JSON deploys successfully on our servers:

```json
{
  "id": "/killrweatherapp",
  "backoffFactor": 1.15,
  "backoffSeconds": 1,
  "cmd": "export HADOOP_CONF_DIR=$MESOS_SANDBOX && /opt/spark/dist/bin/spark-submit --master mesos://leader.mesos:5050 --deploy-mode client --conf spark.mesos.executor.docker.image=lightbend/killrweatherapp:1.3.0 --conf spark.mesos.uris=http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/core-site.xml,http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml --conf spark.executor.memory=2g --conf spark.executor.cores=2 --conf spark.cores.max=6 --conf driver.memory=2g --class com.lightbend.killrweather.app.KillrWeather --conf 'spark.driver.extraJavaOptions=-Dconfig.resource=cluster.conf' --conf 'spark.executor.extraJavaOptions=-Dconfig.resource=cluster.conf' local:///opt/spark/jars/killrWeatherApp-assembly-1.3.0.jar",
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "lightbend/fdp-killrweather-app:1.3.0",
      "forcePullImage": true,
      "privileged": false,
      "parameters": []
    }
  },
  "fetch": [
    {
      "uri": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml",
      "extract": true,
      "executable": false,
      "cache": false
    },
    {
      "uri": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/core-site.xml",
      "extract": true,
      "executable": false,
      "cache": false
    }
  ],
  "maxLaunchDelaySeconds": 3600,
  "instances": 1,
  "cpus": 2,
  "mem": 2048,
  "disk": 1024,
  "gpus": 0,
  "networks": [ { "mode": "host" } ],
  "portDefinitions": [],
  "requirePorts": false,
  "upgradeStrategy": {
    "maximumOverCapacity": 1,
    "minimumHealthCapacity": 1
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 0,
    "expungeAfterSeconds": 0
  },
  "environment": {
    "KAFKA_BROKERS": "broker.kafka.l4lb.thisdcos.directory:9092",
    "CASSANDRA_HOSTS": "node-0-server.cassandra.autoip.dcos.thisdcos.directory, node-1-server.cassandra.autoip.dcos.thisdcos.directory, node-2-server.cassandra.autoip.dcos.thisdcos.directory",
    "GRAFANA_HOST": "grafana.marathon.l4lb.thisdcos.directory",
    "GRAFANA_PORT": "3000",
    "INFLUXDB_HOST": "influxdb.marathon.l4lb.thisdcos.directory",
    "INFLUXDB_PORT": "8086"
  }
}
```

I am fixing it in the develop branch.

laszlovandenhoek commented 5 years ago

Hi Boris,

thanks for getting back to me so quickly. I haven't gotten the demo to work yet, due to some hostnames not matching; this might be carelessness on my end, so let me try to work through that myself first.

For the record, the original issue I reported above was caused by using DC/OS 1.12. While the FDP documentation "recommends" 1.11, this demo does not work as-is with later versions. You might want to update the documentation at https://developer.lightbend.com/docs/fast-data-platform/1.3.1/cluster-recommendations/index.html to state that explicitly, or upgrade the code to be forward-compatible.