skonto closed this pull request 5 years ago
@EronWright I updated the PR. Will test and report back.
@EronWright fixed. Here is a guide on how to run an example with the default settings: https://gist.github.com/skonto/c8404d43142070dca860dda431459887 I will add an example to the example repos.
@EronWright is this good to go? I would like to include it in the Flink 1.3.2 release of the package.
@EronWright @joerg84 how do you want me to proceed?
@EronWright @joerg84 gentle ping...
Are we planning to merge this? It would be useful for our team as well.
@joerg84 gentle ping.
@joerg84 gentle ping...
closing this.
Fixes #7, relates to https://github.com/mesosphere/universe/pull/1163
The mesosphere image needs to be built and pushed; I don't have access. This assumes HDFS is available for the default case (any proper setup should have HA enabled by default) and that the storage dir (hdfs://hdfs/flink/recovery) has already been created. To create the dir, download the appropriate HDFS distribution within your DC/OS cluster and execute:
dcos hdfs endpoints core-site.xml
dcos hdfs endpoints hdfs-site.xml
to download the required config files and override the defaults within the distribution, so that you can then issue commands like:
hadoop fs -mkdir -p ...
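A minimal sketch of that sequence, assuming the dcos CLI prints the XML contents to stdout and that a Hadoop distribution is already on the PATH of a node inside the cluster (the hadoop-conf directory name is an assumption, not part of this PR):
# Fetch the client configuration from the DC/OS HDFS service.
dcos hdfs endpoints core-site.xml > core-site.xml
dcos hdfs endpoints hdfs-site.xml > hdfs-site.xml
# Point the Hadoop client at these files instead of the distribution defaults.
mkdir -p hadoop-conf && mv core-site.xml hdfs-site.xml hadoop-conf/
export HADOOP_CONF_DIR=$PWD/hadoop-conf
# Create the HA storage dir expected by the default configuration.
hadoop fs -mkdir -p hdfs://hdfs/flink/recovery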
To test it after you build the image under your own account, launch Flink with HA and run the following command inside a container of the Flink docker image:
./bin/flink run -z /default ./examples/batch/WordCount.jar --input /etc/resolv.conf
You also need to add the following settings to flink-conf.yaml under the /flink-1.2.0 folder inside the container:
high-availability: zookeeper
high-availability.zookeeper.quorum: master.mesos:2181
high-availability.zookeeper.storageDir: hdfs://hdfs/flink/recovery
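One way to apply those settings from inside the container, assuming the Flink distribution keeps its config under a conf/ subfolder (the exact path may differ in the image):
# Append the HA settings to the Flink configuration.
cat >> /flink-1.2.0/conf/flink-conf.yaml <<'EOF'
high-availability: zookeeper
high-availability.zookeeper.quorum: master.mesos:2181
high-availability.zookeeper.storageDir: hdfs://hdfs/flink/recovery
EOF
After that, the ./bin/flink run -z /default command above should pick up the ZooKeeper-based HA configuration.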