Closed: tmckayus closed this 6 years ago
rebased on the metrics change ...
@mattf, fyi, still investigating, but follow-up suggests that certain other signals (for example, HUP and QUIT) should be trapped in a separate handler and simply passed through to the spark process running in the background, so that spark can react the way it does natively.
HUP, for instance, will cause spark to exit, so when we forward it, spark in our case should exit as well; that will make the launch.sh script end. In the case of the s2i scripts, it will cause spark-submit to return (I think), which will effectively end the app.
As it is here, we've backgrounded spark but haven't passed any other signals through, so we've effectively screened them out.
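For illustration, a minimal sketch of what that pass-through could look like in launch.sh; the spark start command, the paths, and the variable name are assumptions, not the actual script:

```bash
#!/bin/bash
# Start spark in the background and record its PID (assumed command).
/opt/spark/bin/spark-class org.apache.spark.deploy.master.Master &
SPARK_PID=$!

# Forward HUP and QUIT to the backgrounded spark process so it can
# react natively; if spark exits in response (as with HUP), the wait
# loop below falls through and launch.sh ends.
trap 'kill -HUP "$SPARK_PID" 2>/dev/null' HUP
trap 'kill -QUIT "$SPARK_PID" 2>/dev/null' QUIT

# wait returns early whenever a trapped signal arrives, so loop until
# the spark process is actually gone.
while kill -0 "$SPARK_PID" 2>/dev/null; do
    wait "$SPARK_PID"
done
```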
This PR should be closed in favor of https://github.com/radanalyticsio/openshift-spark/pull/37
Since the final action in launch.sh is to exec the spark script, we do not need to run the spark script in the background and wait with a signal handler. Bash or spark itself will receive and handle signals as long as tini/launch.sh are invoked correctly as above.
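A minimal sketch of the exec approach; the entrypoint chain and the spark command are assumptions about how the image is wired up:

```bash
#!/bin/bash
# Assumes the image entrypoint is something like
#   ENTRYPOINT ["/usr/bin/tini", "--", "/launch.sh"]
# so tini forwards signals to this script. Because we end with exec,
# spark replaces the shell process entirely and receives those signals
# directly, with no background job or trap needed here.

# ... configuration/setup steps would run here ...

exec /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master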
To terminate spark pods quickly, we need signal handlers for SIGTERM and SIGINT. Additionally, we should run the spark processes in the background and use "wait", recording the PID so that we can issue an immediate kill to spark (sketched below).
This is the same strategy we use in the driver.
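A sketch of that handler pattern, assuming a hypothetical spark start command; the real scripts may differ in the details:

```bash
#!/bin/bash
# On SIGTERM/SIGINT, kill spark immediately so the pod shuts down
# right away instead of running out the Kubernetes grace period.
function handle_term() {
    kill "$SPARK_PID" 2>/dev/null
    exit 0
}
trap handle_term TERM INT

# Run spark in the background and record its PID (assumed command).
/opt/spark/bin/spark-class org.apache.spark.deploy.master.Master &
SPARK_PID=$!

# wait is interruptible by trapped signals, so the handler above runs
# as soon as SIGTERM or SIGINT arrives.
wait "$SPARK_PID"
```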