Oversight on my part: `add-slaves` should not have (or need) the ability to specify a Java version. It should source it from the cluster manifest, guaranteeing that all added slaves have the same configuration as the existing cluster.
This matches how `add-slaves` already picks the appropriate versions of Hadoop and Spark to install on the new nodes.
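A minimal sketch of the intended behavior, assuming the manifest is stored as JSON; the `java_version` key and the manifest shape here are hypothetical names for illustration, not the actual manifest schema:

```python
import json

def java_version_from_manifest(manifest_json: str) -> str:
    """Read the Java version recorded in the cluster manifest so that
    newly added slaves match the existing cluster, rather than accepting
    a version from the user. 'java_version' is a hypothetical key name."""
    manifest = json.loads(manifest_json)
    return manifest["java_version"]

# Example manifest as it might look on an existing cluster:
example = json.dumps({"java_version": "8", "spark_version": "2.4.0"})
print(java_version_from_manifest(example))  # prints "8"
```

With this, `add-slaves` has one less knob, and the manifest remains the single source of truth for cluster configuration.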
Potential future improvement: Make Java a "service" like Hadoop and Spark, and reuse the cluster provisioning abstractions already built for them. This will require a way to express service dependencies (e.g. Spark depends on Java) so that services get installed in the correct order.
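If services declared their dependencies, the install order could be derived with a topological sort. A sketch using the standard library; the service names and dependency mapping here are illustrative, not an existing API:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each service maps to the set of
# services it depends on. Spark and Hadoop both depend on Java.
deps = {
    "spark": {"java"},
    "hadoop": {"java"},
    "java": set(),
}

# static_order() yields services so that dependencies come first,
# so "java" is always installed before "spark" and "hadoop".
order = list(TopologicalSorter(deps).static_order())
print(order)
```

`graphlib` ships with Python 3.9+, so no extra dependency would be needed for the ordering itself.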
Follow-up to #316.