It looks like starting in Spark 3.1 (?) calling `pyspark` will specifically look for a `python3` executable. Flintrock should ensure this executable is available on the cluster.

It's clear that the `FlintrockService` abstraction needs to be reused to capture things like Java and Python configuration, instead of doing that in a haphazard way. I'll explore that in a future PR (unless a contributor happens to take interest in this issue).
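As a rough sketch, the per-node check could look something like the following. This is not Flintrock's actual setup logic; it assumes a yum-based image such as Amazon Linux 2, and the `python3` package name is an assumption that would need to be verified per distro.

```shell
# Hedged sketch: make sure a `python3` executable exists before Spark's
# pyspark launcher goes looking for one. Assumes a yum-based image
# (e.g. Amazon Linux 2); the package name `python3` is an assumption.
if ! command -v python3 >/dev/null 2>&1; then
    sudo yum install -y python3
fi

# Confirm the executable pyspark will look for is now on the PATH.
command -v python3
```

Flintrock would presumably run something like this over SSH on each node during cluster setup, alongside its existing Java installation step.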