nikola-rados opened this issue 4 years ago
Do you have a recommended course of action? The docs for subprocess.call() mention that it's an old API and that subprocess.run() should be used instead.
Generally, using the shell=True argument preserves the caller's environment (which presumably would allow the callee to find the netCDF4 module if it's available to the caller).
I don't have a recommended course of action, but your suggestion of shell=True may do it. I think that's probably the best place to start.
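As a sketch of the suggestion above: rather than relying on shell=True, the subprocess.call() usages could be migrated to subprocess.run() with the caller's environment passed explicitly, which also makes the inherited environment visible and tweakable in the test. The helper name here is hypothetical, not from the repository:

```python
import os
import subprocess
import sys

def run_script(args):
    """Run a command with subprocess.run(), inheriting the caller's environment.

    Copying os.environ and passing it via env= ensures the child process sees
    the same variables as the test runner, without needing shell=True.
    """
    env = os.environ.copy()
    # Prepend the current interpreter's directory to PATH so the child
    # resolves the same Python (and its installed packages, e.g. netCDF4).
    env["PATH"] = os.path.dirname(sys.executable) + os.pathsep + env.get("PATH", "")
    result = subprocess.run(args, env=env, capture_output=True, text=True)
    return result.returncode

# Example: invoke a trivial script through the same interpreter.
run_script([sys.executable, "-c", "print('ok')"])
```

Whether this resolves the Jenkins import failure depends on how its workers construct the environment; it may still be necessary to set PYTHONPATH in env as well.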
Alternately, we could consider moving decompose_flow_vectors.py to a different repository. It was written to generate versions of Hydrology's flow vector files that could be rendered by ncWMS as maps with arrows pointing from grid cell to grid cell where water flows. It hasn't become part of our primary data workflow, and might fit better in data-prep-actions with the rest of the standalone data processing scripts.
That's an interesting idea; I was not aware of the script's origins. If you are suggesting a move to data-prep-actions, does that mean the script really only needs to be documented and not tested?
decompose_flow_vectors predates Rod's excellent idea to establish data-prep-actions. Accordingly, it is more polished than usual for that repository, with tests and installation guides. Its usage is limited, like the data prep actions scripts, but it is polished and tested, like the climate explorer data prep scripts. It could probably reasonably go in either location.
In tests/test_decompose_flow_vectors.py, subprocess.call() is used to run other scripts. This is causing issues in the Jenkins pipeline. In particular, Jenkins cannot find the netCDF4 module once it opens a subprocess. If we could find another way to test the scripts, we can introduce them into the pipeline. Until then they will be excluded.
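One way to sidestep the subprocess environment problem entirely is to run the script in-process with the standard-library runpy module: the test then executes in the same interpreter that collected the tests, so netCDF4 resolves exactly as it does for the test runner. This is a sketch, not the repository's actual test code:

```python
import runpy
import sys
from unittest import mock

def run_in_process(script_path, argv):
    """Execute a Python script in the current interpreter with the given argv.

    Patching sys.argv lets the script parse its command line as if it had
    been launched from a shell; runpy.run_path returns the script's globals,
    which the test can inspect.
    """
    with mock.patch.object(sys, "argv", [script_path] + argv):
        return runpy.run_path(script_path, run_name="__main__")
```

The trade-off is that the script shares the test process's interpreter state (imports, working directory), so tests should clean up any global side effects.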