Closed: MishaZakharchanka closed this pull request 1 year ago.
@mtitov Two things that I wanted to ask about:
And maybe we would like to add more stages, so that setup, compile, and update are not all on the same stage.
@MishaZakharchanka hi Misha, thank you for preparing this PR promptly! I've updated it accordingly, so all three LLNL pipelines will be used as examples for other facilities.
Do we need all of the different jobs for cleanup (i.e., `on_compile_failure`, `on_build_failure`), or would the one at the end be enough?
I would think that it should be one job, since it is about the whole installation of spack, and it wouldn't depend on the status of jobs from each machine (and it is similar to the pip and conda pipelines).
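For example, a minimal sketch of such a single cleanup job (assuming GitLab CI; the job name, stage, and cleanup command below are placeholders, not the actual SDK CI ones):

```yaml
# Sketch only: one cleanup job for the whole spack installation,
# placed in the last stage and run regardless of earlier job status.
spack_cleanup:
  stage: cleanup
  when: always                 # run even if compile/build jobs failed
  script:
    - rm -rf "${SPACK_ROOT}"   # placeholder cleanup command
```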
For the setup and update jobs that only need to run on one system, do we want to keep those on Ruby? A problem I can see with this is that if Ruby goes down, all the tests fail.
Yeah, at this point we choose a specific machine; maybe later we can extend it and run the same job on another machine in case of failure. It would be good to have a corresponding template, similar to the `.<template_job>` you came up with, so we can apply it to all "independent" jobs (including `{spack,pip,conda}_cleanup`).
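As a rough sketch of such a template (assuming GitLab CI; the job names, runner tag, and script are hypothetical), a hidden job could hold the machine choice and a simple retry policy, and each "independent" job would extend it:

```yaml
# Hypothetical hidden template (leading dot means it is not run directly).
.single_machine_job:
  tags:
    - ruby                     # placeholder runner tag for the chosen machine
  retry:
    max: 2
    when:
      - runner_system_failure  # retry only on machine/runner problems

spack_cleanup:
  extends: .single_machine_job
  stage: cleanup
  when: always
  script:
    - ./ci/spack_cleanup.sh    # placeholder script
```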
And maybe we would like to add more stages, so that setup, compile, and update are not all on the same stage.
For now it's ok: the `setup` stage is about one-time processes (in spack it includes `spack_setup` and `spack_env_setup`), and the `build` stage is about SDK packages re-installation (and also includes `spack_update`).
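For illustration, a sketch of that stage layout (assuming GitLab CI; the scripts are placeholders, only the stage and job names follow the comment above):

```yaml
stages:
  - setup   # one-time processes
  - build   # SDK packages re-installation

spack_setup:
  stage: setup
  script:
    - ./ci/spack_setup.sh       # placeholder

spack_env_setup:
  stage: setup
  script:
    - ./ci/spack_env_setup.sh   # placeholder

spack_update:
  stage: build
  script:
    - ./ci/spack_update.sh      # placeholder
```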
P.S. I'll check how this spack pipeline runs and then merge this PR.
This PR refactors the spack CI for LLNL to have the same structure as the conda and pip CI. It doesn't address the issues with the spack CI, just the layout of the YAML file.