A single QIIME 2 analysis can be performed across many different QIIME 2 deployments (in multiple computing environments). Reproducibility is challenging in this context, and automated corroboration may not be possible. Tools for automating the management of QIIME 2 environments during replay are likely complex and expensive to build, but might be worthwhile at some point. One approach could look like this:
[ ] replay produces a requirements spec for each QIIME 2 environment, using unique ids as environment names
[ ] replay generates environments from these specs (maybe with help from a library-provided API?)
[ ] replay includes conda (or Docker) environment activation and deactivation commands in its output scripts, so that the correct environment is used for each action
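The steps above could be sketched roughly as follows. This is a minimal illustration, not QIIME 2 API: the names `EnvSpec` and `write_replay_script` are hypothetical, and the package pins are placeholders. It shows one way to tie a unique environment id to a conda requirements spec and then wrap each replayed command in activate/deactivate lines.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class EnvSpec:
    """Hypothetical requirements spec for one QIIME 2 environment."""
    packages: tuple  # e.g. ("qiime2=2023.5", "q2-demux=2023.5") -- placeholder pins
    # Unique id doubles as the conda env name, so replay scripts can
    # unambiguously activate the right environment for each action.
    env_id: str = field(
        default_factory=lambda: f"replay-env-{uuid.uuid4().hex[:8]}")

    def to_conda_yaml(self) -> str:
        """Render a conda environment.yml for this spec."""
        lines = [f"name: {self.env_id}", "channels:",
                 "  - conda-forge", "dependencies:"]
        lines += [f"  - {pkg}" for pkg in self.packages]
        return "\n".join(lines) + "\n"


def write_replay_script(actions: list) -> str:
    """Emit a shell script wrapping each action in env (de)activation.

    actions: list of (EnvSpec, shell_command_string) pairs.
    """
    chunks = ["#!/usr/bin/env bash", "set -e", ""]
    for spec, cmd in actions:
        chunks += [f"conda activate {spec.env_id}", cmd, "conda deactivate", ""]
    return "\n".join(chunks)


# Example: one environment, one replayed action.
spec = EnvSpec(packages=("qiime2=2023.5", "q2-demux=2023.5"))
env_yaml = spec.to_conda_yaml()
script = write_replay_script([(spec, "qiime demux summarize --help")])
```

A real implementation would derive `packages` from provenance metadata rather than hard-coding them, and would need to handle multiple specs that differ only in package versions (which is exactly why the unique ids matter).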
As a bonus, this could decouple replay from the PluginManager, which might make replay easier or more effective in environments without QIIME 2 deployments (e.g. q2view).