Closed stephengreen closed 1 year ago
One question is where the call to GraceDB would take place. I think we should follow the Bilby lead on this, so presumably in `dingo_pipe`?
To me this sounds like a good split of the code into pieces with different functions that are then combined by `dingo_pipe`.
Regarding the call to GraceDB, there does not seem to be a natural place for it in any of the component scripts, so, yes, I agree that it would fit best in `dingo_pipe`.
Much of this is addressed in #129.
I am trying to decide on the new command-line interface for inference. Here is a proposal, and I am open to suggestions.
Simple / low-level scripts
The following commands perform very specific low-level tasks, and an entire inference run would involve a combination of them. Individual scripts are needed for each of these so that jobs can be suitably split over nodes with condor.
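As a rough illustration of that splitting, a condor job for one of the CPU-bound steps might look something like the following. This is only a sketch: the argument and file names are hypothetical placeholders, not the actual dingo CLI.

```
# Hypothetical HTCondor submit file (all flags and file names are
# placeholders; the real dingo interface may differ).
executable   = dingo_importance_sample
arguments    = --input samples_$(Process).hdf5
request_cpus = 16
queue 10
```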
- `dingo_inference` (GPU): produces samples, with `log_prob`.
- `dingo_importance_sample` (CPU): requires `log_prob` to be included. (`phase` must be included among parameters, or one must use marginalization.)
- `dingo_split_result` and `dingo_merge_result`
- `dingo_train_unconditional_model` (GPU)
- `dingo_sample_synthetic_phase` (CPU)

Full inference
We would also have a high-level script for carrying out the complete inference task, similar to what we have now.
- `dingo_pipe`, similar to `bilby_pipe`. Takes either an `.ini` file (typically produced by Asimov) or an equivalent `.yaml` file.

I'm not sure if we want to have any other intermediate scripts. I think the low-level ones are needed for condor, and the high-level one for convenience. The high-level script would use what we consider to be "best practices" for performing inference (e.g., a 3D unconditional flow, not 15D; possibly default settings for this flow), so it would be inherently less flexible than it could be. But with the low-level scripts + class interface we still retain the ability to experiment with new ideas.
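The `dingo_split_result` / `dingo_merge_result` pair is essentially just chunking a set of samples so the CPU-bound importance sampling can be parallelized over condor jobs. A generic sketch of that pattern (not dingo's actual implementation; the function names here are illustrative only):

```python
import math

def split_result(samples, n_chunks):
    """Split samples into roughly equal chunks, one per condor job."""
    size = math.ceil(len(samples) / n_chunks)
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def merge_result(chunks):
    """Recombine processed chunks into a single result."""
    return [s for chunk in chunks for s in chunk]

samples = list(range(100))
chunks = split_result(samples, 10)
assert len(chunks) == 10
assert merge_result(chunks) == samples  # round trip preserves all samples
```

In the real workflow the chunks would be separate result files on disk, with each condor job reading one chunk, attaching importance weights, and writing it back before the merge step.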