This is an initial version of dingo_pipe, which is for running Dingo on clusters according to a provided .ini file. The code is heavily based on bilby_pipe.
Main changes
New scripts
dingo_pipe: Main script, which takes a .ini settings file and creates a DAG that calls the other scripts in order.
One must specify the Dingo model (and, if necessary, an initialization model) in the settings file.
In contrast to bilby_pipe, most of the settings for data preparation, detectors, etc., are taken from the Dingo model configuration settings.
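A settings file might look like the following. The model keys and request-cpus-importance-sampling are described in this document; the remaining key names follow bilby_pipe conventions and are illustrative, so the exact spelling may differ (see dingo_pipe --help for the full list):

```ini
; Hypothetical sketch of a dingo_pipe settings file. Only the model keys and
; request-cpus-importance-sampling are taken from this document; other key
; names follow bilby_pipe conventions and may differ.
label = GW150914
outdir = outdir_GW150914

; Trained Dingo model (and optional initialization model)
model = /path/to/dingo_model.pt
init-model = /path/to/init_model.pt

; Event to analyze
trigger-time = 1126259462.4

; Number of parallel processes for importance sampling
request-cpus-importance-sampling = 16
```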
dingo_pipe_generation: Downloads and prepares the strain data and PSDs, and saves them as an EventDataset HDF5 file.
Note that the PSD estimation now uses Bilby, which in turn uses gwpy. This gives a slightly different PSD than pycbc, which is used in the main Dingo code. As a consequence, the log noise evidence (and hence the log evidence) can differ slightly.
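The sensitivity of the log noise evidence to the PSD estimator can be illustrated without gwpy or pycbc: the sketch below (not the actual bilby/pycbc code paths) runs two Welch-style estimates on the same noise segment that differ only in how periodograms are averaged, analogous to the small discrepancies between the two libraries.

```python
import numpy as np
from scipy import signal

# White noise segment at 4096 Hz, as a stand-in for detector strain.
rng = np.random.default_rng(42)
fs = 4096
strain = rng.standard_normal(fs * 64)

# Two Welch-style estimates of the same data, differing only in how the
# per-segment periodograms are averaged (mean vs bias-corrected median).
f, psd_mean = signal.welch(strain, fs=fs, nperseg=4 * fs, average="mean")
f, psd_median = signal.welch(strain, fs=fs, nperseg=4 * fs, average="median")

# The estimates agree broadly but not exactly; any such difference
# propagates into the log noise evidence.
rel_diff = np.abs(psd_mean - psd_median) / psd_mean
print(f"max relative PSD difference: {rel_diff.max():.3f}")
```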
dingo_pipe_sampling: Draws samples from a Dingo model using a provided EventDataset as context. Saves the Result in HDF5 format.
dingo_pipe_importance_sampling: Performs importance sampling based on a provided EventDataset and proposal Result.
Set the number of processes using the request_cpus_importance_sampling key.
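The core of the importance-sampling step can be sketched generically (this is a toy 1D illustration, not Dingo's implementation): weight each proposal sample by target density over proposal density, then summarize the weights via the effective sample size and a log evidence estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    """Log density of a 1D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Proposal samples (standing in for draws from the Dingo network) and a
# slightly different target (standing in for likelihood times prior).
n = 100_000
samples = rng.normal(0.0, 1.1, size=n)
log_w = log_gauss(samples, 0.1, 1.0) - log_gauss(samples, 0.0, 1.1)

# Evidence estimate: log of the mean unnormalized weight.
log_evidence = np.log(np.mean(np.exp(log_w)))

# Normalized weights, effective sample size, and sample efficiency.
w = np.exp(log_w - log_w.max())
w /= w.sum()
n_eff = 1.0 / np.sum(w**2)
efficiency = n_eff / n
print(f"sample efficiency: {efficiency:.2f}")
```

A high efficiency indicates the proposal (here, the network posterior) is close to the target; a low efficiency signals that the weighted samples carry little independent information.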
Other changes
There is a new EventDataset class, which stores ASDs and frequency-domain strain in a way suitable for use with a Dingo model.
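In spirit, the class bundles per-detector frequency-domain strain and ASDs on a common frequency grid. The following is a hypothetical minimal version for illustration only; the real class additionally handles HDF5 I/O and settings metadata, and its attribute names may differ.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class EventDataset:
    """Hypothetical minimal sketch of an EventDataset: per-detector
    frequency-domain strain and ASDs on a shared frequency grid, as
    needed to condition a Dingo model."""

    frequencies: np.ndarray                      # common frequency grid [Hz]
    strain: dict = field(default_factory=dict)   # detector -> complex FD strain
    asds: dict = field(default_factory=dict)     # detector -> real ASD

    def whitened_strain(self, ifo):
        """Strain divided by the ASD, a typical preprocessing step."""
        return self.strain[ifo] / self.asds[ifo]


# Usage with dummy data:
f = np.linspace(20.0, 1024.0, 1000)
ds = EventDataset(
    frequencies=f,
    strain={"H1": np.ones(1000, dtype=complex)},
    asds={"H1": np.full(1000, 2.0)},
)
```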
Comments
I am preparing the code by starting from commented bilby_pipe code, and carefully uncommenting features and building new ones. This way most of the bilby_pipe functionality remains the same, but it gets adapted as necessary for Dingo. The large number of keys can be listed by calling
dingo_pipe --help
If a new feature is desired, a good way to start is to find the relevant lines and uncomment them.
Still to-do
Ultimately the .ini file should be generated by Asimov. This needs to be set up.
Features needed:
Load data from frame files; set up injections.
Enable parallelization of importance sampling across jobs.
Allow for changes to settings (data conditioning, prior, etc.) during importance sampling.
Enable detector calibration marginalization.
Generate pesummary pages.