I found this case study and the corresponding documentation in the "Get started" page of simChef. This confirms my earlier impression of what simChef is and does.
I also found an answer to my question in the vignette on computing experimental replicates in parallel. I'll create a separate issue to follow up on this.
Closing this issue since I found answers to my own questions :)
@rcannood we'd love to see simChef used in conjunction with cool packages like dyngen!
Hi @tiffanymtang @jpdunc23 @PhilBoileau !
I'm still in the process of reviewing the JOSS submission in https://github.com/openjournals/joss-reviews/issues/6156. One of the things I'm struggling to understand is what simChef's core functionality is. Is simChef designed to run benchmarking experiments built from simulation models, methods, and metrics? In other words, is the core functionality not the DGPs, Methods, and Evaluators themselves, but rather the glue that allows users to run and visualise such a benchmark?
Just so I can understand it better: would dyngen be what you consider a DGP? Looking at Supp. Fig. 3 (see below), would AUROC and AUPR then be the Evaluators, and would SSN, LIONESS, and PySCENIC be considered Methods?
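To check my understanding, here is a rough sketch of how I picture that mapping in simChef, based on my reading of the "Get started" page (please correct me if I'm misusing the API). `run_dyngen_simulation()`, `run_pyscenic()`, and `compute_auroc()` are hypothetical placeholders for the actual dyngen, PySCENIC, and scoring calls:

```r
library(simChef)

# DGP: wrap dyngen so each call simulates one dataset plus its ground-truth
# network. run_dyngen_simulation() is a placeholder.
dyngen_dgp <- create_dgp(
  .dgp_fun = function(...) {
    sim <- run_dyngen_simulation(...)
    list(expression = sim$expression, true_network = sim$network)
  },
  .name = "dyngen"
)

# Method: wrap a network inference method (PySCENIC as an example).
# run_pyscenic() is a placeholder.
pyscenic_method <- create_method(
  .method_fun = function(expression, true_network, ...) {
    list(
      predicted_network = run_pyscenic(expression),
      true_network = true_network
    )
  },
  .name = "PySCENIC"
)

# Evaluator: score each fit, e.g. AUROC of predicted vs. true edges.
# compute_auroc() is a placeholder.
auroc_evaluator <- create_evaluator(
  .eval_fun = function(fit_results, ...) {
    fit_results |>
      dplyr::rowwise() |>
      dplyr::mutate(auroc = compute_auroc(predicted_network, true_network)) |>
      dplyr::ungroup()
  },
  .name = "AUROC"
)

# simChef as the "glue": assemble and run the benchmark.
experiment <- create_experiment(name = "grn-benchmark") |>
  add_dgp(dyngen_dgp) |>
  add_method(pyscenic_method) |>
  add_evaluator(auroc_evaluator)

results <- run_experiment(experiment, n_reps = 10)
```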
If this is indeed the case, what would make it easier for me to understand the aim of simChef is a reference to a study where simChef has been used, or a real-life use case showcasing it. I know the manuscript already contains R code showing what simChef can do, but I find it hard to grasp the full picture from that example alone.
Is it only possible to evaluate all of the executions with future? Do I understand correctly that this framework only allows executing code locally, as opposed to on an HPC or other cloud infrastructure? What worries me a little bit is that a benchmark of trajectory inference methods I ran had so many methods and evaluators that it was not feasible to run it on a single computer in a reasonable amount of time.