Benchmarking-Initiative / Benchmark-Models-PEtab

A collection of mathematical models with experimental data in the PEtab format, provided as benchmark problems to evaluate new and existing methodologies for data-based modelling
BSD 3-Clause "New" or "Revised" License

Smith_BMCSystBiol2013 #180

Closed · FFroehlich closed 1 year ago

FFroehlich commented 1 year ago

fixes #177

Draft implementation. Includes scripts to generate the PEtab files. Still requires validation against simulation experiments (simulation results are specified in the supplement).

Requires clarification on whether the PEtab condition table should provide concentrations or amounts.

FFroehlich commented 1 year ago

Validated simulations against the references provided in the supplementary material (available in the sim_test directory) using AMICI (requires a fix for PEtab import). Agreement is generally satisfying (rtol=1e-1/1e-2, atol=1e-3). The only notable differences are in the simulations for figure 2H, in species related to SOD2/InR transcription. It looks like the transcription rates are off by factors of 0.8 and 0.4, respectively, and the translation/degradation rate is off by a factor of 0.9 for InR. It is unclear why; likely there were some changes to the model itself, but that is impossible to track down now, since these rates are implemented as local parameters and their values are thus not reported in the simulation files. I therefore decided to keep this as is, as the impact on the remainder of the model seems negligible.
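For reference, the tolerance check can be expressed along these lines. This is only an illustrative sketch: the file names and column layout below are placeholders, not the actual contents of sim_test.

```python
import numpy as np
import pandas as pd

# Placeholder file names; the actual reference trajectories live in sim_test/.
reference = pd.read_csv("sim_test/reference_simulation.tsv", sep="\t")
simulated = pd.read_csv("sim_test/amici_simulation.tsv", sep="\t")

# Tolerances quoted above: rtol=1e-1 (or 1e-2), atol=1e-3.
agree = np.isclose(
    simulated.to_numpy(dtype=float),
    reference.to_numpy(dtype=float),
    rtol=1e-2,
    atol=1e-3,
)
print(f"{agree.mean():.1%} of simulated values within tolerance")
```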

dilpath commented 1 year ago

Please also update the README, e.g. by running `python overview.py --markdown` and copying the output there.

https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab/blob/master/scripts/overview.py

FFroehlich commented 1 year ago

> Please also update the README, e.g. by running `python overview.py --markdown` and copying the output there.
>
> https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab/blob/master/scripts/overview.py

done

dilpath commented 1 year ago

Thanks for the changes! Did you see these points? https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab/pull/180#pullrequestreview-1331830959

e.g. if the second PEtab is for validation, is it possible to simplify the second PEtab problem to just be extra measurement file(s) and PEtab YAML files, like done for Fujita? https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab/tree/master/Benchmark-Models/Fujita_SciSignal2010/test_data

I guess this makes validation easiest, since then e.g. users' workflows would just involve replacing the measurements, rather than compiling a second model.
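For illustration, a minimal sketch of what such a stripped-down validation problem could look like, generated here via PyYAML. All file names below are assumptions loosely following the Fujita test_data layout, not actual files in this PR.

```python
import yaml  # PyYAML

# Hypothetical layout: the second problem reuses the shared model, condition,
# observable, and parameter files, and only swaps in a validation measurement file.
validation_problem = {
    "format_version": 1,
    "parameter_file": "../parameters_Smith_BMCSystBiol2013.tsv",
    "problems": [
        {
            "sbml_files": ["../model_Smith_BMCSystBiol2013.xml"],
            "condition_files": ["../experimentalCondition_Smith_BMCSystBiol2013.tsv"],
            "observable_files": ["../observables_Smith_BMCSystBiol2013.tsv"],
            # Only the measurement file differs from the estimation problem:
            "measurement_files": ["measurementData_validation.tsv"],
        }
    ],
}

# Placeholder output path.
with open("Smith_BMCSystBiol2013_validation.yaml", "w") as f:
    yaml.safe_dump(validation_problem, f, sort_keys=False)
```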

FFroehlich commented 1 year ago

Sorry, I missed this comment.

> Overall, thanks a lot, clearly a lot of work to curate this!

> • Currently, this is larger in filesize than the Froehlich_CellSystems2018 model. The simulation file is 39MB. Can this be reduced by simulating fewer timepoints, or do you think the current state is best?

The simulation files were actually not simulated by me; they are files from the original publication that were used for visualization.

> • Regarding the sim_test
>
>   • Why are there two PEtab problems, and what are the differences between them?

The problems also differ in their conditions (additional conditions and additional columns).

> • Rename `measurement` to `simulation` in the simulation file, and rename the file to e.g. `simulatedData_Smith_BMCSystBiol2013.tsv`

Hmm, but those are not actual simulations from the PEtab model, so this might be confusing. I could also provide additional simulation results files.

> • Could you provide a visualization PEtab file?

Will try.

> Thanks for providing some tips on working with this model. Currently, there is no expected way to share this information, but putting such things in some README.md would be great, if you think it would be useful (completely optional).

Okay, I will add a README file with some additional information.

FFroehlich commented 1 year ago

> Thanks for the changes! Did you see these points? #180 (review)
>
> e.g. if the second PEtab is for validation, is it possible to simplify the second PEtab problem to just be extra measurement file(s) and PEtab YAML files, like done for Fujita? https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab/tree/master/Benchmark-Models/Fujita_SciSignal2010/test_data
>
> I guess this makes validation easiest, since then e.g. users' workflows would just involve replacing the measurements, rather than compiling a second model.

I can try to simplify; the main limitation was that there are no simple routines for writing a PEtab problem to files when some of the files already exist (but maybe I should just overwrite them).
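For example, something along these lines, using the petab library; the paths are placeholders, and the point is that writing an individual table simply overwrites the target file if it already exists:

```python
import petab

# Load the existing (estimation) problem; path is a placeholder.
problem = petab.Problem.from_yaml("Smith_BMCSystBiol2013.yaml")

# write_measurement_df overwrites the target file if present, so only the
# table that differs needs to be re-emitted for the second problem.
petab.write_measurement_df(
    problem.measurement_df, "sim_test/measurementData_validation.tsv"
)
```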

dilpath commented 1 year ago

Thanks, as the second PEtab problem is to validate this implementation against the original implementation, I think what you've done already is sufficient. Visualization file(s) and a short readme would be great!

It's clear from the script that you use the same model file in both PEtab problems, but please include the similarity/difference as a note in the readme.

FFroehlich commented 1 year ago

> Thanks, as the second PEtab problem is to validate this implementation against the original implementation, I think what you've done already is sufficient. Visualization file(s) and a short readme would be great!
>
> It's clear from the script that you use the same model file in both PEtab problems, but please include the similarity/difference as a note in the readme.

both done

FFroehlich commented 1 year ago

Looks like the poetry/pre-commit hooks are broken, but I don't think this is an issue on my end.