Benchmarking-Initiative / Benchmark-Models-PEtab

A collection of mathematical models with experimental data in the PEtab format, provided as benchmark problems for evaluating new and existing methodologies for data-based modelling.
BSD 3-Clause "New" or "Revised" License

Add Laske et al. (2019) #83

Closed · JanHasenauer closed this issue 2 years ago

JanHasenauer commented 4 years ago

https://journals.plos.org/ploscompbiol/article?rev=1&id=10.1371/journal.pcbi.1006944

Complexity estimate: medium

lcontento commented 3 years ago

The model as published is clearly wrong: one of its observables is a percentage, yet it can exceed 100%. Trying to correct it results in different parameter estimates (mostly similar, except for one parameter that changes by a factor of 10, though it remains in a reasonable range judging from the literature). Question: in such cases, what is the policy for inclusion in the benchmark collection? Include the original model, the corrected one, or both? I would go for including only the original one, but I wanted to know whether there are precedents and/or what the general opinion is.
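
To make the concern concrete, here is a minimal sketch of how a percentage observable can exceed 100% when it is computed against a fixed reference rather than the current total. The function names and numbers are purely illustrative and are not taken from Laske et al. (2019) or the benchmark problem itself:

```python
# Hypothetical illustration only; names and values are made up and do not
# reproduce the actual observable from Laske et al. (2019).

def percentage_vs_fixed_reference(part, initial_total):
    """Percentage relative to a fixed initial amount.

    Nothing constrains `part` to stay below `initial_total`, so this
    'percentage' can exceed 100%, which is the kind of inconsistency
    described above.
    """
    return 100.0 * part / initial_total


def percentage_vs_current_total(part, current_total):
    """Percentage relative to the current total; bounded to [0, 100]
    by construction as long as `part <= current_total`."""
    return 100.0 * part / current_total


print(percentage_vs_fixed_reference(1.2e6, 1.0e6))  # 120.0 -> not a valid percentage
print(percentage_vs_current_total(1.2e6, 1.5e6))    # 80.0
```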

JanHasenauer commented 3 years ago

Thanks for letting us know.

The question is obviously difficult, because

  1. we do not want to spread wrong models, and
  2. we want to include only published models here.

In my opinion, the best strategy would be to make the authors aware of the problem and hope that they respond and update it. We could then include the updated model.

lcontento commented 3 years ago

They don't publish their code (they use the same published model from another paper, but have their own mappings from the model to their measurements), so even if we notify them they would have nothing to update. I guess they are already aware that what they used is theoretically wrong and is only to be considered an approximation.
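
For context, in PEtab such paper-specific mappings from model quantities to measurements are expressed in the observables table. The sketch below builds a generic observables table with pandas; the observable ID, formula, and noise parameter are invented placeholders, not the actual mapping used by Laske et al.:

```python
import pandas as pd

# Generic PEtab-style observables table (hypothetical entries, not the
# Laske et al. mapping). PEtab requires at least the observableId,
# observableFormula and noiseFormula columns, stored as a TSV file.
observables = pd.DataFrame(
    {
        "observableId": ["obs_infected_percent"],
        # Mapping from model species to the measured quantity; here a
        # made-up normalized percentage.
        "observableFormula": ["100 * infected_cells / total_cells"],
        # Measurement noise model, e.g. an estimated additive sigma.
        "noiseFormula": ["sigma_infected_percent"],
    }
).set_index("observableId")

observables.to_csv("observables.tsv", sep="\t")
print(observables)
```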

dilpath commented 2 years ago

Closed in #127.