Closed by JanHasenauer 2 years ago
The model as published is clearly wrong: one of its observables is a percentage, yet it can exceed 100%. Trying to correct it results in different parameter estimates (mostly similar, except for one parameter that changes by a factor of 10, though it still lies in the range considered reasonable in the literature). Question: in such cases, what is the policy for inclusion in the benchmark collection? Include the original model, the corrected one, or both? I would go for including only the original one, but I wanted to know whether there are precedents and/or what the general opinion is.
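To make the reported problem concrete, here is a minimal sketch of how a percentage observable can be unbounded. All names and formulas below are hypothetical illustrations, not the actual observable mapping from the paper: a "percentage" defined relative to a baseline can exceed 100%, whereas a fraction-of-total mapping is bounded by construction.

```python
def percent_of_baseline(x, x0):
    """Unbounded mapping: exceeds 100% whenever x > x0."""
    return 100.0 * x / x0

def percent_of_total(x, total):
    """Bounded mapping: stays within [0, 100] for 0 <= x <= total."""
    return 100.0 * x / total

# Hypothetical simulated species amount and reference quantities.
x = 1.5       # species amount above its baseline
x0 = 1.0      # baseline amount
total = 3.5   # total pool containing x

print(percent_of_baseline(x, x0))  # 150.0, i.e. above 100%
print(percent_of_total(x, total))  # within [0, 100]
```

A correction along these lines (redefining the observable so it is bounded) is the kind of change that could shift the parameter estimates as described above.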
Thanks for letting us know.

The question is obviously difficult. In my opinion, the best strategy would be to make the authors aware of the problem and hope that they respond and update the model; we could then include the updated model.
They don't publish their code (they use the same published model from another paper but have their own mapping from the model to their measurements), so even if we notify them, they would have nothing to update. I suspect they are already aware that what they used is theoretically wrong and consider it only an approximation.
Closed in #127.
https://journals.plos.org/ploscompbiol/article?rev=1&id=10.1371/journal.pcbi.1006944
Complexity estimate: medium