Closed: JanHasenauer closed this issue 6 years ago.
That's quite worrying. We're looking into it.
I second that. I also had the feeling, though no concrete evidence, that something is not going as smoothly as it used to.
I think this particular issue is related to pre-equilibration. In the old version, a simulation whose pre-equilibration had failed because of an excessive number of attempts to lengthen the equilibration time would erroneously still be used. This was a pretty serious bug that was fixed in commit efa52c8a2c8e94b68fdaf51bcbbff01b7375d744.
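For reference, a minimal sketch of the corrected control flow as described above; this is not the code from that commit, and preEquilibrate, maxAttempts, and the factor 10 are purely illustrative:

% Sketch only (NOT the actual D2D code from efa52c8a...): if equilibration has not
% been reached after the maximum number of attempts to lengthen the equilibration
% time, raise an error instead of silently using the non-equilibrated simulation.
maxAttempts  = 10;       % illustrative value, not a D2D setting
tEq          = 1e3;      % illustrative initial equilibration time
equilibrated = false;
for attempt = 1:maxAttempts
    [xSS, dxdt] = preEquilibrate(tEq);        % hypothetical helper that simulates up to time tEq
    if all(abs(dxdt) < ar.config.eq_tol)      % absolute equilibration criterion from this thread
        equilibrated = true;
        break;
    end
    tEq = 10 * tEq;                           % lengthen the equilibration time and retry
end
if ~equilibrated
    error('Pre-equilibration did not converge after %d attempts.', maxAttempts);
end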
In the new version, most of the LHS samples already fail on the initial simulation because cAMP does not equilibrate. If I look at the initial samples when running the old version, they are not appropriately equilibrated, yet no error is thrown.
>> ar.model.ss_condition.dxdt<ar.config.eq_tol
ans =
1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
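As a side note, a quick way to list the offending states by name; the field names follow the snippet above and typical D2D conventions, so treat ar.model.x (state names) as an assumption rather than a quote of the source:

notEq = abs(ar.model.ss_condition.dxdt) >= ar.config.eq_tol;  % states failing the absolute criterion
disp(ar.model.x(notEq))                                       % expected to list cAMP in this case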
Ah, okay. Thanks for checking!
Would it be possible to implement a relative and an absolute tolerance? I think the issue might be that the cAMP concentration can become rather large.
Yeah, there is an (optional) one in the latest version: ar.config.eq_rtol. :) Acceptance is decided by an OR rule: when either the absolute or the relative tolerance passes, the state is considered equilibrated.
And happy to help!
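To illustrate the OR rule described above (a sketch of the rule, not the D2D source; using the final simulated state values as the reference scale, via an assumed xFineSimu field, is my own guess):

dxdt  = ar.model.ss_condition.dxdt;                  % derivatives at the end of pre-equilibration
xEnd  = ar.model.ss_condition.xFineSimu(end, :);     % assumed field holding the final state values
absOk = abs(dxdt) < ar.config.eq_tol;                % absolute tolerance criterion
relOk = abs(dxdt) < ar.config.eq_rtol .* abs(xEnd);  % relative criterion, scaled by state magnitude
equilibrated = all(absOk | relOk);                   % OR rule: either criterion suffices per state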
I'm assuming this one is resolved.
I noticed a substantial drop in the convergence rate for the Isensee_JBC2018 model. The original fitting was run with an old D2D version (c_version_code: 'code_160823'), and the optimisation completed for almost 100% of the runs. With the current D2D version, only 20% of the runs finish successfully, and convergence accordingly became worse.
As the difference between the versions is huge, I have not figured out the reason yet. However, given the substantial difference, it might be worth having a look.