changed `gen_random_stages`, as the previous version caused increasingly long randomly determined trial durations as the number of events grew (every gamma had a random duration between 0 and mean_d, so the more stages were estimated, the higher the sum of these durations). The new version instead draws random moments for the events between 0 and mean_d and then calculates the corresponding gamma parameters. It takes the minimum stage duration into account and, in addition, makes sure that every stage is at least 2 samples long.
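The scheme described above could be sketched roughly as follows. This is a minimal sketch, not the actual implementation: the function name `gen_random_stages_sketch`, its signature, and the rejection-sampling loop for the minimum duration are my assumptions.

```python
import numpy as np

def gen_random_stages_sketch(n_events, mean_d, min_stage=2, rng=None):
    """Hypothetical sketch: draw random event moments in [0, mean_d],
    derive stage durations from the gaps, and redraw until every stage
    is at least `min_stage` samples long."""
    rng = np.random.default_rng() if rng is None else rng
    while True:
        # random, sorted event moments within the mean trial duration
        moments = np.sort(rng.uniform(0, mean_d, n_events))
        # stage durations are the gaps between consecutive boundaries
        bounds = np.concatenate(([0.0], moments, [mean_d]))
        durations = np.diff(bounds)
        if np.all(durations >= min_stage):
            return durations
```

Unlike the old scheme, the durations here always sum to mean_d no matter how many events are estimated; the gamma parameters would then be derived from these durations.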
finally, I attempted to fix our ongoing warning (#38), as backward estimation throws an error if I use random starting points. I think that is again due to #38, which is why I first updated `gen_random_stages`; that indeed reduced how often the warning occurred, but didn't eliminate it. Second, I tried to catch our division-by-zero issue, see the commented-out lines 404-408 in models.py (the `np.clip` in line 402 avoids -0 values, whatever those are). However, this solution still gives me errors (among others: `Convergence failed: one stage has been found to be null`), so I suspect we should set `lkh` and `eventprobs` in a different way. I can keep experimenting, but maybe @GWeindel knows directly what we should set those variables to.
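For reference, one way to guard this kind of division looks like the following. This is purely a hypothetical sketch, not the code at those lines in models.py: the name `safe_normalize`, the axis, and the `eps` floor are all assumptions.

```python
import numpy as np

def safe_normalize(eventprobs, eps=1e-300):
    # Hypothetical guard against division by zero: clip the normalizing
    # sums away from 0 before dividing, so the result stays finite even
    # when a stage's probabilities collapse to all zeros.
    totals = eventprobs.sum(axis=0)
    return eventprobs / np.clip(totals, eps, None)
```

The drawback, and presumably why the real fix needs more thought, is that an all-zero column silently normalizes to zeros instead of surfacing the degenerate stage.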
Clip might be marginally faster, but the main difference is that setting values to 0 produced some -0's, while clipping only gave +0's. Not sure if it matters.
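The -0 difference can be reproduced in isolation (a small illustration, not code from models.py):

```python
import numpy as np

x = np.array([-5.0, 0.0, 3.0])
masked = x * (x > 0)           # multiplying a negative by 0 yields -0.0
clipped = np.clip(x, 0, None)  # clipping to a lower bound of 0 yields +0.0

np.signbit(masked)   # → array([ True, False, False])
np.signbit(clipped)  # → array([False, False, False])
```

Since -0.0 == 0.0 compares equal, this rarely matters, but dividing by -0.0 gives -inf rather than +inf, which could flip signs downstream.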
made some updates to tutorial 3
I do think the current version is safe to merge.