In PR #160 we set both the seed in R and the seed passed to brms, so that repeat runs of the integration tests produce the same answer.
One consequence: if the code for the model changes, the random draws change with it, and a test can then fail purely by chance (for example, with a p-value threshold of 0.01 this will happen in about 1% of cases). Because the seed is fixed, such a failure will reproduce on every repeat run.
I don't know if there is anything that can be done about this. One of these integration tests failing is still a good signal that something is up, but if you have checked carefully and found nothing, it could just be a random failure.
We could test this by varying the seed and recording the distribution of test failures.
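The idea can be sketched with a toy simulation (Python here purely for illustration; the function name and setup are hypothetical stand-ins, since the real check would rerun the brms-based integration test once per seed and tally failures):

```python
import random

def integration_test(seed, alpha=0.01):
    # Hypothetical stand-in for the real integration test: under the
    # null (correct code), the p-value is uniform on [0, 1], so the
    # test fails with probability alpha regardless of the seed chosen.
    rng = random.Random(seed)
    p_value = rng.random()
    return p_value >= alpha  # True means the test passes

# Run the "test" across many seeds and record the failure rate.
n_seeds = 10_000
failures = sum(not integration_test(seed) for seed in range(n_seeds))
print(f"failure rate across {n_seeds} seeds: {failures / n_seeds:.4f}")
```

If the observed failure rate across seeds sits near the test's significance level, that supports the interpretation that a single seeded failure can be chance rather than a genuine regression.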
I've labelled this as "question" but really it's more like "warning" or "something to bear in mind".