WardBrian closed this 1 month ago
Hey @serban-nicusor-toptal - could you tell if anything is different from these runs and https://jenkins.flatironinstitute.org/blue/organizations/jenkins/Stan%2FStanc3/detail/PR-1396/1/pipeline/311/?
As far as I can tell, nothing in the code has changed that would cause the end-to-end tests at O1 to fail.
I think the difference is that the successful one ran on a jenkins2 agent and the failed one on a jenkins agent.
This can be verified by changing this label here from linux to linux && mesa (for jenkins2): https://github.com/stan-dev/stanc3/blob/master/Jenkinsfile#L508
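For reference, the change would look roughly like this (a minimal sketch, not the actual Jenkinsfile; the stage name and steps are hypothetical, only the label expression is the point):

```groovy
pipeline {
    agent none
    stages {
        // Hypothetical stage; the real block lives at Jenkinsfile#L508.
        stage('end-to-end O1') {
            agent {
                // 'linux' matches agents on both machines; adding '&& mesa'
                // pins this stage to the jenkins2 agents.
                label 'linux && mesa'
            }
            steps {
                echo 'run the O1 end-to-end tests here'
            }
        }
    }
}
```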
@serban-nicusor-toptal it failed even on jenkins2; any other leads?
I ran it once on jenkins to be sure.
Well, we know that the dependencies don't change because it runs inside the docker image stanorg/ci:gpu.
And it does pull the correct commit https://github.com/stan-dev/stanc3/commit/b85dab87595023614ea2bbc822663054311b7fcf then it stashes it right away.
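To spell that out, the relevant setup is roughly the following (a sketch under assumptions: the stage name, build command, and stash name are made up; only the docker image and the checkout-then-stash pattern come from the actual pipeline):

```groovy
pipeline {
    agent none
    stages {
        // Hypothetical stage illustrating why deps cannot drift between runs.
        stage('build stanc3') {
            agent {
                docker {
                    image 'stanorg/ci:gpu' // fixed image, so dependencies are pinned
                    label 'linux'
                }
            }
            steps {
                checkout scm               // resolves to the exact PR commit (b85dab8)
                sh 'dune build @install'   // assumed build command for stanc3
                stash name: 'stanc3-build' // stashed right away, as noted above
            }
        }
    }
}
```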
I was looking at this diff, but it isn't helpful: https://github.com/stan-dev/stanc3/compare/771f3722c1072ed3aaace22fbc6eadfd7c98c1ea...b85dab87595023614ea2bbc822663054311b7fcf
Could this be the result of a hardware change? Do we know what could make it fail from a machine/infra perspective?
I agree the diff is not very useful, but I also diffed the code generated by current master against the code generated by the 2.34.0 binary, and there are no differences in how the optimizations generate code.
Looking at the docker image, I see that gpu is a bit older than gpu-cpp17 (https://hub.docker.com/layers/stanorg/ci/gpu-cpp17/images/sha256-f5f87c58cf7809f76c851e94b0e7919b95236f327fe402e75ffcf175a0f9f6e9?context=explore vs https://hub.docker.com/layers/stanorg/ci/gpu/images/sha256-1760e2bea62fc914f0d4ee667e8be0544a27d0d2264104b2b60b3b030c256f91?context=explore). Though it looks like gpu is the one used in the successful pipeline too.
Any ideas on what else I can look into to try and track this down?
I'm guessing it might have been an underlying hardware change. Unless something changed in Math and we missed it? @andrjohns has there been any movement in the exp or fma functions, or in the gamma, weibull, bernoulli, or normal distributions? I feel like I would have seen that.
Hey @dylex - has the jenkins hardware changed appreciably since ~Jan 31? We're seeing some different numerical behavior compared to then, even when we try older versions of our code.
I believe the last hardware change to jenkins was in November last year, when we upgraded the jenkins control node.
Seems like this resolved itself?
That is for sure weird, but I'm glad it's working now.
#1422 is failing for a reason I suspect is unrelated to the changes there, so I wanted to run CI independently.