robwilkerson closed this issue 8 years ago
Well, all of your recent failures were the first to run on a particular build node, but that is quite likely just a coincidence (and was not the case on Friday). It does mean, however, that the failures have nothing to do with anything that might have been left running from a previous build. And your shippable.yml doesn't show anything running outside the container that could carry over to the next build.
It looks like you are getting two errors: one setting up your database, which looks similar to this: http://stackoverflow.com/questions/16594672/1452-cannot-add-or-update-a-child-row-a-foreign-key-constraint-fails, and one of your tests timing out. Hopefully the Stack Overflow link will help with the database. For the test timeout, does that test typically take most of the time allowed? Or is it possible that it's trying to contact something that is still starting up when the test starts?
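If the timeout is caused by a service (e.g. the database) that is still starting when the test begins, one common workaround is to poll the service's port before running the suite. This is a minimal sketch, not Shippable-specific; the host/port values are placeholders for whatever your tests actually connect to:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Poll until a TCP port accepts connections; raise if it never does.

    Run this before the test suite so tests don't race a service that
    is still booting inside the build container.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a real TCP connection; success means the service
            # is at least accepting connections.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Not up yet; back off briefly and retry.
            time.sleep(0.5)
    raise TimeoutError("%s:%d not ready after %.0fs" % (host, port, timeout))
```

You could invoke something like `wait_for_port("127.0.0.1", 3306)` as an early step in the `ci` section of shippable.yml so the first test never runs against a half-started MySQL.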
Would you mind taking a look at a build that ran last night? This is one of those where I don't see any reason for the failure, but once I hit rebuild, everything was fine. When I look at the output of the failed build, I don't see anything that indicates a problem. Is it there and I'm not recognizing it for what it is? It'd be great if I'm just not reading the output correctly.
Thanks for your time.
The logs just show a test failure. I couldn't find anything about why it failed in the logs. It would probably be worth checking anything asynchronous in that test (or the set-up of the tests) to see if it could be two operations that ended in an unexpected order.
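To make that kind of failure reproducible, one approach is to join every asynchronous operation explicitly before asserting, so the test can't depend on which one happens to finish first. A hedged sketch with hypothetical stand-ins (`fetch_user` and `fetch_orders` are not from the real test suite):

```python
import concurrent.futures

def fetch_user():
    # Stand-in for one async operation the test kicks off.
    return {"id": 1}

def fetch_orders():
    # Second async operation; its completion order vs. fetch_user
    # is not guaranteed on a loaded CI node.
    return [{"order": 99}]

with concurrent.futures.ThreadPoolExecutor() as pool:
    user_future = pool.submit(fetch_user)
    orders_future = pool.submit(fetch_orders)
    # Wait for both results (with a timeout) before asserting, so the
    # test passes regardless of which operation finishes first.
    user = user_future.result(timeout=10)
    orders = orders_future.result(timeout=10)

assert user["id"] == 1
assert orders[0]["order"] == 99
```

The same idea applies whatever async mechanism the tests actually use: block on the result with an explicit timeout rather than assuming an ordering that only holds on a fast local machine.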
Hmmm. Okay. Tests run fine locally and on my dev server so I guess I just can't figure out what it is about the CI environment that's so dramatically different. Thanks for having a look.
Description of your issue:
I've been noticing that I get a lot of build failures recently. Certainly a few of those are legitimate, but often simply hitting the Rebuild button will result in the build succeeding. Sometimes I have to rebuild twice, but the point is that no changes have been made to the app, the build or the tests.
One example is https://app.shippable.com/runs/574254b3d388860c00d68673. In this case, I had to rebuild twice, but again, no changes were made that should've impacted the build status. This leads me to think it might be in the environment that gets spun up.
Could this be due to some nuanced issues with my shippable.yml file? Is there anything I can do to improve the stability/reliability of the spin-up process? When builds report a failure, I drop everything to fix them, but the number of false alarms is becoming a bit frustrating. I have to believe the problem is likely on my end, but I have no idea how to improve the situation.