Open sckott opened 10 years ago
I would add OpenCPU to the list of potential options that can be used there. You can think of a dashboarding app that uses rOpenSci packages to display status. @jeroenooms might have more suggestions on this.
Good idea
I'd recommend sticking with travis - the community is building up considerable experience with how to use R and travis together, and there's obviously a lot going on in other PL communities too.
Travis has an API that allows you to restart builds (https://api.travis-ci.org/docs/#Builds), so if you're concerned about changes in the underlying API, I'd recommend setting up a cron job that restarts the last build once a week or so.
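A hedged sketch of that setup, assuming the legacy Travis API endpoint shape from the docs linked above; the token, build id, and script path are placeholders, not real values:

```shell
#!/bin/sh
# Weekly restart of a Travis build via the (legacy) Travis API.
# TRAVIS_TOKEN and the build id 12345 are placeholders; verify the
# endpoint against the current API docs before relying on this.

# Build the restart URL for a given build id (pure string work, no network).
restart_url() {
  echo "https://api.travis-ci.org/builds/$1/restart"
}

# With a real token, a POST to that URL triggers the restart:
#   curl -s -X POST -H "Authorization: token $TRAVIS_TOKEN" "$(restart_url 12345)"
#
# Run weekly from cron, e.g. Mondays at 06:00:
#   0 6 * * 1  /path/to/restart-build.sh
restart_url 12345
```

The point of restarting (rather than waiting for a push) is that the tests re-run against the live web API even when the package code hasn't changed.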
Good idea @hadley. I didn't know about the restarts. I'll have a look at that. So you don't do EC2 testing anymore?
I do all testing on travis. But @wch has a script that runs on EC2 for testing all cran dependencies of a package prior to release.
It's not exactly a script, but rather a set of commands I've saved in a text file. It requires a fair bit of manual intervention at this point, but it could be streamlined.
I agree that the big community and accumulated experience around Travis CI bring great value. The two aren't mutually exclusive, though: you can use several CI systems, as they may detect different problems. So why not experiment with both? OpenCPU offers some benefits of its own.
I like the Travis solution, but I agree with @jeroenooms that it doesn't have to be an either-or choice. In any case, it doesn't hurt to at least develop a basic proof of concept.
@wch Right, I think I remember asking for that code before. Sticking with Travis might be best anyway.
Thanks @jeroenooms for the feedback. There are many good options to consider.
Somewhat related: I now have staticdocs set up to use itself to build a package website that is updated every time I push to GitHub: http://staticdocs.had.co.nz/dev/. It's also easy to configure Travis to push only on tagged releases for a non-dev site.
On a related note, I have another package under development that will automate package website production and deployment, with support for an online playground for examples (like here, using Slidify and OpenCPU).
Maybe what we need is a system that monitors the APIs continually. We sort of have that with the dashboard, but it might be worth having a system that simply pings the APIs and records their status in a log, something like https://status.github.com/. We could even hook it up to a tweetbot that sends out real-time messages when an API goes down.
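A minimal sketch of such a pinger; the endpoint URL and log path are illustrative placeholders, not a real rOpenSci service:

```shell
#!/bin/sh
# Minimal API status pinger: fetch each endpoint, classify the HTTP
# status code, and append a timestamped line to a log, in the spirit
# of the GitHub status page.

# Map an HTTP status code to an up/down label (2xx/3xx count as up).
status_label() {
  case "$1" in
    2??|3??) echo up ;;
    *)       echo down ;;
  esac
}

# Check one endpoint and append "<timestamp> <url> <code> <label>" to the log.
check_api() {
  url="$1"; log="$2"
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url" || echo 000)
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) $url $code $(status_label "$code")" >> "$log"
}

# Example, run from cron at some interval (placeholder URL and path):
#   check_api "https://api.example.org/ping" /var/log/api-status.log
```

A tweetbot or email alert could then tail the log and fire whenever a line flips from up to down.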
@ramnathv That looks really promising. But I also :+1: the idea that we don't necessarily have to be in an either-or situation. We can definitely keep using Travis, especially as it continues to improve. I'm also keen on the idea of a cron job that runs weekly tests on an EC2 box and pings us when things aren't working.
Maybe what we need is a system that monitors the APIs continually.
That seems way over the top. @sckott has a draft of an API status dashboard that pings periodically (and it looks a lot like the GitHub status page). But you're raising a different issue: an API being up is different from changes to the API that break function calls.
Given that our packages depend not only on our code working, but also on external web APIs beyond our control, I think we need automated continuous testing. Travis CI is great, but it only tests on new commits, branches, tags, pull requests, and so on.
As an example, I noticed today that one of our packages probably hasn't been working for a while, but I didn't catch it until I tried to make sure our website tutorial was up to date. Something had changed on the API provider's side, breaking our code.
Is there a way to use Travis CI for automated testing once a week or more often, just to make sure everything is running smoothly?
I think @hadley has a system to automate testing/installs of his GitHub repos on EC2. Perhaps we can learn from that.
The other solution, I suppose, is a dashboard that dogfoods our packages by using them from R to ping the APIs at some interval; then we would see when there are problems. See https://github.com/ropensci/hackathon/issues/3