Closed by @paidforby 5 years ago
Let's
Here's what it looks like we've been doing:

- http://builds.sudomesh.org/builds/sudowrt-firmware/{artifact}
- http://builds.sudomesh.org/builds/sudowrt/{codeName}/{version}/{artifact}
- pushed via rsync, e.g. https://github.com/sudomesh/sudowrt-firmware/blob/master/send_to_webserver

Anyone have thoughts on this? I have a couple:
- https://builds.sudomesh.org/{project}/{version}/{artifact}, e.g. https://builds.sudomesh.org/sudowrt-firmware/0.3.0/{artifact}
- release-candidate builds at https://builds.sudomesh.org/sudowrt-firmware/0.3.0-rc.1/{artifact}, so everyone could help test @paidforby's hot new stuff.

Great breakdown @gobengo! As far as I know, there has never been a consistent deployment process, so I don't think anyone is attached to any particular one.
For (at least) the last year, the process has gone something like this:
I introduced the codenames on the builds server to help differentiate the new autoconf builds from the makenode builds. Previously the codename had been the same as OpenWrt's, 'chaos_calmer', which seemed even less helpful. I agree there's no need for codenames on the build server, though it doesn't seem too different from putting '0.3.x' or '0.3.0' as the directory name.
Love the idea of release candidate builds. I tried to do something similar with http://builds.sudomesh.org/dev-builds/, but it became confusing and unmaintained very quickly.
My ideal dev cycle would read something like this:
builds.sudowrt.org/builds/sudowrt/0.3.0/latest
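A stable `latest` path like that could be implemented server-side with a symlink that gets repointed after each successful build. A minimal sketch, with hypothetical paths standing in for the real webroot:

```shell
#!/bin/sh
# Hypothetical sketch: keep a stable 'latest' URL by repointing a symlink
# at the newest versioned build directory. Paths here are stand-ins, not
# the actual sudomesh server layout.
set -eu

WEBROOT="/tmp/builds-demo/builds/sudowrt"   # stand-in for the real webroot
NEW_VERSION="0.3.0"                         # assumed version directory name

mkdir -p "$WEBROOT/$NEW_VERSION"
# -s symbolic, -f overwrite, -n don't descend into an existing symlink-to-dir
ln -sfn "$NEW_VERSION" "$WEBROOT/latest"
```

Because the link is relative, the whole tree can be mirrored or moved without breaking `latest`.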
I don't think space is a concern on the sudomesh server, now that we are only building N600 firmware, though someone should check that.
I'm guessing we can push the build similar to how we are pushing to Docker Hub, though we may need to create a dummy user/password on the sudomesh server for Travis.
Hope this info helps! @gobengo, I encourage you to go ahead and restructure the deployment process however you see fit; let me know if you need any access that you don't already have. Thanks!
After ten or so commits, I think I've finally got the firmware binaries deploying to https://builds.sudomesh.org/sudowrt-firmware/latest. I based the deployment off of this guide and the old send_to_webserver script. It appears to work well, but we should keep an eye on it as it starts pushing nightly builds.
And as suggested by @gobengo, I've reworked the directory structure for the firmware; check it out at http://builds.sudomesh.org/sudowrt-firmware/. I left the old directory in place until we update the links in the wiki.
Another thought: is there any point in committing the Docker image back to Docker Hub? Now that we can extract the few files we want, maybe it makes sense to update the Docker image only once the build time becomes too long? Not recommitting to Docker Hub might also address the bloated images described in #146.
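Extracting files from a built image without `docker commit` (and so without pushing a new layer to Docker Hub) can be done with `docker create` + `docker cp`. A sketch, where the image name and in-image path are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: pull build artifacts out of a Docker image without
# committing or pushing a new image. Image name and paths are assumptions.
set -eu

IMAGE="sudomesh/sudowrt-firmware-builder"   # assumed image name
SRC="/opt/sudowrt-firmware/built_firmware"  # assumed artifact path in image
OUT="/tmp/extracted_firmware"               # local destination

# A created-but-never-started container still exposes the image filesystem,
# so we can copy out of it and then discard it.
CID=$(docker create "$IMAGE")
docker cp "$CID:$SRC" "$OUT"
docker rm "$CID" >/dev/null
```

Since nothing is committed, the image on Docker Hub only changes when someone deliberately rebuilds and pushes it, which fits the "update only when build time grows" idea.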
I mentioned the wrong issue in a commit; see ae74d966d0cf81a29d816eb04bb7ac17a853507e for the Docker Hub-related commit.
This appears to be working consistently. It also helped us detect other, unrelated issues with the latest build.
> I left the old directory in place until we update links in the wiki.
I updated the links in the wiki and moved the old build directories one level up. If nobody complains that they're missing in the next month, I'll delete 'em.
After the developments in #137 we have the full build finishing in Travis. This already runs every night and contains the binaries for the N600, so there should be a way to push these builds somewhere useful. I'm guessing we could adapt the already existing send_to_webserver script. Not sure if we can or should use Zenodo, or just https://builds.sudomesh.org?
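One detail worth settling when adapting send_to_webserver for Travis: push only from the nightly cron runs, not from every push or pull-request build. Travis exposes the trigger in the `TRAVIS_EVENT_TYPE` environment variable; everything else below (destination, variable names) is an assumption:

```shell
#!/bin/sh
# Hypothetical gate for a Travis deploy step: push artifacts only when the
# build was triggered by the nightly cron, not by pushes or pull requests.
set -eu

# Travis CI sets TRAVIS_EVENT_TYPE to push, pull_request, cron, or api.
# Defaulting to 'cron' here just lets the sketch run outside Travis.
EVENT="${TRAVIS_EVENT_TYPE:-cron}"

if [ "$EVENT" = "cron" ]; then
  echo "deploy" > /tmp/deploy-decision
  # The real step would be something like:
  # rsync -avz built_firmware/ "$DEPLOY_USER@builds.sudomesh.org:/var/www/builds/"
else
  echo "skip" > /tmp/deploy-decision
fi
```

Credentials for the real push would live in Travis's encrypted environment variables (the "dummy user" mentioned earlier), never in the repository.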