wulfy23 / rpi4

OpenWrt full for rpi4

Automate builds & releases using GitHub Actions #8

Open damianperera opened 3 years ago

damianperera commented 3 years ago

Hey,

It looks like you are manually building the images and pushing them to GitHub. Since your scripts are version-controlled, I think you can automate this, similar to what I have done in my repository, so that you can offload the compute-intensive process and focus only on the code/scripts.

I'm happy to help if you can share your build process 🙂
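For illustration, a minimal sketch of such a workflow, assuming the whole build is driven by a single version-controlled script (`build.sh` and the artifact name are placeholders, not your actual scripts):

```yaml
# .github/workflows/build.yml - minimal sketch, not the actual pipeline.
# Assumes the build is driven by one script; build.sh is a placeholder.
name: build

on:
  push:
    tags:
      - 'v*'          # build whenever a version tag is pushed

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: ./build.sh
      - name: Upload the image for later jobs/inspection
        uses: actions/upload-artifact@v2
        with:
          name: rpi4-image        # placeholder artifact name
          path: '*.img.gz'
```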

wulfy23 commented 3 years ago
  1. holy $#&* you're injecting config across repos... damn that's cool...
  2. offer appreciated... lemme consider all the ramifications / requirements / workflows... the key one being... my scripts are almost always unclean / unconventional for the purposes of porting... chances are in the immediate future not too much can be done holistically... but longer term... i'll definitely keep this in mind when i'm making edits or isolating core build functionality...

thank you

damianperera commented 3 years ago

@wulfy23 yes. GitHub runners are actually just virtual private servers, so anything you do on your local machine can be done on them. I wrote two GitHub actions to mount an image and unmount it afterwards, which I use in this build script.

We can write more actions to abstract common parts of the build process, build the images and release them instead of maintaining the .img.gz files in the repo. Let me know once you standardise the build process, so that I can help you with it 🙂
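As a hedged example, publishing the built image as a GitHub Release instead of committing it could be a single extra step inside the build job (this uses the community softprops/action-gh-release action; the file glob is a placeholder):

```yaml
# Sketch only - publishes build outputs as a GitHub Release
# instead of committing .img.gz files to the repo.
- name: Publish release
  uses: softprops/action-gh-release@v1
  if: startsWith(github.ref, 'refs/tags/')   # only on tag builds
  with:
    files: '*.img.gz'     # placeholder glob for the built images
```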

wulfy23 commented 3 years ago
  1. I spent two hours trying to find out how to programmatically upload / create a 'release' early in the build process... most, if not all, documents I found were 'use the browser to upload...'; your sample demonstrates otherwise and is most useful to me... If you don't mind... what, if any, are the (programmatic?) benefits of 'releases'? (i'm aware of user notification, space limitation and download format gains)
  2. Now i've thought about this a little... the primary issue i'm seeing is that the builds are created (and recreated) with 1.5GB+ imagebuilder dir-structures... again... from the searching i've done... going over 1GB on github is a no-no... perhaps 'runner-instances' have broader limitations...? assuming this is the case... when I want to re-use the runner from the last build... will it still be there? similar issue with the opkg-mirror.... wget-mirroring 907MB worth of packages that are available for consistent reuse per build is, afaik, not something that 'runner-virtual-instances' like... ?

regenerating a build takes 2 minutes with 1 command locally... nothing is compiled... so there are few resource gains to be had after the primary 1.5GB imagebuilder is created... from my limited perspective... local building is a better approach for this workflow...

one thing that has been on my mind a few times is leasing a VPS / web space and moving away from github due to space constraints... it's not my intention to overburden a useful resource with things it was not intended for... although form follows function... so to date... it works very, very well for me... (apart from the 100MB file-size constraint)

more space will facilitate: possibly both master and release base builds... possibly full opkg repo mirrors for when official repos are unavailable... retention of previous builds... the last point being very useful for snapshot users and for people who want to revert to, say, two builds ago... it's likely github 'releases' would allow this... but the corresponding opkg repos don't work as releases unless logic is introduced to download them locally onto the router... which would be more reliable... but pose a much heavier download / firstboot load on the router...

server-side code will also allow for better security, validation and streamlining of the 'auto-update' / 'update-check' feature set...

some of these points are likely misinterpretations on my part... but from where I sit / what i've been able to find out... it's the way I currently understand things...

damianperera commented 3 years ago

what, if any, are the (programmatic?) benefits of 'releases'?

  1. You won't be maintaining the build images in Git (which is good CI/CD practice, since Git is meant for the code that builds your production-ready images, not the images themselves).
  2. Consumers of your build can simply go to your Releases page and select a past release if they want to (i.e. your history is preserved).
  3. You can move your change-log to the specific releases, so you don't have to maintain a separate file with all the history. Anyone can scroll through the releases page and see all the different changes that have been included in the releases.
  4. You can specify release candidates as pre-releases so users can easily distinguish between rc and snapshot builds (which are the actual releases).
  5. Most important of all is that you can maintain a clean repository - check out this repo.
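Since you asked about doing this programmatically: another option is a plain `run` step using the GitHub CLI, which is preinstalled on hosted runners (the tag scheme and notes below are placeholders); the `--prerelease` flag also covers point 4 above:

```yaml
# Hypothetical alternative using the preinstalled GitHub CLI.
- name: Create a pre-release via gh
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}    # token provided by Actions
  run: |
    gh release create "build-${{ github.run_number }}" *.img.gz \
      --prerelease \
      --notes "automated snapshot build"     # placeholder notes
```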

perhaps 'runner-instances' have broader limitations...?

Following are the specs of a GitHub-hosted runner using Linux (ref: About GitHub-hosted runners):

  * 2-core CPU
  * 7 GB of RAM
  * 14 GB of SSD disk space

when I want to re-use the runner from the last build... will it still be there

The runner will be deleted once your build is complete, however you can reuse resources (e.g. .zip files) across different jobs using artifacts (ref: Storing workflow data as artifacts). If you want to reuse anything between separate builds, you can do something like uploading the resources to an S3 bucket or elsewhere and downloading them once to the new runner.
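A sketch of what passing the imagebuilder between jobs could look like (the URL and names are placeholders; note artifacts only persist within a single run, so cross-run reuse would need the external storage mentioned above):

```yaml
# Sketch - share a large download between jobs in one workflow run.
name: build-with-artifact
on: workflow_dispatch    # manual trigger, for illustration

jobs:
  prepare:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch the imagebuilder
        run: wget -q https://example.org/imagebuilder.tar.xz   # placeholder URL
      - uses: actions/upload-artifact@v2
        with:
          name: imagebuilder
          path: imagebuilder.tar.xz

  build:
    needs: prepare
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: imagebuilder     # lands in the working directory
```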

similar issue with the opkg-mirror.... wget-mirroring 907MB worth of packages that are available for consistent reuse per build

We can use caching for this, which will cache your packages across separate builds (ref: Caching dependencies to speed up workflows).
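For instance, a hedged sketch of a cache step dropped into the build job (the path, key scheme and `packages.list` manifest are hypothetical):

```yaml
# Sketch - restore/save the ~907MB opkg mirror between builds.
- uses: actions/cache@v2
  with:
    path: opkg-mirror                                    # placeholder directory
    key: opkg-mirror-${{ hashFiles('packages.list') }}   # hypothetical manifest
    restore-keys: |
      opkg-mirror-
```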

local building is a better approach for this workflow

This might be true at the moment, but you could get wider community support (i.e. more people willing to contribute) if the build process is transparent and automated.

but the corresponding opkg repos don't work as releases unless logic is introduced to download them locally onto the router... which would be more reliable... but pose a much heavier download / firstboot load on the router

Do you mean that for your build process to work you need to update your scripts, flash the image to an SD card, boot up an RPi (for opkg to work) and then build the image off of that? If that's the case we might be able to use QEMU to emulate the RPi hardware and boot the image on a GitHub runner itself. If hardware virtualization is not enabled on the GitHub runner we can consider using Travis CI or Circle CI to build the images (not sure if they support hardware virtualization though).
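If it came to that, a very rough sketch of the emulation steps (machine type, memory and image name are placeholders; a real Pi image would typically also need its kernel/dtb extracted from the boot partition and passed explicitly, since `-M virt` is not real RPi hardware):

```yaml
# Very rough sketch - emulate an ARM board on the runner.
- name: Install QEMU
  run: sudo apt-get update && sudo apt-get install -y qemu-system-arm
- name: Boot the image under emulation
  run: |
    # -M virt is a generic ARM machine; booting a Pi image this way
    # usually needs -kernel/-dtb extracted from its boot partition.
    qemu-system-aarch64 -M virt -m 1024 -nographic \
      -drive file=openwrt.img,format=raw,if=virtio    # placeholder image
```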

wulfy23 commented 3 years ago

firstly... thank you very much for the input and assistance...

on the last point... no, users would need to download the full opkg combined-feed 'release' blob for the matching master version... during firstboot... this is also resolvable by bundling the repo 'in-image', although again... github file-size limits and user courtesy (reducing the initial download burden and time) go against this format...

"GitHub will remove any cache entries that have not been accessed in over 7 days. There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited to 5 GB"

I'm currently able to (re-)generate and/or re-instate any build... with the above limitations... i'd need to ping legacy caches every 7 days etc. etc. ... basically... severely unsuitable to the way imagebuilder 'builds' that need to be reproducible and supported from master work... However... using a stable release (21.02 only) overcomes most of this, including the 'strict need' for opkg repos
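For what it's worth, the 'pinging' itself could be automated with a scheduled workflow, since restoring a cache counts as access and should reset the 7-day clock (the key scheme and path below are placeholders):

```yaml
# Hypothetical keep-alive - restores the cache on a schedule so it
# stays within GitHub's 7-day "accessed" eviction window.
name: cache-keepalive

on:
  schedule:
    - cron: '0 3 */6 * *'   # roughly every six days

jobs:
  ping:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/cache@v2
        with:
          path: imagebuilder                    # placeholder path
          key: imagebuilder-${{ github.sha }}   # placeholder key scheme
          restore-keys: |
            imagebuilder-
```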

From your points above... point 3 is VERY useful... the rest all seem to be based on 'code' repositories... this repo is pretty much a 'giant persistent cache'... so i'm unable to relate clear gains / suitability ( i'm not rejecting your points... it is more an observation of the tools and their (understandable) bias toward 'source-code-only' handling )...

I've read everything you have written... listened to it... and taken it in... ( though it will still take many days to fully absorb )... so I apologize if anything I say seems pessimistic / afraid of change... or whatever... I can assure you this is not the case... and I will continue to assimilate your suggestions and learn more from the info you have generously provided...

EDIT: I initially skipped over this comment: "This might be true at the moment, but you could get wider community support (i.e. more people willing to contribute) if the build process is transparent and automated."

this is VERY valid... my "files" structure has been coded to be portable across devices... ( you will see an experimental "ventoy_x64" community build in the devel folder )... community uptake has not quite been what i'd hoped for to date... this discussion, about 2 more like it, and 5 one-line forum posts is all i've got so far... in addition to the comment above about switching to the stable release... if any decent coders jump on board the "files" project we can likely switch to master ( buildroot ) and open the whole process up; implementing most of what you've suggested becomes EXTREMELY VIABLE...

FWIW: The build has approx 25-30 (regular) users, a number that has doubled over the last 6 months... ( it's not my goal to make it famous... it's just worth seeing all the sweat go toward something )... and i'm reaching the point where the code is getting stale / needs rewriting in several places... and i'm unsure of just how long or to what extent I can keep pushing on... the likely outcome (without qualified help) is 2-3 months max and the build will need to be retired...

damianperera commented 3 years ago

@wulfy23 no worries, and thanks for taking the time 😄 hit me up if you decide you want to go with this approach, and we might be able to schedule a meeting to brainstorm a proper CI/CD pipeline for the project.

wulfy23 commented 3 years ago

again... can't thank you enough for the input... it's broadened my perspective and regardless of build workflows... your stuff is super cool... let me know if there is anything I can do to better support your workflow...

(i.e. I see you're matching rpi4snapshot... i'll often append an extra word, so i'd match _extra as well... my bad... it's definitely something I can work on)... seeing your code really helps with how to better format / structure a lot of the stuff i'm doing

Castle67 commented 3 years ago

@wulfy23 can you possibly incorporate the PPPoE server services in your next rpi4_snapshot release? Currently I'm using your firmware, but every time I flash an update and restore my config backup, the PPPoE server apps and configs fail to load. Everything else works perfectly after the config backup restoration except the PPPoE server, so I need to re-configure PPPoE after every snapshot update.