samtupy / nvgt

The Nonvisual Gaming Toolkit
https://nvgt.gg

Create workflow for nightly builds #36

Open Mudb0y opened 1 month ago

Mudb0y commented 1 month ago

This uses a very similar workflow to the release one, but it gets triggered on each push or new pull request to create nightly builds of the engine. All builds get built, packaged, and uploaded to GitHub Artifacts for use with a service such as nightly.link or for individual downloads.
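For anyone skimming, here is a minimal sketch of the kind of trigger and artifact upload described above; the job, artifact name, and output path are illustrative, not the exact contents of this PR's workflow file:

```yaml
# Illustrative sketch only, not the actual workflow file from this PR.
name: Nightly builds
on:
  push:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build and package the engine here ...
      - uses: actions/upload-artifact@v4
        with:
          name: nvgt-nightly   # hypothetical artifact name
          path: release/       # hypothetical output directory
```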

patricus3 commented 1 month ago

It's actually the only reason I build from source: to get the latest stuff faster. @samtupy, what do you think?

Mudb0y commented 1 month ago

Seems like this isn't quite as ready as I thought; I can't get the final package to upload successfully. If someone could take a look at this and fix it, that'd be great.

samtupy commented 1 month ago

Hi, first, thanks so much for all your work on this, and for the new bash tricks I learned by reading that file!

My only concern, as you might imagine, is GitHub artifacts. Are there any storage limits for these things, and how do I dynamically retrieve the links? I'm still looking at nightly.link to understand it. My biggest concern is that we may have some sort of 500 MB or 1 GB storage limit on artifacts. The build jobs generate about 60 MB of data per build, and the installers that use that data add, I think, over 100 MB more, so we're talking over 150 MB of GitHub artifacts nightly. Are we sure that this is wise? Perhaps it is; I have little experience with artifacts, which is why I'm asking.

I've tried several times to look up GitHub's artifact storage limits, and the answers are actually vague, ranging from a four-year-old issue where people question the storage limits, to people saying it's 1 GB and then maybe 2 GB for pro accounts or something. People then talk about needing yet another workflow to delete old artifacts, and a Stack Overflow question which was updated just two months ago, and which mentions a 0.5 GB quota, sadly just seems disconcerting and single-handedly reduces my trust in GitHub artifacts for this sort of thing. Do we have any actual information on this issue? I have over 800 GB available on the FTP server used by the release workflow. The retention-days argument of the artifact action may be worth looking into.

Furthermore, do artifacts allow for any sort of testing that doesn't involve executing the entire workflow over again? For example, if I'm only having issues with the final package job, I can simply set if: false in every other job and comment out the needs line in the one I want to test. However, I fear that while using artifacts it would be difficult for the final package job to access artifacts from previous failed runs. It seems like this would dramatically increase the difficulty and time required to test the CI after any minor change where we just need to make sure that the final package operations work successfully.

As such, unless someone can ease my concerns about GitHub artifacts, we may switch this to use the existing FTP solution. To be clear, I am certainly open to learning more about these things; I'm only concerned because I don't want to randomly run into issues a few days after we start using this because of some sort of storage limit. Again, though, thanks for the great work on this workflow! We'll continue this discussion about whether artifacts or our custom solution is best so that we can get it merged soon.
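For reference, the retention override I mentioned is just an extra input on the upload step; a minimal sketch, assuming actions/upload-artifact@v4, with placeholder name and path:

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: nvgt-nightly   # placeholder artifact name
    path: release/       # placeholder path
    retention-days: 7    # delete the artifact after 7 days instead of the repository default
```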

patricus3 commented 1 month ago

FOSS projects don't have such limits; they can use it as much as they want. I've read about this quite a lot in GitHub's documentation.

Lucas18503 commented 1 month ago

More specifically, I believe this is covered here:

GitHub Actions usage is free for standard GitHub-hosted runners in public repositories, and for self-hosted runners. For private repositories, each GitHub account receives a certain amount of free minutes and storage for use with GitHub-hosted runners, depending on the account's plan. Any usage beyond the included amounts is controlled by spending limits. ... The storage used by a repository is the total storage used by GitHub Actions artifacts and GitHub Packages.

patricus3 commented 1 month ago

And you can always auto-remove old stuff, right?

samtupy commented 1 month ago

Well, if all these limits only apply to private repositories, then that is indeed at least one problem solved.

Now for the other questions. Say I modify just the packaging step of the workflow; maybe I want to package the documentation in another format, include a changelog, anything. How do I do that without running the entire workflow instead of just the last job, which I think would not have access to the artifacts from previous workflow runs? For example, if I'm trying to update the final build package and it fails because I forgot an = character, I now must wait an extra 30 minutes just to test that one final packaging job again, where I could easily find another mistake, and then wait still another 30 minutes to see if I've fixed that one. GitHub's native workflow rerun feature is not an option, because to my knowledge you cannot rerun one failed job of a workflow given a new commit. Maybe one could create a secondary workflow with a manual run trigger that accepts a run-id input that can be passed to download-artifact?

Then, once these artifacts are uploaded, what do we do with them? I already know that by uploading the files to a custom server I can just move them somewhere and display them. How do we get these showing on nvgt.gg? Or are we planning to link people to the workflow run page for the artifacts? I know that once we do release tags we can somehow put artifacts on those, but I have not yet had time to learn how any of that works on git, so I'm a bit clueless on that.

Finally, the last comment by the author says that this workflow is currently broken and does not upload the final package anyway, but accidentally neglects to provide any error information to debug the issue. If someone wants to either spend 30 minutes per workflow run until they get this working, or wants to figure out how to rerun just the failed package job with previous artifacts, I'm certainly willing to continue considering this. I think, however, it would be better to spend my own time in the short term fixing things in NVGT that specifically require a level of experience with the NVGT source tree that nobody else has, at least in part because we can see from these comments that I'm clearly not the most experienced git user here, and thus I may not be the best candidate to fix this anyway. Of course I will get to it myself in time, but this doesn't seem like a good highest priority for me at the moment, especially since I don't understand the advantage of GitHub artifacts over the already existing system, except that it exists as a standard. If someone wants to tackle it, it would be appreciated! I'll keep an eye out for updates; thanks for the contribution!
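A rough sketch of that secondary-workflow idea, assuming actions/download-artifact@v4, which accepts run-id and github-token inputs; the workflow name and input name are hypothetical:

```yaml
# Hypothetical helper workflow: repackage artifacts from an earlier run
# without re-executing the build jobs.
name: Repackage
on:
  workflow_dispatch:
    inputs:
      run-id:
        description: Workflow run ID to pull artifacts from
        required: true

jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          run-id: ${{ inputs.run-id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      # ... run only the final packaging commands here ...
```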

Mudb0y commented 1 month ago

I just pushed a potential fix to the uploading stage; it should work fine now. Once everything builds we'll find out, and I'll update this pull request if everything passes. To answer your other questions: the way you would get a link to put the nightly builds on the site is through nightly.link. That's a service that gives you static direct links to the latest artifacts without having to log in to GitHub to download them, which is an ideal solution. I'd set the links up too if I could, but this has to be done by one of the maintainers once this is merged.
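For what it's worth, the links nightly.link generates follow a predictable pattern, roughly https://nightly.link/samtupy/nvgt/workflows/&lt;workflow-file&gt;/&lt;branch&gt;/&lt;artifact-name&gt;.zip, where the workflow file name, branch, and artifact name would be filled in from whatever this PR ends up using; that's the general shape rather than the exact final URLs.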