DavidoTek / ProtonUp-Qt

Install and manage GE-Proton, Luxtorpeda & more for Steam and Wine-GE & more for Lutris with this graphical user interface.
https://davidotek.github.io/protonup-qt
GNU General Public License v3.0

Proton-tkg: Pass count arg when fetching workflow runs #237

Closed sonic2kk closed 1 year ago

sonic2kk commented 1 year ago

Partially Fixes #236.

The GitHub API call to fetch workflow runs for Proton-tkg was not taking into account the count parameter given to the __fetch_workflows method. This PR fixes that by passing per_page={count} to the API call. The API URL will now look something like this: https://api.github.com/repos/Frogging-Family/wine-tkg-git/actions/workflows/29745300/runs?per_page=100
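
For illustration, a minimal sketch of what the change amounts to (the helper name fetch_workflow_runs and the use of requests here are assumptions for the sketch, not the actual __fetch_workflows implementation):

```python
import requests

# URL from the PR description; per_page is the parameter being added.
RUNS_URL = (
    "https://api.github.com/repos/Frogging-Family/wine-tkg-git"
    "/actions/workflows/29745300/runs"
)

def fetch_workflow_runs(count: int = 100) -> list:
    """Fetch the latest workflow runs, passing count as per_page so the
    API returns up to count entries instead of its default of 30."""
    response = requests.get(RUNS_URL, params={"per_page": count}, timeout=10)
    response.raise_for_status()
    # Keep only successful runs, since failed runs have no build artifacts.
    return [run for run in response.json().get("workflow_runs", [])
            if run.get("conclusion") == "success"]
```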

This was discovered to be causing issues as the "Proton-tkg (Wine Master)" CI has had a significant number of failures recently, so ProtonUp-Qt was falling back to only displaying the releases. By default, this API only returns the last 30 builds. Since ProtonUp-Qt filters to only show successful builds (as there are no build artifacts for failed builds), and the last 30+ builds have failed, the tags returned from __fetch_workflows were empty.


Currently all other Proton-tkg and Wine-tkg builds are unaffected as they do not have a significant number of failures, but this issue would impact any workflow that ProtonUp-Qt tries to download from if its last 30 builds failed: https://api.github.com/repos/Frogging-Family/wine-tkg-git/actions/workflows/29745300/runs

The "drawback" to adding this is that the time to load "Proton-tkg (Wine Master)" builds, since it's trying to fetch more artifacts. The solution to this would be to restrict the count passed to the method calls for the ctmod rather than anything being wrong with adding the per_page parameter to the API call. We pass count to other GitHub API calls, so it makes sense to pass it here too, it's just that now that we correctly pass it there is a slightly increased delay (for me it adds around about 2-3 seconds I think).


I opened #236 as I wasn't sure if I could debug this; I was looking into it for a while and didn't make much progress, so I figured I'd report it. Of course, once I opened the bug report, I figured out how to resolve it :sweat_smile:

sonic2kk commented 1 year ago

I should also mention that once more than count builds fail (default is count=100), the issue will resurface, but there isn't much we can do about that without increasing count. Even then, this problem would continue: if 200 builds fail, the count would need to be bumped again, and so on.

The solution in this PR just passes count to the API calls as it is given to the method calls, which is consistent with other API call behaviour. While it currently solves the problem mentioned in #236, the real issue is that so many builds of this tool are failing, so ProtonUp-Qt can only go back so far to look for passing builds.

A solution which could go hand-in-hand is some user-facing option to pass count. A command line option might be good (like the one we have for passing a GitHub token); then advanced users can override the count. I'm not sure what this would be called or even if it's a good idea, it was just an idea I had earlier today :-)

In terms of the UI, I'm not sure where this would go - some kind of preferences pane for ProtonUp-Qt is not necessarily a bad idea, but it would need a bit of thought that is out of scope for this PR. And by passing count in this PR we can pave the way to make this a setting instead of leaving it at the GitHub API default of 30.
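
As a rough illustration of what such an option could look like (the --fetch-count flag is purely hypothetical and does not exist in ProtonUp-Qt; it just mirrors how a command line override might be wired up):

```python
import argparse

# Hypothetical sketch only: --fetch-count is not a real ProtonUp-Qt flag.
parser = argparse.ArgumentParser(prog="protonup-qt")
parser.add_argument(
    "--fetch-count", type=int, default=100,
    help="maximum number of workflow runs to fetch per compatibility tool",
)
args = parser.parse_args()
print(f"Fetching up to {args.fetch_count} workflow runs")
```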

DavidoTek commented 1 year ago

[...] would run into this problem eventually if there were enough failed builds for a workflow. Even if they were to get the last 1,000,000 builds (silly example), if the last 1,000,000 builds failed then it would have this issue.

Hmm, it would be possible to run __fetch_workflows with per_page=30&page=1 and, if it returns [] because all 30 builds have failed (it needs to be checked that this is the reason and not e.g. a missing internet connection), try per_page=30&page=2 and so on.
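
A sketch of how that check could be done (the helper name is made up; the GitHub API response does carry a workflow_runs list whose entries have a conclusion field):

```python
import requests

def empty_because_all_failed(url: str) -> bool:
    """True when page 1 has runs but none of them succeeded, as opposed
    to the request itself failing (e.g. a missing internet connection)."""
    try:
        response = requests.get(url, params={"per_page": 30, "page": 1}, timeout=10)
        response.raise_for_status()
    except requests.RequestException:
        return False  # Network/API problem, not a page full of failed builds
    runs = response.json().get("workflow_runs", [])
    return bool(runs) and not any(r.get("conclusion") == "success" for r in runs)
```

If this returns True, it makes sense to try the next page; if the request itself failed, it does not.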

The "drawback" to adding this is that the time to load "Proton-tkg (Wine Master)" builds, since it's trying to fetch more artifacts

I tested it and it took around 2 seconds with downloads in the background on a 100 Mbit/s connection. Not sure how many people have a very slow connection like 10 Mbit/s. I guess 100 is still okay, but loading many more could be problematic.

I opened #236 as I wasn't sure if I could debug this; I was looking into it for a while and didn't make much progress, so I figured I'd report it. Of course, once I opened the bug report, I figured out how to resolve it :sweat_smile:

Kinda reminds me of this: https://en.wikipedia.org/wiki/Rubber_duck_debugging :smile:

I should also mention that once more than count builds fail (default is count=100), the issue will re-surface

I can't say how likely it is that there will be more than 100 failed builds; we'll have to wait and see. As I wrote above, it would be possible to scroll through all pages until we get count successful builds or we reach the end of the pages. Not sure how much effort that would be, probably one loop with the termination condition len(tags) >= count.
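
Sketched out, that loop might look roughly like this (the max_pages safeguard is an assumption of the sketch, not something from the discussion so far):

```python
import requests

RUNS_URL = (
    "https://api.github.com/repos/Frogging-Family/wine-tkg-git"
    "/actions/workflows/29745300/runs"
)

def fetch_successful_runs(count: int = 30, max_pages: int = 10) -> list:
    """Page through the workflow runs until we have count successful
    builds or we reach the end of the pages."""
    tags, page = [], 1
    while len(tags) < count and page <= max_pages:
        response = requests.get(
            RUNS_URL, params={"per_page": count, "page": page}, timeout=10
        )
        response.raise_for_status()
        runs = response.json().get("workflow_runs", [])
        if not runs:
            break  # Reached the end of the pages
        tags += [r for r in runs if r.get("conclusion") == "success"]
        page += 1
    return tags[:count]
```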

A solution which could go hand-in-hand is some user-facing option to pass count [...] In terms of the UI, I'm not sure where this would go - [...]

What I could think of is a "Load more" button, but I cannot say how intuitive that is when the list is empty.


Thanks

sonic2kk commented 1 year ago

Hmm, it would be possible to run __fetch_workflows with per_page=30&page=1 and, if it returns [] because all 30 builds have failed (it needs to be checked that this is the reason and not e.g. a missing internet connection), try per_page=30&page=2 and so on.

I checked the API calls and yeah, this should be possible to do. It would also mean going back until we hit a passing build and then getting the last 30 from that.

Since the default is 30 per page, we should be able to just go back page by page.

Not sure how much effort that would be, probably one loop with the termination condition

Yeah, in my head right now it seems like it could be straightforward to implement. I'll play around with this :-)

sonic2kk commented 1 year ago

One concern that just occurred to me with this approach of going through the pages is potentially hitting the GitHub API rate limit. If we're making multiple calls to go through the pages, and there are multiple pages of failed builds, this could become an issue.

DavidoTek commented 1 year ago

Yeah, in my head right now it seems like it could be straightforward to implement. I'll play around with this :-)

Okay, great!

One concern that just occurred to me with this approach of going through the pages is potentially hitting the GitHub API rate limit

Right, that could be an issue. We could use a combination of both approaches: 1) increase the count from 30 to 50 (maybe beginning from page 2), 2) do the multi-page fetching.

DavidoTek commented 1 year ago

I've implemented the multi-page approach with https://github.com/DavidoTek/ProtonUp-Qt/pull/237/commits/2cd002cb53e5171cdaa54d3a1a7f1362c2f66fdd. It will load 30 entries per page and up to 4 pages. If it doesn't find any valid entries on the 4 pages, it will return [] as normal. To ensure it will not request empty pages, I've added a variable at_least_one_failed that is set to True if it found a failed entry. If there are neither successful nor failed entries, it will stop loading further pages.
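
Paraphrased as a sketch of the logic described above (not the actual commit code; details like the status-code check are assumptions):

```python
import requests

RUNS_URL = (
    "https://api.github.com/repos/Frogging-Family/wine-tkg-git"
    "/actions/workflows/29745300/runs"
)

def fetch_workflows(count: int = 30, max_pages: int = 4) -> list:
    """Load up to max_pages pages of count runs each; return [] when no
    valid (successful) entries are found at all."""
    tags = []
    for page in range(1, max_pages + 1):
        response = requests.get(
            RUNS_URL, params={"per_page": count, "page": page}, timeout=10
        )
        if response.status_code != 200:
            break
        found_success = False
        at_least_one_failed = False
        for run in response.json().get("workflow_runs", []):
            if run.get("conclusion") == "success":
                tags.append(run)
                found_success = True
            elif run.get("conclusion") == "failure":
                at_least_one_failed = True
        # Neither successful nor failed entries on this page means we've
        # run past the end, so don't request further (empty) pages.
        if not found_success and not at_least_one_failed:
            break
        if len(tags) >= count:
            break
    return tags[:count]
```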

One concern that just occurred to me with this approach of going through the pages is potentially hitting the GitHub API rate limit

I've left it at 30 for now. I guess it is not that likely that many pages are fetched while the user also wants to download other compatibility tools.

sonic2kk commented 1 year ago

Thanks! I had meant to take a look at this but got a bit busy, sorry for not responding.

I think it looks good too, makes sense. For a nightly build like this it's also not likely that a user would want more than a handful of builds imo; personally speaking, I usually just go with the latest :-)

DavidoTek commented 1 year ago

Thanks.

Thanks! I had meant to take a look at this but got a bit busy, sorry for not responding.

No worries, I found some time for it and it was a quick and simple fix.

I think it looks good too, makes sense.

Great, I will merge it then.

For a nightly build like this it's also not likely that a user would want more than a handful of builds imo; personally speaking, I usually just go with the latest :-)

Okay, that is what I thought too.