sailfishos-chum / sailfishos-chum-gui

GUI application for utilising the SailfishOS:Chum community repository
https://openrepos.net/content/olf/sailfishoschum-gui-installer
MIT License

WIP: Implement a method of retrieving the package update time #155

Open piggz opened 1 year ago

piggz commented 1 year ago

The package update time is based on the last update of the source package, not the binary package. To do this, I have a server application, currently hosted on piggz.co.uk:8081, which uses the OBS API to build a JSON model relating projects, packages, repositories and binaries.

To find the package mtime, you iterate the model, searching for the project, then find the repository matching the current device, then look for the package in the binaries. When the package is found among the binaries, get the mtime from the source package list.
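The lookup described above might be sketched as follows; note that the JSON field names used here ("projects", "repositories", "binaries", "packages", "source", "mtime") are assumptions about the cache schema, not the server's actual output:

```python
# Sketch of the cache lookup: project -> repository -> binary,
# then read the mtime from the project's source package list.
# All field names are hypothetical.

def find_package_mtime(model, project, repo_name, binary_name):
    for proj in model["projects"]:
        if proj["name"] != project:
            continue
        for repo in proj["repositories"]:
            # repo names assumed to look like "4.5.0.18_aarch64"
            if repo["name"] != repo_name:
                continue
        for binary in repo["binaries"]:
                if binary["name"] == binary_name:
                    # the binary records which source package built it
                    src = binary["source"]
                    return proj["packages"].get(src, {}).get("mtime")
    return None
```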

Currently I don't know how to get the repo name from SSU, therefore the arch is hardcoded as aarch64.

I have included some simple updates to the QML to show the update time, nothing more complicated such as a section of "recently updated apps".

Server code will be posted later for comment. It could certainly be improved (currently a single large cache), and I'm happy for others to take this on if the idea is sound.

rinigus commented 1 year ago

I would suggest making mtime a property of ChumPackage. That would make sense, as it is a package property.

Next, I suggest updating the list of packages in Chum, or making a separate singleton for it. ChumPackagesModel is not a good choice, as it is used for different package listings, sorting and presentation. That's also the one we will expand to make the Recent list. Look at how chumModel is created in PackagesListPage. It would be bad to update the list every time the user moves to a new page ...

Once JSON list handling is moved to Chum, we can drop ChumPackagesModel.busy and use Chum.busy, as is done already.

In terms of http://piggz.co.uk:8081/ - it would be much better to load data only for the active repository. Right now it is a rather large download (5 MB) and took 43 seconds over here. But that could be done later as well. I guess some REST API or similar is needed. It would also be good to have a timestamp file/facility showing when the latest list was generated, to avoid loading it if that is not needed. ... Hmm, looking at this file - we can probably just keep the mtime section in the JSON. No need to have all those packages listed in "repositories".

Currently I don't know how to get the repo name from SSU, therefore the arch is hardcoded as aarch64.

I don't follow, sorry. Note that older SFOS releases don't have aarch64.

BTW, I accidentally looked into https://repology.org/ . While they have SUSE support (probably via OBS), they don't list modified times in their API, unfortunately. Otherwise we could have used that as a backend...

Olf0 commented 1 year ago

Currently I don't know how to get the repo name from SSU, therefore the arch is hardcoded as aarch64.

ssu lr | fgrep sailfishos-chum. It should be easy to determine the corresponding call to libssu at https://github.com/sailfishos/ssu
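Picking the chum repository out of the ssu lr output could look like the sketch below; the assumed line format (repository name followed by its URL as the last token) should be checked against real ssu lr output before relying on it:

```python
def chum_repo_url(ssu_lr_output):
    """Find the sailfishos-chum line in `ssu lr` output and return its
    URL, from which release and arch could then be parsed. The line
    format (URL as last whitespace-separated token) is an assumption."""
    for line in ssu_lr_output.splitlines():
        if "sailfishos-chum" in line:
            return line.split()[-1]
    return None
```

In the app itself the output would come from running ssu lr (or, better, the corresponding libssu call) rather than from a string.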

My thoughts:

piggz commented 1 year ago

Currently I don't know how to get the repo name from SSU, therefore the arch is hardcoded as aarch64.

ssu lr | fgrep sailfishos-chum. It should be easy to determine the corresponding call to libssu at https://github.com/sailfishos/ssu

My thoughts:

  • IMO an external cache is not a good idea: Someone has to run and maintain it. If something happens (illness, car accident etc.) to that someone (currently you @piggz), the whole thing collapses.

I do not propose keeping the server on piggz.co.uk; that is just for the WIP/demo. It would obviously be better somewhere else, and I am open to ideas for this.

IMO there is enough compute power and bandwidth locally available to perform all data extraction from the SailfishOS-OBS on the device.

There are two problems here. First, the OBS API requires credentials; at the moment the server app is using my credentials to build the cache.
Second, it takes close to an hour to actually build the cache with all the calls, so you probably don't want to do this on device :) (yes, it would be cut down by not having to check all versions/archs, but it is still a slow process).

  • I do understand the issue, which is minor from my point of view. But the effort to address it seems huge. As I will not contribute code for this, I must leave it up to you to judge whether these efforts are inappropriate / out of proportion. What exacerbates this aspect is …

  • … Jolla's trajectory and their handling of SailfishOS: After some consideration, I am sure SailfishOS is deliberately not maintained sustainably, and this is not going to change unless a new, big corporate customer pays significantly for it, i.e. one which replaces the millions Rostelecom put into SFOS. Mind that in the past 7 years Rostelecom was the only significant customer, so the likelihood of that happening is close to zero. In short: SailfishOS is being left to die slowly.

rinigus commented 1 year ago

I think having some kind of extra service is the best way for now. It could be on @piggz's domain or some cloud storage. We would probably need some PC running a check once in a while and updating that JSON. As for the dependence on one person: yes, that's what comes with SFOS in general.

Re general concerns, I'll reply in the other thread.

piggz commented 1 year ago

@rinigus I've implemented all your comments. The cache is entirely optional; if cache loading fails, you are no worse off. I still need to implement the architecture detection to create the correct repository name for searching.

Next step will be to post the server code and get comments on that.

piggz commented 1 year ago

Server app is here https://github.com/sailfishos-open/chum-package-cache

Obvious improvements would be to generate separate caches for different releases, using GET params to request the required version, thereby making the download smaller. At the moment, though, it is nice and simple and doesn't require many NPM modules, relying only on GET and POST.
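Filtering the full cache down to a single release/arch, as suggested, could be sketched like this; the repository naming convention ("release_arch") and the JSON layout are assumptions about the cache format:

```python
def cache_for_release(full_cache, release, arch):
    """Return a slimmed-down copy of the cache containing only the
    repositories matching '<release>_<arch>' (naming is an assumption),
    so a GET handler could serve a much smaller download per device."""
    wanted = f"{release}_{arch}"
    slim = {"projects": []}
    for proj in full_cache.get("projects", []):
        repos = [r for r in proj.get("repositories", [])
                 if r.get("name") == wanted]
        if repos:
            # keep the project's other fields, replace the repo list
            slim["projects"].append({**proj, "repositories": repos})
    return slim
```

A GET handler would then call this with the release and arch taken from the request's query parameters.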

rinigus commented 1 year ago

chum-package-cache has just README, no code :)

piggz commented 1 year ago

it does now :)

Olf0 commented 1 year ago

Off topic: Honestly (and frankly, as usual), IMO fixing the bugs which severely affect basic functionality of the SailfishOS:Chum GUI app should be prioritised over new features, specifically technically complex ones, which are hence complicated to implement.

Specifically, I consider bugs #184 and #165 / #103 pretty bad, because both describe a specific core function of the GUI app as fundamentally broken. Plus bug #186 appears to be very "low-hanging fruit" (should be trivial to address for someone QML-savvy).

Furthermore, feature request #86, plus likely also #105 and #126, appear to be easy to implement for a much larger gain than displaying the correct package update time.

But as usual, as we are all doing this in our spare time, it is largely fine to prioritise things by their fun factor; only to a certain extent, though, because IMO developers / maintainers should (but not "must") assume a little responsibility for their software.

rinigus commented 1 year ago

Yes, spring cleaning should be done. And I agree, #184 is a rather annoying one; I would prioritise that. Will try to check the others as well.

Olf0 commented 1 year ago

My thoughts:

  • IMO an external cache is not a good idea: Someone has to run and maintain it. If something happens (illness, car accident etc.) to that someone (currently you @piggz), the whole thing collapses.

I do not propose keeping the server on piggz.co.uk; that is just for the WIP/demo. It would obviously be better somewhere else, and I am open to ideas for this.

Oh, neither the DNS name nor you as a person was something I intended to address. What I meant was:

AFAIU this does not need to be a permanently running service: its data gathering is cyclical, and its output simply must be accessible by clients, i.e. downloadable via HTTPS. Then a GitHub app or even a GH action may be sufficient: they can be triggered time-based (basically like a cron job), run on a preconfigured VM image (Ubuntu, plus macOS and Windows) in user space (but sudo works), install packages locally from the regular package repos (sudo apt-get install foo), use anything in accessible GH repos via git clone / git checkout (I suggest creating a separate repo for this), can push stuff to the website <name>.github.io or any writeable repo, etc. In short: only permanently running services are forbidden.
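A scheduled workflow along these lines could provide the cron-like trigger described above. This is only a sketch: the script names, the secret name and the repo layout are all hypothetical.

```yaml
name: refresh-chum-cache
on:
  schedule:
    - cron: "0 3 * * *"     # time-based trigger, basically a cron job
  workflow_dispatch:         # plus a manual trigger for testing
jobs:
  build-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the JSON cache
        # hypothetical entry point; OBS credentials kept in a repo secret
        run: ./build-cache.sh
        env:
          OBS_CREDENTIALS: ${{ secrets.OBS_CREDENTIALS }}
      - name: Publish over HTTPS
        # e.g. commit the generated JSON to a gh-pages branch
        run: ./publish.sh
```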

This would allow using the regular access privileges at GitHub to avoid a "single person" single point of failure, and the infrastructure is guaranteed to be kept nearly 100% available by others (i.e., GitHub staff).