ben-grande / qusal

Salt Formulas for Qubes OS.

sys-cacher should handle addition of new repository definitions automatically with inotify #44

Closed: tlaurion closed this 1 week ago

tlaurion commented 3 months ago


Current problem (if any)

When deploying sys-cacher, dom0's update-check mechanism is currently nullified. In addition, some repository lists are not taken into consideration (extra repositories are not touched by the sys-cacher deployment as of now).

This is expected: qubes are responsible for checking for updates, but they cannot, since the URLs of their parent templates are rewritten to talk only through apt-cacher. Update checks called from dom0 silently fail in the qubes, and no parent templates are reported as having updates.

Proposed solution

There are multiple possible solutions here, each with its own drawbacks and upsides.

This is a case study advocating for #31 :)

The value to a user, and who that user might be

Users would be able to deploy new repositories/software for testing in qubes, deploy repositories in templates without needing to rewrite URLs manually, and receive template update notifications from the dom0 widget as normally expected.

tlaurion commented 3 months ago

@ben-grande thoughts? By detecting whether we are in a template or a qube via qubesdb (and possibly further checks, such as tags?), all repository URLs, not just the currently defined paths, could be rewritten at runtime. That would let qube update checks work as implemented by dom0's update-check mechanism and provide the expected template upgrade hints, which silently fail today (discussed under shaker). The solution is proposed under #31, with background discussion on the Qubes forum as pointed out there.
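To illustrate the detection step: inside a Qubes VM, `qubesdb-read /qubes-vm-type` reports the VM class, which is enough to pick a rewrite direction. A minimal sketch; `classify_vm` is a hypothetical helper, not part of qusal:

```shell
# Sketch only: pick the URL rewrite direction from the qube's type,
# read via qubesdb-read (available inside Qubes VMs).
classify_vm() {
    # $1: VM type string, e.g. TemplateVM, StandaloneVM, AppVM, DispVM
    case "$1" in
        TemplateVM|StandaloneVM) echo "use-cacher" ;;
        *) echo "use-upstream" ;;
    esac
}

# Fall back to AppVM when running outside a Qubes VM (e.g. while testing).
vm_type="$(qubesdb-read /qubes-vm-type 2>/dev/null || echo AppVM)"
classify_vm "$vm_type"
```

Tags could refine this further, but the template/non-template split alone already decides whether URLs should point at the cacher or upstream.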

ben-grande commented 2 months ago

Rewriting URLs to be usable on Non-Templates for the update check mechanism is possible. One edge case is if using sys-cacher as the netvm for a qube, then it won't work. The advantage of using apt-cacher-ng in the netvm is useful for when the qube doesn't have Qrexec tools.

So I favor either the update check or using apt-cacher-ng in the netvm for qubes without Qrexec tools. I am leaning towards the update check, because it is expected in normal Qubes OS usage, while the second is more niche.

Edit: In any case, this still doesn't require inotify; I can do this in rc.local for modifications during boot, disregarding repositories added later. #31 will carry the decision about inotify.
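A boot-time rewrite along those lines could be sketched as a one-shot pass over the known repository files. The paths and the `http://HTTPS///` form (apt-cacher-ng's convention for tunneling TLS repositories) are assumptions about the deployment, not qusal's actual code:

```shell
# Sketch of a one-shot rewrite suitable for rc.local: walk the known
# apt repository files once at boot. Paths and the HTTPS/// convention
# are assumptions, not qusal's actual implementation.
rewrite_all_to_cacher() {
    for f in /etc/apt/sources.list /etc/apt/sources.list.d/*.list; do
        [ -e "$f" ] || continue
        # apt-cacher-ng tunnels TLS via the http://HTTPS/// URL form.
        sed -i -e 's|https://|http://HTTPS///|g' "$f"
    done
}
```

Run once at boot, this covers everything present at that moment, which is exactly why repositories added later are out of scope without a watcher.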

tlaurion commented 2 months ago

Edit: In any case, this still doesn't require inotify; I can do this in rc.local for modifications during boot, disregarding repositories added later. #31 will carry the decision about inotify.

But the issue IS about applying this at runtime: the user can, and should, be able to deploy whatever repo they see fit, with whatever complexity abstracted away by sys-cacher.

I was recently able to finally deploy and use sys-cacher, after the last fix for the policies that were getting in the way.

The point there is that users should receive dom0 update notifications, and should continue to be able to install apps in qubes for the duration of their usage. That is Qubes OS. For a user switching to sys-cacher without having to be aware of what happens behind the scenes, the only way for this to work IS to have inotifywait running in the background, applying/reverting changes so that Qubes OS continues to report updates in the dom0 widget.

Reminder: qubes are responsible for checking for updates and reporting to dom0 that their templates need updates. Without further changes, that mechanism no longer works. True, URL rewriting can be reverted at runtime for previously deployed repositories in a qube, but only if ALL repository URLs are covered; if one is missed, updates won't be reported back. Applying repo-list changes in a template and calling update + install won't work either until the next run, and as of now won't work at all, because the scope is limited to known repositories.

The proposed solution would fix both and extend to whatever the user tries: amnesically in a qube, and automatically applied in templates so that the next command works as intended without any fuss.
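The background watcher being argued for could be sketched roughly like this, assuming the `inotifywait` tool from the inotify-tools package; `apply_rewrite` is a hypothetical hook standing in for sys-cacher's real rewriting logic:

```shell
# Sketch of the proposed background watcher. Requires inotifywait from
# inotify-tools; apply_rewrite is a hypothetical hook, not qusal code.
watch_repo_dirs() {
    inotifywait -m -q -e close_write -e moved_to -e create \
        /etc/apt/sources.list.d /etc/yum.repos.d 2>/dev/null |
    while read -r dir _events file; do
        # Rewrite (in templates) or revert (in appvms) URLs in the
        # file that just changed.
        apply_rewrite "$dir$file"
    done
}
```

The monitor mode (`-m`) keeps the pipeline alive for the lifetime of the qube, so repositories added at any point are picked up without user intervention.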

To be honest, that's the only solution I came across that fixes all use cases. Not sure about sys-cacher as a netvm, though; I've never heard of that corner case, and I'm not sure I understand it, to be honest.

tlaurion commented 2 months ago

@ben-grande nice to see #31 fixed.

Right, URL rewriting can be reverted at runtime for previously deployed repositories in a qube, but only if ALL repository URLs are covered. If one is missed, updates won't be reported back.

So this still applies in the current state, if a user with networking in a qube attempts to deploy a repo to test changes locally (I do it all the time). Great, now I can install iftop in my disposable sys-net VM, but I cannot deploy a new repo and signing key in other qubes without knowing that I have to adapt the repo URLs for them to be usable, nor will updates be reported to dom0 for that template until I figure that out.

My point, again, is that those things should be transparent for sys-cacher users. I agree this needs consideration on how to do it properly, but using sys-cacher should not break Qubes OS's expected contracts. Noted that README.md was updated to tell the user that firewall rules are now bypassed, but I am not convinced it needs to be that way, and my proposition here was rather the other way around.

With inotifywait:

Unless dom0 becomes aware of the repo URLs and sets the IPs for sys-cacher's firewall, I believe having appvms bypass firewall rules also hides sys-cacher implementation details, which may or may not be desired. Not sure.

We can discuss this off channel if you want.

ben-grande commented 2 months ago

@ben-grande nice to see #31 fixed.

Right, URL rewriting can be reverted at runtime for previously deployed repositories in a qube, but only if ALL repository URLs are covered. If one is missed, updates won't be reported back.

So this still applies in the current state, if a user with networking in a qube attempts to deploy a repo to test changes locally (I do it all the time). Great, now I can install iftop in my disposable sys-net VM, but I cannot deploy a new repo and signing key in other qubes without knowing that I have to adapt the repo URLs for them to be usable, nor will updates be reported to dom0 for that template until I figure that out.

My point, again, is that those things should be transparent for sys-cacher users. I agree this needs consideration on how to do it properly, but using sys-cacher should not break Qubes OS's expected contracts. Noted that README.md was updated to tell the user that firewall rules are now bypassed, but I am not convinced it needs to be that way, and my proposition here was rather the other way around.

Note that firewall rule bypassing only occurs if you want to cache packages of non-templates, as these qubes normally have a netvm that enforces the firewall rules of the qube itself. It doesn't mean it is unsafe, but people should be aware that the cached qube won't respect the Qubes Firewall, because the cacher proxy allows some kind of networking.

With inotifywait:

  • if in appvms, rewrite URLs to not use the cacher unless opted in. This means that on launch of the script, and for the lifetime of the qube, URLs would be standard and not apt-cacher-ng compliant, at the cost of cacher-compliant URLs living only in templates (and kept there by the template; appvms cannot persistently write there anyway). Update checks would consume additional bandwidth by bypassing apt-cacher-ng, but all templates would use it, unless a race condition happens and some templates try to fetch at exactly the same time (template updates are parallelized now, but that's a corner case we don't care about). Otherwise there would need to be a way to punch holes from DNS resolution into the firewall so that sys-cacher can only talk to the defined repos... Or I am missing something.
  • if in templates, make sure all URLs are apt-cacher-ng compliant. Templates sharing updates will always use cached repos and packages, and new repo definitions could be used instantly in templates, without having to reboot for a script that is applied on boot only. When I add a repo, I install the needed software at the same time; I should not have to run a script manually (and know of its existence beforehand) before calling apt or dnf or whatnot. sys-cacher as a service should make this totally transparent. The remaining problem is how to deploy the public key without having to export the proxy, which is a separate issue unless extrepo is used (and maintained), which is not the case right now.
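The two directions above boil down to a pair of inverse rewrites. A minimal sketch, again assuming apt-cacher-ng's `http://HTTPS///` TLS-tunneling convention; the function names are illustrative:

```shell
# Sketch: the two rewrite directions as inverse sed filters. The
# http://HTTPS/// form is apt-cacher-ng's TLS tunneling convention.
to_cacher()   { sed -e 's|https://|http://HTTPS///|g'; }
to_upstream() { sed -e 's|http://HTTPS///|https://|g'; }

line='deb https://deb.debian.org/debian bookworm main'
printf '%s\n' "$line" | to_cacher
# -> deb http://HTTPS///deb.debian.org/debian bookworm main
printf '%s\n' "$line" | to_cacher | to_upstream
# -> round-trips back to the original line
```

Because the rewrites are inverses, a watcher can apply `to_cacher` in templates and `to_upstream` in appvms over the same files without losing information.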

Unless dom0 becomes aware of the repo URLs and sets the IPs for sys-cacher's firewall, I believe having appvms bypass firewall rules also hides sys-cacher implementation details, which may or may not be desired. Not sure.

We can discuss this off channel if you want.

I think you are mixing the Qubes Firewall with the inotify issue; please open a new issue for the firewall if you are unsure. My explanation above remains: it only affects non-templates that have the cacher enabled.

I can introduce inotify under an optional Salt state. It is not high on my priority list right now, because the workaround is sudo apt-cacher-ng-repo, which takes 30-50 ms to run (hot). After I do some gardening of the bugs, I can think about implementing this feature. When implemented, I will probably pass the filename as an argument in case it executes faster; otherwise no argument will be passed and it will try all files.
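The filename-as-argument convention could look roughly like this; a sketch only, since the helper name and rewrite rule stand in for the real apt-cacher-ng-repo script:

```shell
# Sketch of the argument convention: rewrite only the file given as $1,
# else fall back to every known repository file. The helper name and
# the HTTPS/// rewrite are illustrative, not the real script.
repo_rewrite() {
    if [ -n "${1:-}" ]; then
        set -- "$1"
    else
        set -- /etc/apt/sources.list /etc/apt/sources.list.d/*.list
    fi
    for f in "$@"; do
        [ -e "$f" ] || continue
        sed -i -e 's|https://|http://HTTPS///|g' "$f"
    done
}
```

Called from an inotify handler, the changed filename is already known, so passing it through avoids re-scanning every file on each event.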

If you think you can create a Salt state or script to monitor, please open a PR or post fenced code blocks here in this issue for discussion.

ben-grande commented 1 week ago

If you think you can create a Salt state or script to monitor, please open a PR or post fenced code blocks here in this issue for discussion.

There has been no progress on this issue, and I didn't find a way for inotify not to interfere while the user is modifying the file manually; it always leads to rewriting the file and causing confusion in the editor, as the file needs to be reloaded and changes might be lost. If you find a solution, this issue may be reopened and an optional installation can be considered.