coreos / rpm-ostree

⚛📦 Hybrid image/package system with atomic upgrades and package layering
https://coreos.github.io/rpm-ostree

Previously downloaded container layers are not cleaned-up #4179

Closed by travier 1 year ago

travier commented 1 year ago

See https://github.com/coreos/rpm-ostree/issues/4176

There is apparently no way to clean up downloaded container layers right now:

[kinoite@fedora ~]$ sudo rpm-ostree update --check
Pulling manifest: ostree-unverified-image:docker://quay.io/fedora-ostree-desktops/kinoite-nightly
Importing: ostree-unverified-image:docker://quay.io/fedora-ostree-desktops/kinoite-nightly (digest: sha256:d42d768118d08d1235f1b05f7e6181c65633dee032231d1bdee54d05e2d39d03)
ostree chunk layers stored: 60 needed: 5 (886.3 MB)
Fetching ostree chunk sha256:a497c0dd8add (40.7 MB)
Fetched ostree chunk sha256:a497c0dd8add
Fetching ostree chunk sha256:c8b62b1738af (75.7 MB)
Fetched ostree chunk sha256:c8b62b1738af
Fetching ostree chunk sha256:4ab8079bf0fa (17.9 MB)
Fetched ostree chunk sha256:4ab8079bf0fa
Fetching ostree chunk sha256:7edd6f80ae30 (744.8 MB)
Fetched ostree chunk sha256:7edd6f80ae30
Fetching ostree chunk sha256:868bb500c3c5 (7.2 MB)
Fetched ostree chunk sha256:868bb500c3c5
Note: --check and --preview may be unreliable.  See https://github.com/coreos/rpm-ostree/issues/1579
No updates available.
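
For reference, a rough way to see how much space those cached layers occupy is to check the repo size before and after the --check run; /ostree/repo is the same system repo path used later in this thread:

sudo du -sh /ostree/repo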

So this downloaded the layers, but asking rpm-ostree to clean them up does not work:

[kinoite@fedora ~]$ sudo rpm-ostree cleanup -brpm
Deployments unchanged.
[kinoite@fedora ~]$ sudo ostree prune --refs-only
Total objects: 145726
No unreachable objects
[kinoite@fedora ~]$ sudo rpm-ostree update --check
Pulling manifest: ostree-unverified-image:docker://quay.io/fedora-ostree-desktops/kinoite-nightly
Note: --check and --preview may be unreliable.  See https://github.com/coreos/rpm-ostree/issues/1579
No updates available.
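
The prune above finds nothing because the downloaded layers are not actually unreachable: each layer is pinned by a ref under ostree/container, so ostree considers those objects live. Listing the refs (a sketch; this is the ref namespace that ostree-rs-ext writes) should show the cached layers:

sudo ostree refs ostree/container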

They are not re-downloaded, either by the --check above or by the update commands below:

[kinoite@fedora ~]$ sudo rpm-ostree update --download-only
Pulling manifest: ostree-unverified-image:docker://quay.io/fedora-ostree-desktops/kinoite-nightly
Update downloaded.
[kinoite@fedora ~]$ sudo rpm-ostree update
Pulling manifest: ostree-unverified-image:docker://quay.io/fedora-ostree-desktops/kinoite-nightly
Staging deployment... done
Upgraded:
  bluez 5.65-3.fc37 -> 5.66-4.fc37
...
Run "systemctl reboot" to start a reboot
travier commented 1 year ago
[kinoite@fedora ~]$ rpm-ostree --version
rpm-ostree:
 Version: '2022.16'
 Git: 07161e7838c3dd1ac56bc5ca6b863b43b51ade54
 Features:
  - rust
  - compose
  - container
  - fedora-integration
travier commented 1 year ago

According to https://github.com/coreos/rpm-ostree/issues/4176#issuecomment-1331187083 this should have been fixed, but that does not match what I have tried so far.

cgwalters commented 1 year ago

Let's not involve --check here because that conflates the issue with https://github.com/coreos/rpm-ostree/issues/4176

The problem domain here is really more like: "after rpm-ostree upgrade, how do I remove the downloaded data?"

Now, I'm actually not sure we've ever had an ergonomic way to reset this even for the ostree case. Basically, in the ostree case you need to run ostree reset fedora:fedora/x86_64/coreos/stable a261bacc9554cd670b65385ead7d661fb6d372c827bdc6c795744b6a8e8ded3f to explicitly reset the ref to the booted commit, and then do rpm-ostree cleanup -pr.
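
Spelled out as commands, a sketch (the ref and commit checksum are the illustrative values from the sentence above; in practice you would substitute your own ref and the booted deployment's commit from rpm-ostree status):

sudo ostree reset fedora:fedora/x86_64/coreos/stable a261bacc9554cd670b65385ead7d661fb6d372c827bdc6c795744b6a8e8ded3f
sudo rpm-ostree cleanup -pr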

The container case is similar. If you've pulled an updated container image and you want to un-pull it... well, you can, but the ergonomics here are terrible.

Given an FCOS system booted via an older container image, I then upgrade:

[root@cosa-devsh ~]# rpm-ostree status -b
State: idle
BootedDeployment:
* ostree-remote-registry:fedora:quay.io/fedora/fedora-coreos:stable
                   Digest: sha256:b1fb7bbbeed6442b0de56df3779f539b6403a3b90523d43467c11e120bb1a368
                  Version: 36.20221030.3.0 (2022-11-21T15:01:57Z)
               LiveCommit: 4a18a92b67ff786400ecc0b14813cce1629cc98d5737509b2f1e77af1cc18744
                 LiveDiff: 1 added
                 Unlocked: development
[root@cosa-devsh ~]# rpm-ostree upgrade
Pulling manifest: ostree-remote-image:fedora:docker://quay.io/fedora/fedora-coreos:stable
Importing: ostree-remote-image:fedora:docker://quay.io/fedora/fedora-coreos:stable (digest: sha256:522a589c4043daaec31c67453038d80c044fda49fc1aa494b8409ef3f264047c)
ostree chunk layers stored: 4 needed: 47 (725.0 MB)
Fetching ostree chunk sha256:17c6967af787 (186.0 MB)
Fetched ostree chunk sha256:17c6967af787
Fetching ostree chunk sha256:b272833f67fc (48.5 MB)
Fetched ostree chunk sha256:b272833f67fc
Fetching ostree chunk sha256:083bab60036d (38.7 MB)
Fetched ostree chunk sha256:083bab60036d
Fetching ostree chunk sha256:21869dcf6026 (76.6 MB)
Fetched ostree chunk sha256:21869dcf6026
Fetching ostree chunk sha256:326d0bb9dcf7 (21.2 MB)
...
Run "systemctl reboot" to start a reboot

Now, I have some extra container image layer refs:

[root@cosa-devsh ~]# ostree refs ostree/container | wc -l
99
[root@cosa-devsh ~]#
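
As an aside, the image refs and the per-layer refs live in separate subnamespaces, so they can be listed independently; the layout below is what ostree-rs-ext uses at the time of writing and should be treated as an internal detail, not a stable interface:

ostree refs ostree/container/image
ostree refs ostree/container/blob | wc -l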

Undo the staged deployment:

[root@cosa-devsh ~]# rpm-ostree cleanup -p

We can then remove the now-unreferenced image layers:

[root@cosa-devsh ~]# unshare -m /bin/sh -c 'mount -o remount,rw /sysroot && ostree container image prune-layers --repo=/ostree/repo'
Removed layers: 47
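
Note that the unshare -m wrapper confines the read-write remount of /sysroot to a private mount namespace, so the rest of the system keeps its read-only view. Afterwards, the container ref count should drop back to roughly the pre-upgrade number (illustrative check):

ostree refs ostree/container | wc -l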

But...yes we should be pruning layers as part of rpm-ostree cleanup -p...will fix.

travier commented 1 year ago

Thanks!