Closed travier closed 1 year ago
```
[kinoite@fedora ~]$ rpm-ostree --version
rpm-ostree:
 Version: '2022.16'
 Git: 07161e7838c3dd1ac56bc5ca6b863b43b51ade54
 Features:
  - rust
  - compose
  - container
  - fedora-integration
```
According to https://github.com/coreos/rpm-ostree/issues/4176#issuecomment-1331187083 this should have been fixed, but that does not match what I'm seeing so far.
Let's not involve `--check` here because that conflates the issue with https://github.com/coreos/rpm-ostree/issues/4176. The problem domain here is really more like: "after `rpm-ostree upgrade`, how do I remove the downloaded data?"
Now, I'm actually not sure we've ever had an ergonomic way to reset this even for the ostree case. Basically in the ostree case you need to `ostree reset fedora:fedora/x86_64/coreos/stable a261bacc9554cd670b65385ead7d661fb6d372c827bdc6c795744b6a8e8ded3f` to explicitly reset to the booted commit, and then do `rpm-ostree cleanup -pr`.
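Putting those two steps together, a minimal sketch of a reset helper for the ostree case might look like this. The ref and commit are the example values from above (substitute your own); the script only prints the plan, since actually resetting requires root on a live ostree host:

```shell
#!/bin/sh
# Sketch: reset an ostree host back to its booted state.
# REF and BOOTED_COMMIT are the example values from this thread;
# find yours via `rpm-ostree status` and `ostree refs`.
set -eu

REF="fedora:fedora/x86_64/coreos/stable"
BOOTED_COMMIT="a261bacc9554cd670b65385ead7d661fb6d372c827bdc6c795744b6a8e8ded3f"

# Print the plan rather than executing it; remove the `echo`s
# (and run as root) to actually reset.
echo "ostree reset $REF $BOOTED_COMMIT"   # point the ref back at the booted commit
echo "rpm-ostree cleanup -pr"             # drop the pending deployment and cached data
```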
The container case is similar. If you've pulled an updated container image, and you want to un-pull it...well, OK you can but the ergonomics here are terrible.
Given an FCOS system booted via an older container image, I then upgrade:
```
[root@cosa-devsh ~]# rpm-ostree status -b
State: idle
BootedDeployment:
* ostree-remote-registry:fedora:quay.io/fedora/fedora-coreos:stable
                  Digest: sha256:b1fb7bbbeed6442b0de56df3779f539b6403a3b90523d43467c11e120bb1a368
                 Version: 36.20221030.3.0 (2022-11-21T15:01:57Z)
              LiveCommit: 4a18a92b67ff786400ecc0b14813cce1629cc98d5737509b2f1e77af1cc18744
                LiveDiff: 1 added
                Unlocked: development
```
```
[root@cosa-devsh ~]# rpm-ostree upgrade
Pulling manifest: ostree-remote-image:fedora:docker://quay.io/fedora/fedora-coreos:stable
Importing: ostree-remote-image:fedora:docker://quay.io/fedora/fedora-coreos:stable (digest: sha256:522a589c4043daaec31c67453038d80c044fda49fc1aa494b8409ef3f264047c)
ostree chunk layers stored: 4 needed: 47 (725.0 MB)
Fetching ostree chunk sha256:17c6967af787 (186.0 MB)
Fetched ostree chunk sha256:17c6967af787
Fetching ostree chunk sha256:b272833f67fc (48.5 MB)
Fetched ostree chunk sha256:b272833f67fc
Fetching ostree chunk sha256:083bab60036d (38.7 MB)
Fetched ostree chunk sha256:083bab60036d
Fetching ostree chunk sha256:21869dcf6026 (76.6 MB)
Fetched ostree chunk sha256:21869dcf6026
Fetching ostree chunk sha256:326d0bb9dcf7 (21.2 MB)
...
Run "systemctl reboot" to start a reboot
```
Now, I have some extra container image layer refs:
```
[root@cosa-devsh ~]# ostree refs ostree/container | wc -l
99
[root@cosa-devsh ~]#
```
Undo the staged deployment:
```
[root@cosa-devsh ~]# rpm-ostree cleanup -p
```
We can remove the newly referenced image layers:
```
[root@cosa-devsh ~]# unshare -m /bin/sh -c 'mount -o remount,rw /sysroot && ostree container image prune-layers --repo=/ostree/repo'
Removed layers: 47
```
But...yes, we should be pruning layers as part of `rpm-ostree cleanup -p`...will fix.
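Until that lands, the manual workaround is to combine the two commands demonstrated above. A sketch (it only prints the plan, since both commands need root on a live ostree host):

```shell
#!/bin/sh
# Sketch: today's manual equivalent of "cleanup -p plus layer pruning",
# combining the commands shown above. Prints the plan; remove the
# `echo`s and run as root to execute.
set -eu

# prune-layers needs a writable /sysroot, hence the remount in a
# private mount namespace via unshare.
PRUNE="unshare -m /bin/sh -c 'mount -o remount,rw /sysroot && ostree container image prune-layers --repo=/ostree/repo'"

echo "rpm-ostree cleanup -p"   # drop the staged deployment
echo "$PRUNE"                  # then remove the now-unreferenced layers
```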
Thanks!
See https://github.com/coreos/rpm-ostree/issues/4176
There is apparently no way to clean up downloaded container layers right now: the upgrade downloaded the layers, but asking rpm-ostree to clean them up does not work, and they are not re-downloaded when asking for an update again.