ManiMatter / decluttarr

Watches radarr, sonarr, lidarr, readarr and whisparr download queues and removes downloads if they become stalled or no longer needed.
GNU General Public License v3.0

Manifest unknown when pulling dev and latest #64

Closed: ManiMatter closed this 3 months ago

ManiMatter commented 3 months ago

Hi @craggles17, I adapted the dev.yml in the same way as you did main.yml, and now the 'manifest unknown' error seems to be back when pulling the dev image as well as the latest image.

Would you mind having a look? Thank you so much for sharing your expertise here.

craggles17 commented 3 months ago

Sure. Shouldn't be too challenging. Just to be clear, the manifest unknown error was fixed by the changes on main, right? And it's just dev that's now the problem?

ManiMatter commented 3 months ago

Thank you!

I was not able to pull either one anymore. I wonder whether dev.yml and main.yml somehow interfere with one another.

ManiMatter commented 3 months ago

This should be fixed. Automated removal of untagged versions does not work with multi-arch platforms.
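
For context: a multi-arch image pushed with buildx is a manifest list, and its per-architecture manifests show up on GHCR as additional untagged package versions. An untagged-versions cleanup deletes exactly those, and the still-tagged list then points at digests that no longer exist, which is what surfaces as 'manifest unknown' on pull. A quick way to see that structure (the image path ghcr.io/manimatter/decluttarr is assumed here for illustration; output trimmed, digests elided):

$ docker buildx imagetools inspect ghcr.io/manimatter/decluttarr:dev
Name:      ghcr.io/manimatter/decluttarr:dev
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Manifests:
  Name:     ghcr.io/manimatter/decluttarr:dev@sha256:…
  Platform: linux/amd64
  Name:     ghcr.io/manimatter/decluttarr:dev@sha256:…
  Platform: linux/arm64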

craggles17 commented 3 months ago

Well done! Let me know if you need help with anything else

Thanks

Craggles

ManiMatter commented 3 months ago

Maybe one question.

I like the cleanup of untagged containers, since when I push a new dev version, the former one loses the dev tag and then gets deleted (which is desired).

Now, with the cleanup step deactivated, I have many 'dead' dev images lying around.

Activating the cleanup doesn't work, because it also deletes the separate per-architecture images that are referenced by the latest- and dev-tagged packages.

Here's what I wonder: is there a way to add a label to the architecture images, too?

Example: the dev package references 4 untagged packages. Is there a way to label these packages with their respective architecture tag, i.e. dev-arm64 and dev-amd64?

Then the cleanup would work again.
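
Roughly what I have in mind, as an untested sketch (the image path ghcr.io/manimatter/decluttarr is just for illustration): build and push each architecture under its own explicit tag, so the per-arch manifests are no longer untagged:

$ docker buildx build --platform linux/amd64 --tag ghcr.io/manimatter/decluttarr:dev-amd64 --push .
$ docker buildx build --platform linux/arm64 --tag ghcr.io/manimatter/decluttarr:dev-arm64 --push .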

What's written here:

https://github.com/docker/buildx?tab=readme-ov-file#building-multi-platform-images

Using multiple native nodes provide better support for more complicated cases that are not handled by QEMU and generally have better performance. You can add additional nodes to the builder instance using the --append flag.

Assuming contexts node-amd64 and node-arm64 exist in docker context ls;
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .

Do you think we could do something with that? Or any other ideas?

Cheers

ManiMatter commented 3 months ago

Plus, the download stats seem off with the new solution. I just pulled dev and it still shows 0... any ideas?

[Screenshot 2024-03-29 at 15:16:15]

ManiMatter commented 3 months ago

Makes me wonder whether I should not use buildx but simply have two build steps, once for arm64 and once for amd64, tag those respective packages as "dev-arm64" and "dev-amd64" (in case of dev) or "3.0.1-arm64" and "3.0.1-amd64" for latest packages (the version obviously changes per release), and then have an overall "dev" and "3.0.1"/"latest" package that points to these individual packages (same as buildx does it).

This way the sub-packages would have a tag and wouldn't be removed by the cleanup script, and the download counts would still show correctly...

Only problem: no clue how to do it. Ideas?
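
One shape this could take, again as an untested sketch with the image path assumed as above: push the per-arch tags as sketched earlier, then stitch them together into the overall dev (or versioned / latest) tag with docker buildx imagetools create, which produces the same kind of manifest list that a multi-platform buildx build would:

$ docker buildx imagetools create \
    --tag ghcr.io/manimatter/decluttarr:dev \
    ghcr.io/manimatter/decluttarr:dev-amd64 \
    ghcr.io/manimatter/decluttarr:dev-arm64

Since every per-arch manifest then carries a tag of its own, the untagged-versions cleanup should leave them alone.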