Quiq / registry-ui

Web UI for Docker Registry
Apache License 2.0

Purge Tags Not Working on Outdated Operating System #82

Closed Drophoff closed 20 hours ago

Drophoff commented 4 days ago

I downloaded tag 0.10.3 and built it with Go 1.23.1 on Debian (bullseye, amd64, GLIBC 2.31). The binary was built with CGO_ENABLED=1 to use sqlite3.
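
For reference, a roughly equivalent build invocation from the source root might be (the output name is my choice, not taken from the project's build scripts):

CGO_ENABLED=1 go build -o docker-registry-ui .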

Scenario 1: Working

When I execute the following command within a Docker container, it runs fine without any failure: docker-registry-ui -config-file /opt/app/etc/config.yml -purge-exclude-repos none

INFO[2024-09-14T21:49:47+02:00] [RefreshCatalog] Started reading catalog...   logger=registry.client
⇨ http server started on [::]:80
INFO[2024-09-14T21:49:47+02:00] [RefreshCatalog] Job complete (46.606979ms): 6 repos found  logger=registry.client
INFO[2024-09-14T21:49:47+02:00] [CountTags] Started counting tags...          logger=registry.client
INFO[2024-09-14T21:49:47+02:00] [CountTags] Job complete (45.123387ms).       logger=registry.client

Scenario 2: Working

No failure when I execute: docker-registry-ui -config-file /opt/app/etc/config.yml -purge-include-repos none

INFO[2024-09-14T21:54:27+02:00] [RefreshCatalog] Started reading catalog...   logger=registry.client
⇨ http server started on [::]:80
INFO[2024-09-14T21:54:27+02:00] [RefreshCatalog] Job complete (37.15673ms): 6 repos found  logger=registry.client
INFO[2024-09-14T21:54:27+02:00] [CountTags] Started counting tags...          logger=registry.client
INFO[2024-09-14T21:54:27+02:00] [CountTags] Job complete (49.598244ms).       logger=registry.client

Scenario 3: Working

The application works very well with registry:2. I can pull and push images to the registry, and I can see the corresponding events and images in the UI. I therefore assume that the application itself is functional.

Scenario 4: Not Working

But when I execute: docker-registry-ui -config-file /opt/app/etc/config.yml -purge-tags

I get the following failure:

INFO[2024-09-14T21:55:55+02:00] [RefreshCatalog] Started reading catalog...   logger=registry.client
INFO[2024-09-14T21:55:55+02:00] [RefreshCatalog] Job complete (36.237769ms): 6 repos found  logger=registry.client
INFO[2024-09-14T21:55:55+02:00] Working on repositories: [book/client book/db book/parent/client book/parent/db book/parent/server book/server]  logger=registry.tasks.PurgeOldTags
INFO[2024-09-14T21:55:55+02:00] [book/client] scanning 1 tags...              logger=registry.tasks.PurgeOldTags
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x9530ff]

goroutine 1 [running]:
github.com/quiq/registry-ui/registry.(*Client).GetImageCreated(0xc00012c4e0, {0xc0000d4200, 0x1a})
        /opt/registry-ui-0.10.3/registry/client.go:275 +0x2ff
github.com/quiq/registry-ui/registry.PurgeOldTags(0xc00012c4e0, 0x0, {0x0, 0x0}, {0x0, 0x0})
        /opt/registry-ui-0.10.3/registry/tasks.go:105 +0x1f86
main.main()
        /opt/registry-ui-0.10.3/main.go:62 +0xdd5

I'm using the following config.yml within Docker, with a macvlan-based custom network:

listen_addr: 0.0.0.0:80
uri_base_path: /
performance:
  catalog_page_size: 100
  catalog_refresh_interval: 10
  tags_count_refresh_interval: 60
registry:
  hostname: docker-repo.domain.dot
  insecure: false
  username: user
  password: secure
  password_file: ''
  auth_with_keychain: false
access_control:
  anyone_can_view_events: true
  anyone_can_delete_tags: true
  admins: []
event_listener:
  bearer_token: Secure
  retention_days: 7
  database_driver: sqlite3
  database_location: data/registry_events.db
  deletion_enabled: true
purge_tags:
  keep_days: 90
  keep_count: 3
  keep_regexp: ''
  keep_from_file: ''
debug:
  templates: false

It makes no difference whether the application is already started or not; the error pattern is identical in both cases.

It should not be due to limited memory:

free -hb
               total        used        free      shared  buff/cache   available
Mem:            62Gi        28Gi        23Gi       128Mi        11Gi        34Gi
Swap:             0B          0B          0B

I'm using a pretty old Debian and Docker version. Feel free to close this issue in case it is not reproducible with current software.

roman-vynar commented 2 days ago

It is failing on the image type. What kind of images do you use (OCI, Docker, etc.)? Can you see the information about them in the web app successfully?

Try to narrow down to a single repo, e.g.:

-purge-include-repos book/client

The args -purge-include-repos and -purge-exclude-repos do not make sense without -purge-tags. You can also add -dry-run.
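
For example, a narrowed dry run combining these flags might look like this (paths taken from the report above):

docker-registry-ui -config-file /opt/app/etc/config.yml -purge-tags -purge-include-repos book/client -dry-run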

Drophoff commented 1 day ago

Thank you very much for the quick reply.

  1. Image Type: I use OCI images as well as Docker images.

  2. Web App: As described in scenario 3, I can see the images as well as the events in the UI. The UI is therefore functional.

  3. Narrow Repo: Restricting to a single repository is not necessary, as there is only one repository "book" with subfolders like "client", "server", "db" or "parent" in the Docker registry. These repos contain a mix of OCI and Docker images. For example, "book/client" is a Docker image, whereas "book/parent/client" is of type OCI, and "book/client" extends the image "book/parent/client".

  4. Missing Purge Tags: Unfortunately, I don't understand this point. As written in scenarios 1 and 2, the purge-exclude-repos and purge-include-repos calls are working. In the examples I used none as the value, and these calls work as expected.

The issue revolves solely around the purge-tags functionality (see scenario 4).

According to the documentation, it should be callable without further arguments: 10 3 * * * root docker exec -t registry-ui /opt/registry-ui -purge-tags

I would therefore assume that the purge-tags call in scenario 4 should work, which unfortunately is not the case.

roman-vynar commented 1 day ago

-purge-include-repos and -purge-exclude-repos on their own do not do anything without -purge-tags. In other words, they are optional args that supplement -purge-tags by limiting which repos to work (purge) on.

Looks like you have an Image Index (Index Manifest) with no sub-images, with broken references to non-existing sub-images, or an Index that cannot be resolved to a single image because of an arch/platform mismatch between the local platform and the sub-images.

I have added code to print an error at the place reported in the stack trace, so you will be able to see what the image ref is and check it in the UI for more details. Try the Docker image quiq/registry-ui:debug with the extra code. You should see something like: Cannot resolve Image/ImageIndex to descriptor for image reference XXX: YYY
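
For illustration, the kind of guard described above might look roughly like the following Go sketch (my own sketch, not the project's actual patch; the helper name imageCreated, its signature, and the use of go-containerregistry's v1.Image are assumptions):

package registry

import (
	"fmt"
	"time"

	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// imageCreated is an illustrative helper: instead of dereferencing a possibly
// nil image, it turns an unresolvable reference (e.g. an image index with
// missing or platform-mismatched sub-images) into a readable error.
func imageCreated(ref string, img v1.Image, resolveErr error) (time.Time, error) {
	if resolveErr != nil || img == nil {
		return time.Time{}, fmt.Errorf("cannot resolve Image/ImageIndex to descriptor for image reference %s: %v", ref, resolveErr)
	}
	cfg, err := img.ConfigFile()
	if err != nil {
		return time.Time{}, fmt.Errorf("reading config of %s: %w", ref, err)
	}
	return cfg.Created.Time, nil
}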

Drophoff commented 20 hours ago

Thank you very much for pointing out the faulty images and clarifying the use of the program arguments.

I have just run purge-tags again and no longer received an error message. I currently assume that the faulty images were cleaned up by the nightly CI run, so unfortunately I could not try the debug image.

I very much like the idea of displaying an error message and would always prefer it to a segmentation violation.

I apologize for the inconvenience.