Closed · dsnodgrass45 closed this issue 2 years ago
I am facing the same issue. I tested gcr-cleaner-cli locally, and the default behavior of cleaning all untagged images is not working. To add more context: when I apply a filter such as --tag-filter-any "^remove.+$",
the cleaner successfully removes all of the images carrying a matching remove tag.
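For reference, the semantics of a tag filter like this can be sketched roughly as follows (a hedged illustration, not gcr-cleaner's actual code; the digests and tags below are made up):

```python
import re

# --tag-filter-any marks an image for deletion if ANY of its tags
# matches the regular expression.
pattern = re.compile(r"^remove.+$")

# Hypothetical images: digest -> list of tags.
images = {
    "sha256:aaa...": ["remove-2022-03-01"],  # matches -> deletion candidate
    "sha256:bbb...": ["latest"],             # no match -> kept
    "sha256:ccc...": [],                     # untagged -> no tag to match
}

candidates = [
    digest for digest, tags in images.items()
    if any(pattern.match(tag) for tag in tags)
]
print(candidates)  # → ['sha256:aaa...']
```

Note that under a tag filter, untagged images have no tag to match, which is a different code path from the default untagged cleanup.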
Hi @dsnodgrass45 - can you please share a screenshot of your Artifact Registry layout? If you could switch to using the CLI with debug mode, we should see more output.
@sethvargo I have added screenshots of the repo/image layout. Please note that the logs were redacted to hide the actual project/repo names; the screenshots show the real names, but the redacted entries in the logs correspond to them.
What is the output of using the CLI:
gcr-cleaner -repo us-central1-docker.pkg.dev/snod-prod/automate-pipeline -dry-run -keep 1 -recursive
> docker run -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli
Unable to find image 'us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli:latest' locally
latest: Pulling from gcr-cleaner/gcr-cleaner/gcr-cleaner-cli
01a0d6d9dcc9: Pull complete
53ebec517518: Pull complete
Digest: sha256:cb94d3cd9c3c52b9db008805079da33e3c5e02740de230879a783e09754b7093
Status: Downloaded newer image for us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli:latest
missing -repo
Hi @dsnodgrass45 could you please run the command:
gcr-cleaner -repo us-central1-docker.pkg.dev/snod-prod/automate-pipeline -dry-run -keep 1 -recursive
and paste the result?
@sethvargo I'm having the above issue of "missing -repo" when trying to run gcr-cleaner-cli locally.
If you're trying to use it via Docker:
docker run -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli -- -repo us-central1-docker.pkg.dev/snod-prod/automate-pipeline -dry-run -keep 1 -recursive
Here is my output from running it locally. Sorry for the delay.
> ./gcr-cleaner-cli -repo us-central1-docker.pkg.dev/snod-prod/automate-pipeline -dry-run -keep 1 -recursive
WARNING: Running in dry-run mode - nothing will actually be cleaned!
Deleting refs older than 2022-04-03T21:21:17Z on 2 repo(s)...
us-central1-docker.pkg.dev/snod-prod/automate-pipeline
✗ no refs were deleted
us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom
✗ no refs were deleted
Hi @dsnodgrass45
Thank you for the reply. Does the currently authenticated user have permission to list packages in those repos? What is the output of:
gcloud artifacts packages list --location us-central1 --repository automate-pipeline --project snod-prod
The user does have list permissions.
> gcloud artifacts packages list --location us-central1 --repository automate-pipeline --project snod-prod
Listing items under project snod-prod, location us-central1, repository automate-pipeline.
PACKAGE CREATE_TIME UPDATE_TIME
http-doom 2022-03-28T14:34:13 2022-04-02T18:00:02
@sethvargo the gcr-cleaner service account has roles/artifactregistry.repoAdmin, which includes both the .writer and .reader roles (the latter allows listing) on the repository.
Hi @dsnodgrass45 - thank you for your patience here. I just pushed up v0.7.2 which includes a lot more logging information that should point us towards any bugs. Could you please download v0.7.2 and run:
GCRCLEANER_LOG=debug ./gcr-cleaner-cli -repo us-central1-docker.pkg.dev/snod-prod/automate-pipeline -dry-run -keep 1 -recursive
@sethvargo got the new version going locally. Here is the desired output.
> GCRCLEANER_LOG=debug ./gcr-cleaner-cli -repo us-central1-docker.pkg.dev/snod-prod/automate-pipeline -dry-run -keep 1 -recursive
{"message":"using default token resolution for authentication","severity":"DEBUG","time":"2022-04-06T21:04:17Z"}
{"message":"gathering child repositories recursively","severity":"DEBUG","time":"2022-04-06T21:04:17Z"}
WARNING: Running in dry-run mode - nothing will actually be cleaned!
Deleting refs older than 2022-04-06T21:04:17Z on 2 repo(s)...
us-central1-docker.pkg.dev/snod-prod/automate-pipeline
{"message":"computed repo","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline","severity":"DEBUG","time":"2022-04-06T21:04:18Z"}
✗ no refs were deleted
us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom
{"message":"computed repo","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom","severity":"DEBUG","time":"2022-04-06T21:04:19Z"}
{"digest":"sha256:cc34637cef290a07e0944bc272c1ce450ced2075d7903a438298af012bfd27a1","message":"processing manifest","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom","severity":"DEBUG","tags":[],"time":"2022-04-06T21:04:19Z","uploaded":"2022-03-28T14:34:13-05:00"}
{"digest":"sha256:cc34637cef290a07e0944bc272c1ce450ced2075d7903a438298af012bfd27a1","message":"should delete","reason":"no tags","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom","severity":"DEBUG","time":"2022-04-06T21:04:19Z"}
{"digest":"sha256:cc34637cef290a07e0944bc272c1ce450ced2075d7903a438298af012bfd27a1","keep":1,"keep_count":0,"message":"skipping deletion because of keep count","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom","severity":"DEBUG","time":"2022-04-06T21:04:19Z"}
{"digest":"sha256:1d620b118edf50a371a78f73af2fa23b38f592f73b9324f7bfda1ba3c3d7ec15","message":"processing manifest","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom","severity":"DEBUG","tags":["latest"],"time":"2022-04-06T21:04:19Z","uploaded":"2022-03-30T12:30:07-05:00"}
{"digest":"sha256:1d620b118edf50a371a78f73af2fa23b38f592f73b9324f7bfda1ba3c3d7ec15","message":"should not delete","reason":"no filter matches","repo":"us-central1-docker.pkg.dev/snod-prod/automate-pipeline/http-doom","severity":"DEBUG","time":"2022-04-06T21:04:19Z"}
✗ no refs were deleted
Hi @dsnodgrass45
Thank you for the log output. Does the b9b44... image from your screenshot above still exist?
Here's what I'm seeing:
- cc3463... is processed first and is marked for deletion. However, since keep=1, it is kept.
- 1d620b... is processed next and is not marked for deletion because it has the latest tag.
- Therefore, no references are deleted.
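The keep-count behavior visible in the debug logs above ("should delete" followed by "skipping deletion because of keep count") can be sketched roughly like this (a simplified illustration, not gcr-cleaner's actual code; the digests are abbreviated from the logs):

```python
keep = 1  # corresponds to the -keep 1 flag

# Manifests in the processing order shown in the debug logs.
manifests = [
    {"digest": "sha256:cc3463...", "tags": []},          # untagged
    {"digest": "sha256:1d620b...", "tags": ["latest"]},  # tagged
]

kept = 0
deletable = []
for m in manifests:
    # Default policy: only untagged manifests are deletion candidates.
    should_delete = not m["tags"]
    if should_delete and kept < keep:
        kept += 1      # "skipping deletion because of keep count"
        continue
    if should_delete:
        deletable.append(m["digest"])

print(deletable)  # → [] -> "no refs were deleted"
```

With exactly one untagged manifest and keep=1, the candidate list ends up empty, matching the "no refs were deleted" output.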
@sethvargo so running it locally shows the cleaner evaluating the Artifact Registry repo as designed. I'm redeploying v0.7.2 and will try with Cloud Scheduler/Cloud Run again to see the results.
@sethvargo my initial testing with v0.7.3 is working now! I greatly appreciate your work on this and your patience in getting it resolved.
Thanks!
I have set up gcr-cleaner in Cloud Run using us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner. The Cloud Scheduler job is passing the following payload:
When executed, I see the following info/debug logs:
I have three versions in us-central1-docker.pkg.dev/my-project/my-repo/my-image. One is tagged latest and the other two are untagged. Even with keep: 1 in the payload and the logs showing 200 OK, none of the images are being pruned. Any help would be appreciated.
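For what it's worth, applying the same keep logic discussed earlier in this thread to this scenario (a hedged sketch; the digests here are hypothetical, and newest-first processing of candidates is assumed) suggests one of the two untagged versions should be pruned:

```python
keep = 1  # from the keep: 1 payload setting

# Hypothetical manifests for the three versions described above,
# assumed to be processed newest-first.
manifests = [
    {"digest": "sha256:new-untagged...", "tags": []},
    {"digest": "sha256:old-untagged...", "tags": []},
    {"digest": "sha256:tagged...",       "tags": ["latest"]},
]

kept = 0
deletable = []
for m in manifests:
    if m["tags"]:        # tagged -> not a deletion candidate by default
        continue
    if kept < keep:      # retain the newest untagged candidate
        kept += 1
        continue
    deletable.append(m["digest"])

print(deletable)  # → ['sha256:old-untagged...']
```

If the logs show 200 OK but neither untagged image is removed, that points at something other than the keep math, e.g. permissions or the payload the service actually received.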