Closed: ovesh closed this issue 11 months ago.
Indeed, all of the recent trouble with docker pulls and builds is avoided when using 432.
Problems like this one:
docker pull <path/to>/chouette-core:latest
latest: Pulling from <path/to>/chouette-core
9e3ea8720c6d: Already exists
201ffa301017: Already exists
1aaf5502212f: Already exists
0a95bd2da24b: Already exists
4ca2c5456b8d: Already exists
6ffb69b0b02f: Already exists
d8bb3447059b: Already exists
f20b2491ef15: Already exists
f55eb902160d: Already exists
58fc5b4f635d: Already exists
c016d7975c80: Pulling fs layer
9c568ce5ad2b: Pulling fs layer
19902cf9313a: Pulling fs layer
c766a019264d: Waiting
14af2e7ce0f2: Waiting
01d3d448f92c: Waiting
8c04a3d892b6: Waiting
e3da405b40fb: Waiting
error pulling image configuration: download failed after attempts=1: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).</Details></Error>"
All our builds are pinned to google/cloud-sdk:432.0.0-slim for the moment.
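For reference, a minimal sketch of that workaround, assuming a CI job that runs the Cloud SDK image with the host Docker socket mounted (as in the reproduction further down); the only change is pulling the pinned tag instead of :latest:

# Pull the pinned tag explicitly so builds keep using the 432 toolchain.
docker pull google/cloud-sdk:432.0.0-slim

# Illustrative invocation; the socket mount and command follow the setup
# shown later in this thread, adjust to your own CI configuration.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  google/cloud-sdk:432.0.0-slim bash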
Is it still an issue with the latest image? What does your setup look like? I can't reproduce the issue locally with the following setup; pushing and pulling both seem to work.
user@host:~$ sudo docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock eu.gcr.io/google.com/cloudsdktool/google-cloud-cli:latest bash
root@container:/# docker version
Client:
Version: 24.0.2
API version: 1.39 (downgraded from 1.43)
Go version: go1.20.4
Git commit: cb74dfc
Built: Thu May 25 21:50:49 2023
OS/Arch: linux/amd64
Context: default
Server:
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.11.6
Git commit: 4c52b90
Built: Sun Feb 21 17:18:35 2021
OS/Arch: linux/amd64
Experimental: false
root@container:/# gcloud version
Google Cloud SDK 433.0.1
alpha 2023.06.01
app-engine-go 1.9.75
app-engine-java 2.0.14
app-engine-python 1.9.104
app-engine-python-extras 1.9.100
beta 2023.06.01
bigtable
bq 2.0.93
bundled-python3-unix 3.9.16
cbt 0.15.0
cloud-datastore-emulator 2.3.0
cloud-firestore-emulator 1.17.4
cloud-spanner-emulator 1.5.4
core 2023.06.01
gcloud-crc32c 1.0.0
gke-gcloud-auth-plugin 0.5.3
gsutil 5.24
kpt 1.0.0-beta.34
local-extract 1.5.8
pubsub-emulator 0.8.2
root@container:/# gcloud auth configure-docker gcr.io,eu.gcr.io
Adding credentials for: gcr.io,eu.gcr.io
After update, the following will be written to your Docker config file located at [/root/.docker/config.json]:
{
"credHelpers": {
"gcr.io": "gcloud",
"eu.gcr.io": "gcloud"
}
}
Do you want to continue (Y/n)? y
Docker configuration file updated.
root@container:/# docker pull eu.gcr.io/project_name/debian:10
10: Pulling from project_name/debian
c722db24a050: Pull complete
Digest: sha256:a067a9e8b39d5f19659b3bc9fd4348f6319afabd0d6ba1fe3b43df108926ea92
Status: Downloaded newer image for eu.gcr.io/project_name/debian:10
eu.gcr.io/project_name/debian:10
@ovesh Is it still possible for you to reproduce the Docker authentication issue with the latest google-cloud-sdk/google-cloud-cli Docker image? And does running gcloud auth configure-docker gcr.io change anything about this issue?
Sorry, I have been busy with other things. I'll try to get around to it this week.
Closing for inactivity.
We discovered this because our build wasn't pinning google/cloud-sdk. I suspect it's because of the new version of docker being pulled in, but what I observed is that docker ignores "credHelpers": { "gcr.io": "gcloud" } in ~/.docker/config.json. The executable /usr/bin/docker-credential-gcloud is there, but strace indicates that docker isn't trying to open it. Pinning to 432 fixes the issue.
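For what it's worth, a quick way to tell "the helper is broken" apart from "docker never calls the helper" is to exercise the credential helper protocol by hand. This is a hedged sketch run inside the affected image, assuming /usr/bin/docker-credential-gcloud is on PATH, gcloud is already authenticated, and the registry path is a placeholder for your own project:

# Docker's credential helper protocol: registry host on stdin,
# JSON {"Username": ..., "Secret": ...} on stdout if the helper works.
echo "gcr.io" | docker-credential-gcloud get

# Watch whether the docker CLI ever execs the helper during a pull
# (this mirrors the strace observation above; the image name is illustrative).
strace -f -e trace=execve docker pull gcr.io/<project>/<image>:latest 2>&1 | grep docker-credential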