openshift / oc-mirror

Lifecycle manager for internet-disconnected OpenShift environments

Unable to mirror images using pull secret with path-based keys #890

Open tpapaioa opened 3 months ago

tpapaioa commented 3 months ago

Version

# oc-mirror version --output=yaml
clientVersion:
  buildDate: "2024-06-20T10:52:05Z"
  compiler: gc
  gitCommit: 7c0889f4bd343ccaaba5f33b7b861db29b1e5e49
  gitTreeState: clean
  gitVersion: 4.16.0-202406200537.p0.g7c0889f.assembly.stream.el9-7c0889f
  goVersion: go1.21.9 (Red Hat 1.21.9-1.module+el8.10.0+21671+b35c3b78) X:strictfipsruntime
  major: ""
  minor: ""
  platform: linux/amd64

What happened?

oc-mirror doesn't use path-based credentials in config.json as expected.
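
For reference, "path-based" means the auths key carries a repository path rather than just a registry host. A minimal sketch of the config.json layout in question (credential values elided):

{
  "auths": {
    "quay.io": {"auth": "[SNIP]"},
    "quay.io/my_org/my_image": {"auth": "[SNIP]"}
  }
}

Tools built on containers/image (podman, skopeo) match such keys most-specific-first, as documented in containers-auth.json(5).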

We are using https://github.com/openshift/appliance to create an appliance installation image, with a pull secret specified for a private image on quay.io:

$ cat osi/appliance_assets/appliance-config.yaml
apiVersion: v1beta1
kind: ApplianceConfig
ocpRelease:
  version: "4.16.1"
  channel: stable
  cpuArchitecture: x86_64
pullSecret: '{"auths":{"quay.io/my_org/my_image":{"auth":"[SNIP]","email":"[SNIP]"},"quay.io":{"auth":"[SNIP]","email":"[SNIP]"}}}'
sshKey: '[SNIP]'
userCorePass: user-core-pass
stopLocalRegistry: false
additionalImages:
  - name: quay.io/my_org/my_image:latest

With the same pullSecret value in ~/.docker/config.json, I can pull quay.io/my_org/my_image:latest with podman. But oc-mirror fails when it tries to mirror the image during the installation image build:

$ export APPLIANCE_IMAGE="quay.io/edge-infrastructure/openshift-appliance"
$ export APPLIANCE_ASSETS="$HOME/osi/appliance_assets"

$ sudo podman run --rm -it --pull newer --privileged --net=host -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE build --log-level debug
DEBUG Fetching Env Config...                       
DEBUG Loading Env Config...                        
[...]
INFO Successfully pulled OpenShift 4.16.1 release images required for bootstrap 
| Pulling OpenShift 4.16.1 release images required for installation...DEBUG Stopping registry container                  
DEBUG Running cmd: podman rm registry -f           
DEBUG Running cmd: podman run --net=host --privileged -d --name registry -v /assets/temp/data/oc-mirror/install:/var/lib/registry --restart=always -e REGISTRY_HTTP_ADDR=0.0.0.0:5005 docker.io/library/registry:2 
DEBUG image registry availability check attempts 1/3 
- Pulling OpenShift 4.16.1 release images required for installation...DEBUG image registry availability check attempts 2/3 
DEBUG Rendering scripts/mirror/imageset.yaml.template 
DEBUG Fetching image from OCP release (oc mirror --config=/assets/temp/scripts/mirror/imageset.yaml docker://127.0.0.1:5005 --dir assets/temp/oc-mirror1261672872 --dest-use-http) 
DEBUG Using pull secret from: /root/.docker/config.json 
DEBUG Running cmd: oc mirror --config=/assets/temp/scripts/mirror/imageset.yaml docker://127.0.0.1:5005 --dir assets/temp/oc-mirror1261672872 --dest-use-http 
\ Pulling OpenShift 4.16.1 release images required for installation...DEBUG mirroring result:                            
ERROR Failed to pull OpenShift 4.16.1 release images required for installation 
FATAL failed to fetch Appliance disk image: failed to fetch dependency of "Appliance disk image": failed to generate asset "Data ISO": Failed to execute cmd (oc mirror --config=/assets/temp/scripts/mirror/imageset.yaml docker://127.0.0.1:5005 --dir assets/temp/oc-mirror1261672872 --dest-use-http): Failed to execute cmd (/usr/local/bin/oc mirror --config=/assets/temp/scripts/mirror/imageset.yaml docker://127.0.0.1:5005 --dir assets/temp/oc-mirror1261672872 --dest-use-http): error: pulling from host quay.io failed with status code [manifests latest]: 401 UNAUTHORIZED 
FATAL : exit status 1: Failed to execute cmd (/usr/local/bin/oc mirror --config=/assets/temp/scripts/mirror/imageset.yaml docker://127.0.0.1:5005 --dir assets/temp/oc-mirror1261672872 --dest-use-http): error: pulling from host quay.io failed with status code [manifests latest]: 401 UNAUTHORIZED 
FATAL : exit status 1                              

From the container's oc-mirror log:

# tail -f .oc-mirror.log 
[...]
info: Planning completed in 4.51s
uploading: 127.0.0.1:5005/openshift/release sha256:87716e5cbd9577c7b126ab7ee700e642fc16af63cb964522f62746723addb3ab 24.03KiB
[...]
info: Mirroring completed in 3m42.75s (14.6MB/s)
Writing image mapping to assets/temp/oc-mirror2319087030/results-1721062290/mapping.txt
Writing ICSP manifests to assets/temp/oc-mirror2319087030/results-1721062290
Checking push permissions for 127.0.0.1:5005
Creating directory: assets/temp/oc-mirror2713840876/src/publish
Creating directory: assets/temp/oc-mirror2713840876/src/v2
Creating directory: assets/temp/oc-mirror2713840876/src/charts
Creating directory: assets/temp/oc-mirror2713840876/src/release-signatures
backend is not configured in /assets/temp/scripts/mirror/imageset.yaml, using stateless mode
backend is not configured in /assets/temp/scripts/mirror/imageset.yaml, using stateless mode
No metadata detected, creating new workspace
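
As a sanity check outside the appliance builder (hypothetical commands, not part of the original report), tools built on containers/image should resolve the path-scoped key from the same config.json:

$ podman pull --authfile ~/.docker/config.json quay.io/my_org/my_image:latest
$ skopeo inspect --authfile ~/.docker/config.json docker://quay.io/my_org/my_image:latest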

What did you expect to happen?

oc-mirror (and, through it, the appliance builder) should use the credential whose auths key most specifically matches the image being mirrored: here, the "quay.io/my_org/my_image" entry for quay.io/my_org/my_image:latest, rather than falling back to the registry-wide "quay.io" entry.

How to reproduce it (as minimally and precisely as possible)?

I have not tried to construct a minimal reproducer without the appliance builder, but the steps above reproduce the failure. In the example config, quay.io/my_org/my_image:latest is a private image, and its pull secret appears under the matching path-based key "quay.io/my_org/my_image". A sketch of what a standalone reproducer might look like follows.
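
Presumably something like this would suffice (untested sketch: the ImageSetConfiguration matches what the appliance renders into imageset.yaml, the oc-mirror invocation is copied from the debug log above, and a local registry such as docker.io/library/registry:2 is assumed to be listening on 127.0.0.1:5005):

$ cat imageset.yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  additionalImages:
    - name: quay.io/my_org/my_image:latest

$ oc mirror --config=imageset.yaml docker://127.0.0.1:5005 --dest-use-http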

Anything else we need to know?

openshift-bot commented 3 weeks ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale