kubevirt / containerized-data-importer

Data Import Service for kubernetes, designed with kubevirt in mind.
Apache License 2.0

Missing clear explanation on how to pull an image from S3 #3240

Open

alexnastas commented 4 months ago

Description: Trying to set up MinIO S3 storage for qcow2 images, but when creating the DV I get: net/http: invalid header field value for "Authorization"

What I did:

I couldn't find a place in the doc which explains how to set up a DV with S3 backend.

What you expected: The DV should have imported the image since the secret with accessKeyId and secretKey is there

URL: https://kubevirt.io/user-guide/operations/containerized_data_importer/

Additional context: Secret:

apiVersion: v1
kind: Secret
metadata:
  name: s3-image-auth
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: <my_access_key_base64-encoded>
  secretKey: <my_secret_key_base64-encoded>
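Before creating a secret like this, it can be worth sanity-checking that each value decodes to exactly the raw credential, since a stray trailing newline will end up inside the Authorization header that CDI builds from it. A quick check, using placeholder values (here "blahblah" encoded two different ways) rather than real credentials:

```shell
# Decode each stored value and count the bytes. The placeholder credential
# "blahblah" is 8 bytes, so a 9th byte reveals a stray trailing newline
# (typically from encoding with plain `echo` instead of `echo -n`).
printf '%s' 'YmxhaGJsYWgK' | base64 -d | wc -c   # 9 bytes: encoded with a trailing \n
printf '%s' 'YmxhaGJsYWg=' | base64 -d | wc -c   # 8 bytes: encoded correctly
```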

DV:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: mydv
spec:
  source:
    s3:
      url: "http://192.168.137.40:9000/f7b5e03d-2321-4927-834a-62329e5a2d1b/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2"
      secretRef: "s3-image-auth"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 11Gi

Error:

k get dv -o yaml
apiVersion: v1
items:
- apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
....
  status:
    claimName: mydv
    conditions:
    - lastHeartbeatTime: "2024-05-03T08:59:09Z"
      lastTransitionTime: "2024-05-03T08:59:09Z"
      message: PVC mydv Bound
      reason: Bound
      status: "True"
      type: Bound
    - lastHeartbeatTime: "2024-05-03T09:02:41Z"
      lastTransitionTime: "2024-05-03T08:59:09Z"
      status: "False"
      type: Ready
    - lastHeartbeatTime: "2024-05-03T09:02:41Z"
      lastTransitionTime: "2024-05-03T09:02:41Z"
      message: 'Unable to connect to s3 data source: could not get s3 object: "f7b5e03d-2321-4927-834a-62329e5a2d1b/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2":
        RequestError: send request failed caused by: Get "http://192.168.137.40:9000/f7b5e03d-2321-4927-834a-62329e5a2d1b/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2":
        net/http: invalid header field value for "Authorization"'
      reason: Error
      status: "False"
      type: Running
    phase: ImportInProgress
    progress: N/A
    restartCount: 5
kind: List
metadata:
  resourceVersion: ""
awels commented 4 months ago

You are doing it the correct way AFAICT. The one thing I would double-check is that you didn't accidentally base64-encode a \n in the access key and secret key of the secret.

alexnastas commented 4 months ago

Sorry it took a while to come back. No, I didn't encode a '\n'. The way I encoded the strings was echo "blahblah" | base64, so I don't think it's that.

awels commented 4 months ago

That will encode the \n produced by the echo. Try echo -n "blahblah" | base64 instead.
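The difference is easy to see: plain echo appends a newline, which gets encoded along with the credential. Using the same placeholder value as above:

```shell
# Plain echo appends a trailing newline, so the newline becomes part of
# the encoded secret value:
echo "blahblah" | base64      # YmxhaGJsYWgK  (decodes to "blahblah\n")

# echo -n (or printf '%s') encodes only the credential itself:
echo -n "blahblah" | base64   # YmxhaGJsYWg=  (decodes to "blahblah")
```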

alexnastas commented 4 months ago

Hmm... ok, you were absolutely right. This moved me a step further, but again I couldn't find anything documented about the next error:

datavolume-import-controller (combined from similar events): Unable to connect to s3 data source: could not get s3 object: "f7b5e03d-2321-4927-834a-62329e5a2d1b/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2": AuthorizationHeaderMalformed: The authorization header is malformed; the region is wrong; expecting 'eu-west-1'. status code: 400, request id: 17CE30212E3DA6AA, host id: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8

I added region: <eu-west-1 base64-encoded> to the same secret along with the encoded access key and secret key, but this didn't solve anything. Not sure if that was the right place to specify it.

awels commented 4 months ago

I just checked the code, and it doesn't look like it supports any S3 buckets besides actual AWS ones. It should not be terribly hard to make it work; we just never got around to it.

kubevirt-bot commented 1 month ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 1 week ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten