Closed: clyso closed this issue 4 years ago.
We use the minio client to connect to the S3 buckets; we haven't specifically tried Ceph S3 buckets, so I don't know if it works. According to your report it doesn't, so I would call it a bug.
@awels according to this closed issue #389 it was planned to be supported, but it looks like it was never implemented, right?
We have an S3 data source (https://github.com/kubevirt/containerized-data-importer/blob/master/pkg/importer/s3-datasource.go), and you should set the s3 source instead of the http source to use it. I don't have a ready-made example, but if you look at https://github.com/kubevirt/containerized-data-importer/blob/master/doc/datavolumes.md#https3registry-source and replace `source: http:` with `source: s3:`, it should use the S3 data source.
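A minimal sketch of what that swap could look like, based on the datavolumes.md example linked above (the DataVolume name, endpoint URL, secret name, and PVC sizing here are placeholders, not values from this issue):

```yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: example-s3-import        # placeholder name
spec:
  source:
    s3:                          # s3 instead of http
      url: "https://s3.example.com/bucket/disk.img"  # placeholder endpoint
      secretRef: "example-s3-credentials"            # placeholder secret with accessKeyId/secretKey
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```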
@awels thanks for pointing me to the s3 datasource code. So it should actually work. If you look at the YAML I used to create the import, I have in fact been using the s3 source:
```yaml
spec:
  source:
    s3:
      url: "https://rgw.XXX.XXX/images/gardenlinux.raw"
```
But the Pod that was created is using http as the source instead of s3:
```
Environment:
  IMPORTER_SOURCE:    http
  IMPORTER_ENDPOINT:  https://rgw.XXX.XXX/images/gardenlinux.raw
```
So this is indeed a bug as you said in the first place 👍
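For anyone hitting the same thing, one way to check which source the CDI controller actually wired up is to inspect the importer pod's environment (a sketch; the pod name is hypothetical, use the importer pod created for your DataVolume):

```
kubectl get pod importer-gardenlinux-s3 -o yaml | grep -A1 IMPORTER_SOURCE
```

If `IMPORTER_SOURCE` comes back as `http` despite an `s3` source in the DataVolume spec, you are seeing this bug.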
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.

/lifecycle stale
@clyso: Closing this issue.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Are other S3 endpoints besides AWS already supported, e.g. Ceph S3? I tried to import from S3, but the importer pod is stuck in a crash loop:
```
Containers:
  importer:
    Container ID:  docker://1d70117b499af141e16eb4e2b53ae41c05a37f2e70dba1c8a672434ec124ec0d
    Image:         kubevirt/cdi-importer:v1.16.0
    Image ID:      docker-pullable://kubevirt/cdi-importer@sha256:8fb298b8c81e1bdb3b48abebf28666ad0c206604871d5d6a3e20cb5755500af9
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      -v=1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Message:      Unable to connect to http data source: expected status code 200, got 400. Status: 400 Bad Request
      Exit Code:    1
      Started:      Tue, 05 May 2020 15:39:00 +0200
      Finished:     Tue, 05 May 2020 15:39:00 +0200
    Ready:          False
    Restart Count:  7
    Environment:
      IMPORTER_SOURCE:         http
      IMPORTER_ENDPOINT:       https://rgw.XXX.XXX/images/gardenlinux.raw
      IMPORTER_CONTENTTYPE:    kubevirt
      IMPORTER_IMAGE_SIZE:     10Gi
      OWNER_UID:               3ff37be2-4370-4895-bae9-3aa463bbb239
      INSECURE_TLS:            false
      IMPORTER_DISK_ID:
      IMPORTER_ACCESS_KEY_ID:  <set to the key 'accessKeyId' in secret 's3-rot'>  Optional: false
      IMPORTER_SECRET_KEY:     <set to the key 'secretKey' in secret 's3-rot'>    Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4mbwf (ro)
    Devices:
      /dev/cdi-block-volume from cdi-data-vol
```
What you expected to happen:
The volume gets created from the S3 source.
How to reproduce it (as minimally and precisely as possible):
kubectl create:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-rot
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "XXXX"
  secretKey: "XXXX"
```

kubectl create:

```yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: gardenlinux-s3
spec:
  source:
    s3:
      url: "https://rgw.XXX.XXX/images/gardenlinux.raw"
      secretRef: "s3-rot"
  pvc:
    storageClassName: csi-rbd-sc
    volumeMode: Block
    accessModes:
```
Anything else we need to know?:
The Secret was created with `accessKeyId` & `secretKey` base64-encoded.
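For reference, one way to produce those base64 values for the Secret's `data` fields (a sketch; `XXXX` stands in for the real credentials, as in the manifest above):

```shell
# Encode a credential for use in a Secret's data field.
# ('XXXX' is a placeholder, not a real key.)
echo -n 'XXXX' | base64
# prints WFhYWA==

# Decode to verify the round trip:
echo -n 'XXXX' | base64 | base64 -d
# prints XXXX
```

Alternatively, `kubectl create secret generic` with `--from-literal=` flags performs the base64 encoding for you.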
Environment:

- CDI version (use `kubectl get deployments cdi-deployment -o yaml`): v1.16.0
- Kubernetes version (use `kubectl version`): v1.18.2