kubevirt / containerized-data-importer

Data Import Service for kubernetes, designed with kubevirt in mind.
Apache License 2.0

DataSource never becomes ready #2688

Closed: kvaps closed this issue 10 months ago

kvaps commented 1 year ago

What happened:

I just followed the documented os-image-poll-and-update.md:

# kubectl create ns golden-images
namespace/golden-images created
# kubectl create -f 1.yaml
dataimportcron.cdi.kubevirt.io/fedora-image-import-cron created
# kubectl create -f 2.yaml
Error from server: error when creating "2.yaml": admission webhook "datavolume-validate.cdi.kubevirt.io" denied the request:  Empty source field in 'fedora'. DataSource may not be ready yet
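
2.yaml itself is not shown in the report; per os-image-poll-and-update.md it would be a DataVolume that consumes the managed DataSource through sourceRef, roughly like the hypothetical reconstruction below (the name fedora-vm-disk is illustrative; the DataSource name and namespace match the manifests further down):

# hypothetical reconstruction of 2.yaml: a DataVolume referencing the
# DataSource managed by the DataImportCron
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-vm-disk
  namespace: golden-images
spec:
  sourceRef:
    kind: DataSource
    name: fedora
    namespace: golden-images
  storage:
    resources:
      requests:
        storage: 5Gi

The "Empty source field" rejection comes from the validating webhook because the referenced DataSource has not been populated by the DataImportCron yet.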

What you expected to happen:

DataVolume successfully created

How to reproduce it (as minimally and precisely as possible):

Additional context:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  creationTimestamp: "2023-04-12T09:38:04Z"
  generation: 2
  name: fedora-image-import-cron
  namespace: golden-images
  resourceVersion: "71812828"
  uid: fd83ea4c-7ef3-46b8-b1ae-accbfd0949d7
spec:
  garbageCollect: Outdated
  importsToKeep: 2
  managedDataSource: fedora
  schedule: 30 1 * * 1
  template:
    metadata: {}
    spec:
      source:
        registry:
          certConfigMap: some-certs
          pullMethod: node
          url: docker://quay.io/kubevirt/fedora-cloud-registry-disk-demo:latest
      storage:
        resources:
          requests:
            storage: 5Gi
        storageClassName: hostpath-provisioner
    status: {}
status:
  conditions:
  - lastHeartbeatTime: "2023-04-12T09:38:04Z"
    lastTransitionTime: "2023-04-12T09:38:04Z"
    message: No current import
    reason: NoImport
    status: "False"
    type: Progressing
  - lastHeartbeatTime: "2023-04-12T09:38:04Z"
    lastTransitionTime: "2023-04-12T09:38:04Z"
    message: No source digest
    reason: NoDigest
    status: "False"
    type: UpToDate
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  creationTimestamp: "2023-04-12T09:38:04Z"
  generation: 2
  labels:
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    cdi.kubevirt.io/dataImportCron: fedora-image-import-cron
  name: fedora
  namespace: golden-images
  resourceVersion: "71812826"
  uid: e91477f9-a3a2-4e86-82ba-ab74b96f3db4
spec:
  source: {}
status:
  conditions:
  - lastHeartbeatTime: "2023-04-12T09:38:04Z"
    lastTransitionTime: "2023-04-12T09:38:04Z"
    message: No source PVC set
    reason: NoSource
    status: "False"
    type: Ready
  source: {}

logs:

{"level":"debug","ts":1681293138.8604512,"logger":"controller.dataimportcron-controller","msg":"Checking configmap for host","configMapName":"cdi-insecure-registries","host URL":"quay.io"}
{"level":"info","ts":1681293138.8939974,"logger":"controller.dataimportcron-controller.updateDataSource","msg":"DataSource created","name":"fedora","uid":"0d589964-23ad-46f1-b021-fe266847fa05"}
{"level":"debug","ts":1681293138.9096074,"logger":"controller.dataimportcron-controller","msg":"Checking configmap for host","configMapName":"cdi-insecure-registries","host URL":"quay.io"}
{"level":"info","ts":1681293138.9157608,"logger":"controller.dataimportcron-controller","msg":"Updating CronJob","name":"fedora-image-import-cron-913ce39b"}

Environment:

kvaps commented 1 year ago

Ahh, it creates the CronJob in the CDI namespace; sorry, I was confused by the missing events on the DataImportCron object. I think we should add an event indicating that the import has started.
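
(The initial poll is driven by a CronJob named after the DataImportCron, e.g. fedora-image-import-cron-913ce39b in the logs above, and it lives in the namespace where CDI itself runs rather than in golden-images. Assuming the default cdi namespace, something like the following shows it:)

# kubectl -n cdi get cronjobs,jobs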

kvaps commented 1 year ago

And it seems that DataImportCron does not support pullMethod: node.

akalenyu commented 1 year ago

> And it seems that DataImportCron does not support pullMethod: node.

It should be supported; not sure what is going wrong.

> certConfigMap: some-certs

Do you really need extra certs for this public quay.io image? Did you also create this ConfigMap?

kvaps commented 1 year ago

> It should be supported; not sure what is going wrong.

Yeah, it should work, but it requires me to create a secret in both namespaces: where CDI is running (for the CronJob) and where the DataVolume gets created (for the importer job).

Previously I didn't use any secrets for DataVolumes, because pullMethod: node allows using the pull secret directly from the CRI.

> certConfigMap: some-certs
> Do you really need extra certs for this public quay.io image? Did you also create this ConfigMap?

Not really, but I need a registry source with secretRef, which works in a similar way.
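
(For reference, the setup being described would swap certConfigMap for secretRef in the DataImportCron template; a minimal sketch of the source block, where my-registry-creds and the registry URL are illustrative placeholders:)

      source:
        registry:
          secretRef: my-registry-creds
          url: docker://registry.example.com/my-project/fedora-cloud:latest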

awels commented 1 year ago

So the point of pullMethod: node is that you use the node to retrieve the image from the registry, so you don't have to provide a pull secret to the pod. If your node can access the registry, then just setting pullMethod: node without providing a pull secret should work.
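
(In other words, for a registry the node itself can pull from, the template source should reduce to just the pull method and the URL, with no certConfigMap or secretRef; a minimal sketch based on the manifest above:)

      source:
        registry:
          pullMethod: node
          url: docker://quay.io/kubevirt/fedora-cloud-registry-disk-demo:latest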

kvaps commented 1 year ago

Unfortunately this is not working, since the CronJob in the cdi namespace also requires this secret.

If I don't use pullMethod: node, then I have to create two secrets: one in the cdi namespace and a second in the target namespace.
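
(Concretely, with the current behavior that means creating the same credentials twice; a sketch, where my-registry-creds is an illustrative name and accessKeyId/secretKey follow the key convention CDI uses for endpoint secrets:)

# kubectl -n cdi create secret generic my-registry-creds \
    --from-literal=accessKeyId=<registry-user> --from-literal=secretKey=<registry-password>
# kubectl -n golden-images create secret generic my-registry-creds \
    --from-literal=accessKeyId=<registry-user> --from-literal=secretKey=<registry-password>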

aglitke commented 1 year ago

There are two elements to this:

  1. If scheduling is disabled then an initial import should never be attempted
  2. Secrets should only need to be provided once (i.e., in the cdi namespace). CDI should manage any secrets that are required for importing by copying them to the relevant namespace.

kubevirt-bot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

alromeros commented 1 year ago

/remove-lifecycle stale

kubevirt-bot commented 12 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 11 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot commented 10 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot commented 10 months ago

@kubevirt-bot: Closing this issue.

In response to [this](https://github.com/kubevirt/containerized-data-importer/issues/2688#issuecomment-1871130975):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.