kubevirt / containerized-data-importer

Data Import Service for kubernetes, designed with kubevirt in mind.
Apache License 2.0

x509 Error from CDI data importer pod #1274

Closed · jishminor closed this issue 3 years ago

jishminor commented 4 years ago

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: When specifying a public URL to pull a disk image, a certificate error occurs, causing the importer pod to restart repeatedly.

What you expected to happen: Pulling a remote disk image should not fail with server certificate verification errors.

How to reproduce it (as minimally and precisely as possible): I am using k3s to test KubeVirt. It was installed using the following instructions. On your server node, run:

sudo apt update && sudo apt install libvirt-clients docker.io
export K3S_VERSION="v1.18.3+k3s1"
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=$K3S_VERSION sh -s - --docker --write-kubeconfig-mode 664
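A quick sanity check at this point (k3s bundles its own kubectl, so this works on the server node before any kubeconfig is copied anywhere):

sudo k3s kubectl get nodes
sudo k3s kubectl get pods -A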

Copy over your kubeconfig from the server node to your dev machine:

scp <user>@<server_ip>:/etc/rancher/k3s/k3s.yaml ~/kubevirt.yaml
export KUBECONFIG=~/kubevirt.yaml

# Replace 127.0.0.1 in the kubeconfig with the public IP of the server
vim $KUBECONFIG
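If you would rather script that edit, a sed one-liner works too; note that BSD sed on macOS needs the empty argument after -i, and <server_ip> is a placeholder for your server's public IP:

# macOS/BSD sed; on Linux drop the '' after -i
sed -i '' "s/127.0.0.1/<server_ip>/" $KUBECONFIG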

Now, on your Mac dev machine, run:

brew install krew
kubectl krew install virt
kubectl create ns kubevirt
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
  namespace: kubevirt
  labels:
    kubevirt.io: ""
data:
  feature-gates: "DataVolumes"
EOF
export VERSION=v0.30.2
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt-cr.yaml

# Run this command repeatedly until the condition is met
kubectl -n kubevirt wait kv kubevirt --for condition=Available
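kubectl wait also accepts a --timeout flag, so a single invocation can block until the condition is met instead of re-running the command by hand:

kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m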

export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
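To confirm CDI itself is ready before creating any DataVolumes, you can wait on the CDI CR in the same way (this assumes the default CR name cdi from cdi-cr.yaml and that it exposes an Available condition like the KubeVirt CR does):

kubectl wait cdi cdi --for condition=Available --timeout=10m
kubectl -n cdi get pods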

Anything else we need to know?: K3s installs its own local-path storage provisioner by default, and Flannel is installed as the default CNI. There seems to be something odd about the pod networking when the cluster is created via k3s, such that the importer pod does not have the correct certificate information. If you run the container locally via docker run --rm -it --entrypoint="" kubevirt/cdi-importer:v1.19.0 bash and wget an image such as https://cloud-images.ubuntu.com/bionic/20200629/bionic-server-cloudimg-amd64.img, it pulls the data down fine.
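For reference, the local check described above as runnable commands (image tag and URL taken from this report):

docker run --rm -it --entrypoint="" kubevirt/cdi-importer:v1.19.0 bash
# then, inside the container:
wget https://cloud-images.ubuntu.com/bionic/20200629/bionic-server-cloudimg-amd64.img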

Environment:

awels commented 4 years ago

You can specify a custom CA using a certConfigMap. An example of that is available here: I don't know enough about k3s to give you an intelligent answer about what in its networking makes it think it requires a different CA. Alternatively, if you are just testing, you can use http instead of https. If you post the importer pod log I might have a better idea.
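For illustration only (a minimal sketch, not the linked example), wiring a custom CA into the HTTP source could look like the YAML below. certConfigMap is the relevant field on the http source; the ConfigMap name, the key name, and the exact apiVersion here are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: import-ca            # hypothetical name; must live in the same namespace as the DataVolume
data:
  ca.pem: |                  # key name is arbitrary; the value is the CA certificate to trust
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
---
apiVersion: cdi.kubevirt.io/v1alpha1   # may be v1beta1 on newer CDI releases
kind: DataVolume
metadata:
  name: ubuntu-dv
spec:
  source:
    http:
      url: https://cloud-images.ubuntu.com/bionic/20200629/bionic-server-cloudimg-amd64.img
      certConfigMap: import-ca         # importer should trust this CA when fetching the image
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi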

jishminor commented 4 years ago

For some more context, here is my KubeVirt VM config:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: ubuntu
spec:
  running: false
  template:
    metadata:
      labels: 
        kubevirt.io/size: small
        kubevirt.io/domain: ubuntu
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - name: datavolumedisk1
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            bridge: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: default
        pod: {}
      volumes:
        - dataVolume:
            name: ubuntu-dv
          name: datavolumedisk1
        - name: cloudinitdisk
          cloudInitNoCloud:
            secretRef:
              name: smarter-cloud-init-secret
  dataVolumeTemplates:
  - metadata:
      name: ubuntu-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
      source:
        http:
          url: https://cloud-images.ubuntu.com/bionic/20200629/bionic-server-cloudimg-amd64.img
        # registry:
        #   url: docker://tedezed/debian-container-disk:9.0

And here are the logs of the importer pod:

I0630 19:46:53.715610       1 importer.go:51] Starting importer
I0630 19:46:53.715684       1 importer.go:112] begin import process
E0630 19:46:53.774464       1 importer.go:118] Get https://cloud-images.ubuntu.com/bionic/20200629/bionic-server-cloudimg-amd64.img: x509: certificate has expired or is not yet valid
HTTP request errored
kubevirt.io/containerized-data-importer/pkg/importer.createHTTPReader
    pkg/importer/http-datasource.go:268
kubevirt.io/containerized-data-importer/pkg/importer.NewHTTPDataSource
    pkg/importer/http-datasource.go:82
main.main
    cmd/cdi-importer/importer.go:116
runtime.main
    GOROOT/src/runtime/proc.go:203
runtime.goexit
    src/runtime/asm_amd64.s:1357

maya-r commented 4 years ago

certificate has expired or is not yet valid

This error makes me think that whatever is doing the fetching has an unsynchronized clock. I don't know how k3s works on a Mac, but I assume it runs as a VM and has its own clock.

Does fetching on the host of the cluster work without errors?

ssh <user>@<server_ip>  # I assume this is how to connect to it
curl -o /dev/null https://cloud-images.ubuntu.com/bionic/20200629/bionic-server-cloudimg-amd64.img
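A couple of quick checks for that hypothesis, on whichever node runs the importer pod (standard Linux and openssl tooling, not taken from this thread):

# Is the node's clock sane / NTP-synchronized?
date -u
timedatectl status

# What validity window does the served certificate actually report?
echo | openssl s_client -connect cloud-images.ubuntu.com:443 -servername cloud-images.ubuntu.com 2>/dev/null | openssl x509 -noout -dates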

kubevirt-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot commented 3 years ago

@kubevirt-bot: Closing this issue.

In response to [this](https://github.com/kubevirt/containerized-data-importer/issues/1274#issuecomment-738741260):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.