kubevirt / macvtap-cni

A CNI + device plugin combo for virtualization workloads on Kubernetes.
Apache License 2.0

`name` property of `DP_MACVTAP_CONF` can't exceed 10 characters #120

Open tstirmllnl opened 5 months ago

tstirmllnl commented 5 months ago

What happened: The `name` property of `DP_MACVTAP_CONF` appears to have a limit of 10 characters. I'm not sure whether this is due to the device plugin itself or to the annotation that has to be set on the `NetworkAttachmentDefinition`.
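For illustration only, the observed boundary can be sketched as a name-length check. The interface-name scheme and the 5-character suffix budget below are assumptions chosen to match the symptom, not taken from the macvtap-cni source:

```python
# Hypothetical sketch: Linux interface names are limited to IFNAMSIZ - 1
# usable characters. IF the plugin derives tap-device names from the
# resource name plus a short suffix (an assumption), a ~10-character
# budget for the resource name would fall out naturally.
IFNAMSIZ = 16  # Linux interface-name buffer size, including trailing NUL

def fits_ifname(resource_name: str, suffix_len: int = 5) -> bool:
    """Would resource_name plus an assumed suffix still fit in an ifname?"""
    return len(resource_name) + suffix_len <= IFNAMSIZ - 1

print(fits_ifname("dataplanea"))   # 10 characters: works in the report
print(fits_ifname("dataplaneab"))  # 11 characters: fails in the report
```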

What you expected to happen: I didn't expect this character limit.

How to reproduce it (as minimally and precisely as possible):

1. Create the macvtap device plugin configuration. NOTE: If you set the `name` field to `dataplanea` and update the `NetworkAttachmentDefinition` annotation to `k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplanea`, it works.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: |
    [
      {
        "name": "dataplaneab",
        "lowerDevice": "isol",
        "mode": "bridge",
        "capacity": 50
      }
    ]
```
2. Deploy the macvtap DaemonSet using https://github.com/kubevirt/macvtap-cni/blob/main/manifests/macvtap.yaml
3. Deploy `NetworkAttachmentDefinition`
```yaml
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: isolated-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplaneab
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "isolated-net",
      "type": "macvtap",
      "ipam": {
              "type": "host-local",
              "subnet": "172.31.0.0/20",
              "rangeStart": "172.31.12.1",
              "rangeEnd": "172.31.15.254",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "172.31.0.1"
            }
    }'
```
4. Deploy the VM:
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-test
spec:
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
        - name: cloudinitdisk
          disk:
            bus: virtio
      interfaces:
        - name: isolated-network
          macvtap: {}
  networks:
    - name: isolated-network
      multus:
        networkName: isolated-net
  volumes:
    - name: containerdisk
      containerDisk:
        image: kubevirt/cirros-container-disk-demo:latest
    - name: cloudinitdisk
      cloudInitNoCloud:
        userData: |
          #!/bin/sh

          echo 'printed from cloud-init userdata'
```

`kubectl describe` prints:

Status:           Failed
Reason:           UnexpectedAdmissionError
Message:          Pod Allocate failed due to rpc error: code = Unknown desc = numerical result out of range, which is unexpected
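As an aside, "numerical result out of range" is the standard libc message for `ERANGE`. One hedged reading (an assumption, not confirmed by the plugin's source): something beneath the device plugin, such as the kernel rejecting an over-long macvtap interface name, returned `ERANGE`, and the errno text surfaced verbatim in the allocation RPC error:

```python
import errno
import os

# The libc error string for ERANGE matches the RPC message in the report.
# This only demonstrates where the wording comes from; the actual code
# path inside the device plugin is an assumption.
print(os.strerror(errno.ERANGE))
```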

Looking at the node where it's scheduled, it appears the macvtap resource wasn't allocated:

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                                  Requests    Limits
  --------                                  --------    ------
  cpu                                       702m (1%)   770m (1%)
  memory                                    815Mi (0%)  320Mi (0%)
  ephemeral-storage                         0 (0%)      0 (0%)
  hugepages-1Gi                             0 (0%)      0 (0%)
  hugepages-2Mi                             0 (0%)      0 (0%)
  devices.kubevirt.io/kvm                   0           0
  macvtap.network.kubevirt.io/dataplane     0           0
  macvtap.network.kubevirt.io/dataplanea    0           0
  macvtap.network.kubevirt.io/dataplaneab   0           0

NOTE: It also looks like stale `macvtap.network.kubevirt.io/` resources from previous runs are left behind. How does one remove these?
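Not macvtap-specific, but the generic Kubernetes workaround for clearing a stale extended resource is to patch the node's status directly through the API server. The node name and resource path below are placeholders; adjust them to the stale entry you want to drop:

```shell
# Open a proxy to the API server, then send a JSON patch that removes the
# stale extended resource from the node's capacity. The "~1" escapes "/"
# in the JSON Pointer path.
kubectl proxy &
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "remove", "path": "/status/capacity/macvtap.network.kubevirt.io~1dataplane"}]' \
  http://localhost:8001/api/v1/nodes/<node-name>/status
```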


kubevirt-bot commented 2 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

maiqueb commented 2 months ago

/remove-lifecycle stale

Hi, thanks for opening this issue. Sorry it took me so long to look at it.

I'll investigate the length limitation of the resource name. I'm inclined to say it is the name of the resource in the annotation that indirectly has this limitation, but right now I'm not sure.

> It also looks like it leaves macvtap.network.kubevirt.io/ resources from previous runs. How does one remove these?

IIRC (it's been too long since I looked at this project ...) we never implemented support for this. I do agree that it is a known bug.

Would you mind opening a new one?

tstirmllnl commented 2 months ago

@maiqueb No worries. A new issue about `macvtap.network.kubevirt.io/` resources not getting cleared between runs has been created here: https://github.com/kubevirt/macvtap-cni/issues/121