kubeovn / kube-ovn

A Bridge between SDN and Cloud Native (Project under CNCF)
https://kubeovn.github.io/docs/stable/en/
Apache License 2.0

1.13.0: in a custom VPC, after unbinding a security group from a VM created with KubeVirt, the security group still takes effect #3853

Open geniusxiong opened 7 months ago

geniusxiong commented 7 months ago

Bug Report

1.13.0, custom VPC: after unbinding the security group from a VM created with KubeVirt, the security group is still enforced.

Expected Behavior

After the security group is unbound from the KubeVirt VM, it should no longer be enforced.
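
(For context, the unbind here is done through the KubeSphere UI. At the API level it amounts to removing the two `ovn.kubernetes.io/*` annotations from the VM template and restarting the VM. Below is a minimal, hypothetical sketch of an equivalent manual unbind, not necessarily the exact operation used in this report.)

    # Drop the Kube-OVN security-group annotations from the VM template
    # ("~1" escapes "/" in a JSON-Patch path), then restart the VM
    # (e.g. with "virtctl restart i-4xtuup8f") so the new virt-launcher
    # pod starts without them.
    kubectl -n vpc1-ns patch vm i-4xtuup8f --type=json -p='[
      {"op": "remove", "path": "/spec/template/metadata/annotations/ovn.kubernetes.io~1security_groups"},
      {"op": "remove", "path": "/spec/template/metadata/annotations/ovn.kubernetes.io~1port_security"}
    ]'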

Actual Behavior

Steps to Reproduce the Problem

  1. Two VMs, 10.50.1.17 and 10.50.1.9; 10.50.1.9 can SSH to 10.50.1.17 normally (screenshot).

  2. Create a security group that denies access to TCP port 22 (manifest below; a sketch for inspecting the resulting OVN ACLs follows it):

    apiVersion: kubeovn.io/v1
    kind: SecurityGroup
    metadata:
      annotations:
        kubesphere.io/creator: sean
        kubesphere.io/description: ""
      creationTimestamp: "2024-03-13T16:14:29Z"
      generation: 7
      managedFields:
      - apiVersion: kubeovn.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:kubesphere.io/alias-name: {}
              f:kubesphere.io/creator: {}
              f:kubesphere.io/description: {}
          f:spec:
            .: {}
            f:allowSameGroupTraffic: {}
            f:egressRules: {}
            f:ingressRules: {}
        manager: ksc-net-apiserver
        operation: Update
        time: "2024-03-21T16:27:11Z"
      - apiVersion: kubeovn.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:status:
            .: {}
            f:allowSameGroupTraffic: {}
            f:egressLastSyncSuccess: {}
            f:egressMd5: {}
            f:ingressLastSyncSuccess: {}
            f:ingressMd5: {}
            f:portGroup: {}
        manager: kube-ovn-controller
        operation: Update
        time: "2024-03-21T16:27:11Z"
      name: sg-zcf3qmd8
      resourceVersion: "83473302"
      selfLink: /apis/kubeovn.io/v1/security-groups/sg-zcf3qmd8
      uid: c75a5fb6-e12c-429f-9fa0-2b94452d3080
    spec:
      allowSameGroupTraffic: true
      egressRules:
      - ipVersion: ipv4
        policy: allow
        priority: 150
        protocol: all
        remoteAddress: 0.0.0.0/0
        remoteType: address
      ingressRules:
      - ipVersion: ipv4
        policy: drop
        priority: 150
        protocol: all
        remoteAddress: 0.0.0.0/0
        remoteType: address
      - ipVersion: ipv4
        policy: drop
        portRangeMax: 22
        portRangeMin: 22
        priority: 100
        protocol: tcp
        remoteAddress: 0.0.0.0/0
        remoteType: address
      - ipVersion: ipv4
        policy: allow
        priority: 99
        protocol: icmp
        remoteAddress: 0.0.0.0/0
        remoteType: address
      - ipVersion: ipv4
        policy: allow
        portRangeMax: 80
        portRangeMin: 80
        priority: 101
        protocol: tcp
        remoteAddress: 0.0.0.0/0
        remoteType: address
      - ipVersion: ipv4
        policy: allow
        portRangeMax: 443
        portRangeMin: 443
        priority: 102
        protocol: tcp
        remoteAddress: 0.0.0.0/0
        remoteType: address
    status:
      allowSameGroupTraffic: true
      egressLastSyncSuccess: true
      egressMd5: 61496cd990150c3857a7fe00fb8a76b8
      ingressLastSyncSuccess: true
      ingressMd5: 836e17c4cc665817b22a21c958f350ef
      portGroup: ovn.sg.sg.zcf3qmd8
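
Kube-OVN realizes a SecurityGroup as an OVN port group (named in `status.portGroup`, here `ovn.sg.sg.zcf3qmd8`) with ACLs attached to it. A minimal sketch of how the resulting ACLs and member ports can be inspected, assuming the `kubectl ko` plugin shipped with Kube-OVN is installed:

    # Dump the port group created for sg-zcf3qmd8 (name taken from status.portGroup above)
    kubectl ko nbctl list port_group ovn.sg.sg.zcf3qmd8

    # List the ACLs attached to that port group; the tcp/22 drop rule should appear here
    kubectl ko nbctl acl-list ovn.sg.sg.zcf3qmd8
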
  3. Create the VM with KubeVirt and bind the security group to it. VM YAML (a verification sketch follows the virt-launcher Pod YAML below):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
        kubesecNetwork: bridge
        kubesecSvcType: ""
        kubesphere.io/alias-name: cvpc-vm-1
        kubesphere.io/creator: sean
        kubevirt.io/latest-observed-api-version: v1
        kubevirt.io/storage-observed-api-version: v1alpha3
      creationTimestamp: "2024-03-13T15:20:57Z"
      generation: 11
      labels:
        image.ksvm.io: img-alhn7eto
        kubevirt.io/domain: i-4xtuup8f
        kubevirt.io/vm: i-4xtuup8f
        virtualization.kubesphere.io/os-family: centos
        virtualization.kubesphere.io/os-platform: linux
        virtualization.kubesphere.io/os-version: 7.9_64bit
      managedFields:
      - apiVersion: kubevirt.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:kubesecNetwork: {}
              f:kubesecSvcType: {}
              f:kubesphere.io/alias-name: {}
              f:kubesphere.io/creator: {}
            f:labels:
              .: {}
              f:bs.network.io/security_groups: {}
              f:bs.network.io/subnets: {}
              f:image.ksvm.io: {}
              f:kubevirt.io/domain: {}
              f:kubevirt.io/vm: {}
              f:virtualization.kubesphere.io/os-family: {}
              f:virtualization.kubesphere.io/os-platform: {}
              f:virtualization.kubesphere.io/os-version: {}
            f:ownerReferences: {}
          f:spec:
            .: {}
            f:template:
              .: {}
              f:metadata:
                .: {}
                f:annotations:
                  .: {}
                  f:ovn.kubernetes.io/port_security: {}
                  f:ovn.kubernetes.io/security_groups: {}
                f:creationTimestamp: {}
                f:labels:
                  .: {}
                  f:kubevirt.io/domain: {}
                  f:kubevirt.io/vm: {}
                f:name: {}
                f:namespace: {}
              f:spec:
                .: {}
                f:dnsConfig:
                  .: {}
                  f:nameservers: {}
                f:dnsPolicy: {}
                f:domain:
                  .: {}
                  f:cpu:
                    .: {}
                    f:cores: {}
                  f:devices:
                    .: {}
                    f:disks: {}
                    f:interfaces: {}
                  f:resources:
                    .: {}
                    f:requests:
                      .: {}
                      f:memory: {}
                f:networks: {}
                f:terminationGracePeriodSeconds: {}
                f:volumes: {}
        manager: manager
        operation: Update
        time: "2024-03-14T17:42:39Z"
      - apiVersion: kubevirt.io/v1alpha3
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              f:kubevirt.io/latest-observed-api-version: {}
              f:kubevirt.io/storage-observed-api-version: {}
          f:spec:
            f:runStrategy: {}
          f:status:
            .: {}
            f:conditions: {}
            f:created: {}
            f:printableStatus: {}
            f:ready: {}
            f:volumeSnapshotStatuses: {}
        manager: Go-http-client
        operation: Update
        time: "2024-03-15T08:19:31Z"
      name: i-4xtuup8f
      namespace: vpc1-ns
      ownerReferences:
      - apiVersion: kubevrt.bosssoft.com/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: Virtualmachines
        name: i-4xtuup8f
        uid: 8dcc5091-c5f2-4ca1-8f40-4c13c6752a05
      resourceVersion: "78470693"
      selfLink: /apis/kubevirt.io/v1/namespaces/vpc1-ns/virtualmachines/i-4xtuup8f
      uid: 5e615705-9884-496f-92da-d81dbbb0658b
    spec:
      runStrategy: Always
      template:
        metadata:
          annotations:
            ovn.kubernetes.io/port_security: "true"
            ovn.kubernetes.io/security_groups: sg-zcf3qmd8
          creationTimestamp: null
          labels:
            kubevirt.io/domain: i-4xtuup8f
            kubevirt.io/vm: i-4xtuup8f
          name: i-4xtuup8f
          namespace: vpc1-ns
        spec:
          dnsConfig:
            nameservers:
            - 58.22.96.66
          dnsPolicy: None
          domain:
            cpu:
              cores: 2
            devices:
              disks:
              - disk:
                  bus: virtio
                name: disk-0
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - bridge: {}
                name: i-4xtuup8f
            machine:
              type: q35
            resources:
              requests:
                memory: 4Gi
          networks:
          - name: i-4xtuup8f
            pod: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - dataVolume:
              name: datavolume-lcppwsbd
            name: disk-0
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                updates:
                network:
                when: ['boot']
                timezone: Asia/Shanghai
                packages:
                - cloud-init
                package_update: true
                ssh_pwauth: true
                disable_root: false
                chpasswd: {"list":"root:123456",expire: False}
                runcmd:
                - sed -i "/PermitRootLogin/s/^.*$/PermitRootLogin yes/g" /etc/ssh/sshd_config
                - systemctl restart sshd.service
            name: cloudinitdisk
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-03-15T08:19:16Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: null
        message: 'cannot migrate VMI: PVC datavolume-lcppwsbd is not shared, live migration
          requires that all PVCs must be shared (using ReadWriteMany access mode)'
        reason: DisksNotLiveMigratable
        status: "False"
        type: LiveMigratable
      - lastProbeTime: "2024-03-15T08:19:31Z"
        lastTransitionTime: null
        status: "True"
        type: AgentConnected
      created: true
      printableStatus: Running
      ready: true
      volumeSnapshotStatuses:
      - enabled: false
        name: disk-0
        reason: 'No VolumeSnapshotClass: Volume snapshots are not configured for this
          StorageClass [local] [disk-0]'
      - enabled: false
        name: cloudinitdisk
        reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]

virt-launcher Pod YAML:

kind: Pod
apiVersion: v1
metadata:
  name: virt-launcher-i-4xtuup8f-vftdk
  generateName: virt-launcher-i-4xtuup8f-
  namespace: vpc1-ns
  labels:
    kubevirt.io: virt-launcher
    kubevirt.io/created-by: b5c9fc03-7472-48db-bb0c-0f84ac70093a
    kubevirt.io/domain: i-4xtuup8f
    kubevirt.io/vm: i-4xtuup8f
  annotations:
    kubevirt.io/domain: i-4xtuup8f
    kubevirt.io/migrationTransportUnix: 'true'
    ovn.kubernetes.io/allocated: 'true'
    ovn.kubernetes.io/cidr: 10.50.1.0/24
    ovn.kubernetes.io/gateway: 10.50.1.1
    ovn.kubernetes.io/ip_address: 10.50.1.17
    ovn.kubernetes.io/logical_router: vpc1
    ovn.kubernetes.io/logical_switch: vpc1-subnet1
    ovn.kubernetes.io/mac_address: '00:00:00:F8:3E:A2'
    ovn.kubernetes.io/pod_nic_type: veth-pair
    ovn.kubernetes.io/port_security: 'true'
    ovn.kubernetes.io/routed: 'true'
    ovn.kubernetes.io/security_groups: sg-zcf3qmd8
    ovn.kubernetes.io/virtualmachine: i-4xtuup8f
    post.hook.backup.velero.io/command: >-
      ["/usr/bin/virt-freezer", "--unfreeze", "--name", "i-4xtuup8f",
      "--namespace", "vpc1-ns"]
    post.hook.backup.velero.io/container: compute
    pre.hook.backup.velero.io/command: >-
      ["/usr/bin/virt-freezer", "--freeze", "--name", "i-4xtuup8f",
      "--namespace", "vpc1-ns"]
    pre.hook.backup.velero.io/container: compute
spec:
  volumes:
    - name: private
      emptyDir: {}
    - name: public
      emptyDir: {}
    - name: sockets
      emptyDir: {}
    - name: disk-0
      persistentVolumeClaim:
        claimName: datavolume-lcppwsbd
    - name: virt-bin-share-dir
      emptyDir: {}
    - name: libvirt-runtime
      emptyDir: {}
    - name: ephemeral-disks
      emptyDir: {}
    - name: container-disks
      emptyDir: {}
    - name: hotplug-disks
      emptyDir: {}
  containers:
    - name: compute
      image: 'quay.io/kubevirt/virt-launcher:v0.50.0'
      command:
        - /usr/bin/virt-launcher
        - '--qemu-timeout'
        - 328s
        - '--name'
        - i-4xtuup8f
        - '--uid'
        - b5c9fc03-7472-48db-bb0c-0f84ac70093a
        - '--namespace'
        - vpc1-ns
        - '--kubevirt-share-dir'
        - /var/run/kubevirt
        - '--ephemeral-disk-dir'
        - /var/run/kubevirt-ephemeral-disks
        - '--container-disk-dir'
        - /var/run/kubevirt/container-disks
        - '--grace-period-seconds'
        - '45'
        - '--hook-sidecars'
        - '0'
        - '--ovmf-path'
        - /usr/share/OVMF
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      resources:
        limits:
          devices.kubevirt.io/kvm: '1'
          devices.kubevirt.io/tun: '1'
          devices.kubevirt.io/vhost-net: '1'
        requests:
          cpu: 200m
          devices.kubevirt.io/kvm: '1'
          devices.kubevirt.io/tun: '1'
          devices.kubevirt.io/vhost-net: '1'
          ephemeral-storage: 50M
          memory: '4490002433'
      volumeMounts:
        - name: private
          mountPath: /var/run/kubevirt-private
        - name: public
          mountPath: /var/run/kubevirt
        - name: ephemeral-disks
          mountPath: /var/run/kubevirt-ephemeral-disks
        - name: container-disks
          mountPath: /var/run/kubevirt/container-disks
          mountPropagation: HostToContainer
        - name: hotplug-disks
          mountPath: /var/run/kubevirt/hotplug-disks
          mountPropagation: HostToContainer
        - name: libvirt-runtime
          mountPath: /var/run/libvirt
        - name: sockets
          mountPath: /var/run/kubevirt/sockets
        - name: disk-0
          mountPath: /var/run/kubevirt-private/vmi-disks/disk-0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          add:
            - NET_BIND_SERVICE
            - SYS_NICE
          drop:
            - NET_RAW
        privileged: false
        runAsUser: 0
  restartPolicy: Never
  terminationGracePeriodSeconds: 60
  dnsPolicy: None
  nodeSelector:
    kubevirt.io/schedulable: 'true'
  serviceAccountName: default
  serviceAccount: default
  automountServiceAccountToken: false
  nodeName: master-0
  securityContext:
    seLinuxOptions:
      type: virt_launcher.process
    runAsUser: 0
  hostname: i-4xtuup8f
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
  dnsConfig:
    nameservers:
      - 58.22.96.66
  readinessGates:
    - conditionType: kubevirt.io/virtual-machine-unpaused
  enableServiceLinks: false
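
Once the VM is running with the binding in place, the security group should be visible both as annotations on the virt-launcher Pod and as the Pod's logical switch port being a member of the port group. A rough verification sketch, assuming the `kubectl ko` plugin and the usual `<pod-name>.<namespace>` logical-switch-port naming (both are assumptions about this setup):

    # The annotations Kube-OVN acts on, as seen on the launcher pod
    kubectl -n vpc1-ns get pod virt-launcher-i-4xtuup8f-vftdk \
      -o jsonpath='{.metadata.annotations.ovn\.kubernetes\.io/security_groups}{"\n"}'

    # The pod's logical switch port should now be listed in the security group's port group
    kubectl ko nbctl --columns=name,ports list port_group ovn.sg.sg.zcf3qmd8
    kubectl ko nbctl lsp-get-port-security virt-launcher-i-4xtuup8f-vftdk.vpc1-ns
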
  4. After binding the security group and restarting the VM, the security group takes effect and SSH fails (screenshot).

  5. Unbind the security group and restart the VM.

  6. VM YAML after the restart:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
        kubesecNetwork: bridge
        kubesecSvcType: ""
        kubesphere.io/alias-name: cvpc-vm-1
        kubesphere.io/creator: sean
        kubevirt.io/latest-observed-api-version: v1
        kubevirt.io/storage-observed-api-version: v1alpha3
      creationTimestamp: "2024-03-13T15:20:57Z"
      generation: 12
      labels:
        image.ksvm.io: img-alhn7eto
        kubevirt.io/domain: i-4xtuup8f
        kubevirt.io/vm: i-4xtuup8f
        virtualization.kubesphere.io/os-family: centos
        virtualization.kubesphere.io/os-platform: linux
        virtualization.kubesphere.io/os-version: 7.9_64bit
      managedFields:
      - apiVersion: kubevirt.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:kubesecNetwork: {}
              f:kubesecSvcType: {}
              f:kubesphere.io/alias-name: {}
              f:kubesphere.io/creator: {}
            f:labels:
              .: {}
              f:bs.network.io/security_groups: {}
              f:bs.network.io/subnets: {}
              f:image.ksvm.io: {}
              f:kubevirt.io/domain: {}
              f:kubevirt.io/vm: {}
              f:virtualization.kubesphere.io/os-family: {}
              f:virtualization.kubesphere.io/os-platform: {}
              f:virtualization.kubesphere.io/os-version: {}
            f:ownerReferences: {}
          f:spec:
            .: {}
            f:template:
              .: {}
              f:metadata:
                .: {}
                f:creationTimestamp: {}
                f:labels:
                  .: {}
                  f:kubevirt.io/domain: {}
                  f:kubevirt.io/vm: {}
                f:name: {}
                f:namespace: {}
              f:spec:
                .: {}
                f:dnsConfig:
                  .: {}
                  f:nameservers: {}
                f:dnsPolicy: {}
                f:domain:
                  .: {}
                  f:cpu:
                    .: {}
                    f:cores: {}
                  f:devices:
                    .: {}
                    f:disks: {}
                    f:interfaces: {}
                  f:resources:
                    .: {}
                    f:requests:
                      .: {}
                      f:memory: {}
                f:networks: {}
                f:terminationGracePeriodSeconds: {}
                f:volumes: {}
        manager: manager
        operation: Update
        time: "2024-03-14T17:42:39Z"
      - apiVersion: kubevirt.io/v1alpha3
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              f:kubevirt.io/latest-observed-api-version: {}
              f:kubevirt.io/storage-observed-api-version: {}
          f:spec:
            f:runStrategy: {}
          f:status:
            .: {}
            f:conditions: {}
            f:created: {}
            f:printableStatus: {}
            f:ready: {}
            f:volumeSnapshotStatuses: {}
        manager: Go-http-client
        operation: Update
        time: "2024-03-21T16:48:52Z"
      name: i-4xtuup8f
      namespace: vpc1-ns
      ownerReferences:
      - apiVersion: kubevrt.bosssoft.com/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: Virtualmachines
        name: i-4xtuup8f
        uid: 8dcc5091-c5f2-4ca1-8f40-4c13c6752a05
      resourceVersion: "83485226"
      selfLink: /apis/kubevirt.io/v1/namespaces/vpc1-ns/virtualmachines/i-4xtuup8f
      uid: 5e615705-9884-496f-92da-d81dbbb0658b
    spec:
      runStrategy: Always
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/domain: i-4xtuup8f
            kubevirt.io/vm: i-4xtuup8f
          name: i-4xtuup8f
          namespace: vpc1-ns
        spec:
          dnsConfig:
            nameservers:
            - 58.22.96.66
          dnsPolicy: None
          domain:
            cpu:
              cores: 2
            devices:
              disks:
              - disk:
                  bus: virtio
                name: disk-0
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - bridge: {}
                name: i-4xtuup8f
            machine:
              type: q35
            resources:
              requests:
                memory: 4Gi
          networks:
          - name: i-4xtuup8f
            pod: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - dataVolume:
              name: datavolume-lcppwsbd
            name: disk-0
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                updates:
                network:
                when: ['boot']
                timezone: Asia/Shanghai
                packages:
                - cloud-init
                package_update: true
                ssh_pwauth: true
                disable_root: false
                chpasswd: {"list":"root:123456",expire: False}
                runcmd:
                - sed -i "/PermitRootLogin/s/^.*$/PermitRootLogin yes/g" /etc/ssh/sshd_config
                - systemctl restart sshd.service
            name: cloudinitdisk
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-03-21T16:48:40Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: null
        message: 'cannot migrate VMI: PVC datavolume-lcppwsbd is not shared, live migration
          requires that all PVCs must be shared (using ReadWriteMany access mode)'
        reason: DisksNotLiveMigratable
        status: "False"
        type: LiveMigratable
      - lastProbeTime: "2024-03-21T16:48:52Z"
        lastTransitionTime: null
        status: "True"
        type: AgentConnected
      created: true
      printableStatus: Running
      ready: true
      volumeSnapshotStatuses:
      - enabled: false
        name: disk-0
        reason: 'No VolumeSnapshotClass: Volume snapshots are not configured for this
          StorageClass [local] [disk-0]'
      - enabled: false
        name: cloudinitdisk
        reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]

virt-launcher Pod YAML:

kind: Pod
apiVersion: v1
metadata:
  name: virt-launcher-i-4xtuup8f-8rtgz
  generateName: virt-launcher-i-4xtuup8f-
  namespace: vpc1-ns
  labels:
    kubevirt.io: virt-launcher
    kubevirt.io/created-by: ae4074df-6b1e-4eee-9909-5acca809635d
    kubevirt.io/domain: i-4xtuup8f
    kubevirt.io/vm: i-4xtuup8f
  annotations:
    kubevirt.io/domain: i-4xtuup8f
    kubevirt.io/migrationTransportUnix: 'true'
    ovn.kubernetes.io/allocated: 'true'
    ovn.kubernetes.io/cidr: 10.50.1.0/24
    ovn.kubernetes.io/gateway: 10.50.1.1
    ovn.kubernetes.io/ip_address: 10.50.1.17
    ovn.kubernetes.io/logical_router: vpc1
    ovn.kubernetes.io/logical_switch: vpc1-subnet1
    ovn.kubernetes.io/mac_address: '00:00:00:F8:3E:A2'
    ovn.kubernetes.io/pod_nic_type: veth-pair
    ovn.kubernetes.io/routed: 'true'
    ovn.kubernetes.io/virtualmachine: i-4xtuup8f
    post.hook.backup.velero.io/command: >-
      ["/usr/bin/virt-freezer", "--unfreeze", "--name", "i-4xtuup8f",
      "--namespace", "vpc1-ns"]
    post.hook.backup.velero.io/container: compute
    pre.hook.backup.velero.io/command: >-
      ["/usr/bin/virt-freezer", "--freeze", "--name", "i-4xtuup8f",
      "--namespace", "vpc1-ns"]
    pre.hook.backup.velero.io/container: compute
spec:
  volumes:
    - name: private
      emptyDir: {}
    - name: public
      emptyDir: {}
    - name: sockets
      emptyDir: {}
    - name: disk-0
      persistentVolumeClaim:
        claimName: datavolume-lcppwsbd
    - name: virt-bin-share-dir
      emptyDir: {}
    - name: libvirt-runtime
      emptyDir: {}
    - name: ephemeral-disks
      emptyDir: {}
    - name: container-disks
      emptyDir: {}
    - name: hotplug-disks
      emptyDir: {}
  containers:
    - name: compute
      image: 'quay.io/kubevirt/virt-launcher:v0.50.0'
      command:
        - /usr/bin/virt-launcher
        - '--qemu-timeout'
        - 331s
        - '--name'
        - i-4xtuup8f
        - '--uid'
        - ae4074df-6b1e-4eee-9909-5acca809635d
        - '--namespace'
        - vpc1-ns
        - '--kubevirt-share-dir'
        - /var/run/kubevirt
        - '--ephemeral-disk-dir'
        - /var/run/kubevirt-ephemeral-disks
        - '--container-disk-dir'
        - /var/run/kubevirt/container-disks
        - '--grace-period-seconds'
        - '45'
        - '--hook-sidecars'
        - '0'
        - '--ovmf-path'
        - /usr/share/OVMF
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      resources:
        limits:
          devices.kubevirt.io/kvm: '1'
          devices.kubevirt.io/tun: '1'
          devices.kubevirt.io/vhost-net: '1'
        requests:
          cpu: 200m
          devices.kubevirt.io/kvm: '1'
          devices.kubevirt.io/tun: '1'
          devices.kubevirt.io/vhost-net: '1'
          ephemeral-storage: 50M
          memory: '4490002433'
      volumeMounts:
        - name: private
          mountPath: /var/run/kubevirt-private
        - name: public
          mountPath: /var/run/kubevirt
        - name: ephemeral-disks
          mountPath: /var/run/kubevirt-ephemeral-disks
        - name: container-disks
          mountPath: /var/run/kubevirt/container-disks
          mountPropagation: HostToContainer
        - name: hotplug-disks
          mountPath: /var/run/kubevirt/hotplug-disks
          mountPropagation: HostToContainer
        - name: libvirt-runtime
          mountPath: /var/run/libvirt
        - name: sockets
          mountPath: /var/run/kubevirt/sockets
        - name: disk-0
          mountPath: /var/run/kubevirt-private/vmi-disks/disk-0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          add:
            - NET_BIND_SERVICE
            - SYS_NICE
          drop:
            - NET_RAW
        privileged: false
        runAsUser: 0
  restartPolicy: Never
  terminationGracePeriodSeconds: 60
  dnsPolicy: None
  nodeSelector:
    kubevirt.io/schedulable: 'true'
  serviceAccountName: default
  serviceAccount: default
  automountServiceAccountToken: false
  nodeName: master-0
  securityContext:
    seLinuxOptions:
      type: virt_launcher.process
    runAsUser: 0
  hostname: i-4xtuup8f
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
  dnsConfig:
    nameservers:
      - 58.22.96.66
  readinessGates:
    - conditionType: kubevirt.io/virtual-machine-unpaused
  enableServiceLinks: false

As shown above, the VM YAML and the virt-launcher Pod YAML no longer carry the `ovn.kubernetes.io/port_security: "true"` and `ovn.kubernetes.io/security_groups: sg-zcf3qmd8` annotations, yet the security group is still enforced and SSH to the VM still fails. After a proper unbind, SSH should succeed again (screenshot).
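
A plausible way to narrow this down (not verified here) is to check whether kube-ovn-controller actually removed the new logical switch port from the security group's port group and cleared its port security once the annotations disappeared; if the port is still listed, the problem lies in the controller's unbind path rather than in OVN itself. A sketch, again assuming the `kubectl ko` plugin and the `<pod-name>.<namespace>` port naming:

    # Is the new launcher pod's port still a member of the security group's port group?
    kubectl ko nbctl --columns=name,ports list port_group ovn.sg.sg.zcf3qmd8

    # Does the port still carry port security from the previous binding?
    kubectl ko nbctl lsp-get-port-security virt-launcher-i-4xtuup8f-8rtgz.vpc1-ns

    # kube-ovn-controller logs around the restart may show whether the unbind was processed
    kubectl -n kube-system logs -l app=kube-ovn-controller --tail=200 | grep -i 'security\|sg'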

Additional Info

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Kube-OVN version: 1.13.0
OS: NFS Server 4.0 (G193)
Kernel: 4.19.113-3.nfs.x86_64
github-actions[bot] commented 5 months ago

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

bobz965 commented 5 months ago

@geniusxiong Does this bug still exist?

geniusxiong commented 5 months ago

> @geniusxiong Does this bug still exist?

Yes, it still exists.

github-actions[bot] commented 3 months ago

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

github-actions[bot] commented 1 month ago

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.