kubevirt / kubevirt-velero-plugin

Plugin to Velero which automates backing up and restoring KubeVirt/CDI objects

IP is empty in restored VM because of MAC address conflict #214

Closed: 27149chen closed this issue 3 months ago

27149chen commented 8 months ago

What happened: The IP is empty in the restored VM because of a MAC address conflict.

What you expected to happen: The IP is set correctly.

How to reproduce it (as minimally and precisely as possible):

  1. Back up a VM with one PVC (populated by a DataVolume, with CentOS 7 in it).
  2. Restore it to another namespace (the PVC data is restored by restic in Velero); a sketch of the commands is shown after this list.
  3. The new VMI is started on the same node.
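
For reference, a sketch of the Velero commands for these steps. The backup, restore, and namespace names are placeholders, and on newer Velero versions the restic flag is spelled --default-volumes-to-fs-backup instead:

velero backup create vm-backup --include-namespaces source-ns --default-volumes-to-restic
velero restore create vm-restore --from-backup vm-backup --namespace-mappings source-ns:target-ns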

Additional context:

ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6  bytes 416 (416.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 416 (416.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 1e:90:ea:e6:c9:16 brd ff:ff:ff:ff:ff:ff

The MAC of eth0 in the restored guest is 1e:90:ea:e6:c9:16.

Interfaces in the VMI status:

interfaces:
    - infoSource: domain, guest-agent
      interfaceName: eth0
      mac: 1e:90:ea:e6:c9:16
      name: default
      queueCount: 1

The MAC here is also 1e:90:ea:e6:c9:16, matching what the guest reports.
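
The same value can be read straight from the VMI status; the VMI name and namespace below are placeholders:

kubectl get vmi <vmi-name> -n <target-namespace> -o jsonpath='{.status.interfaces[0].mac}'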

But /etc/sysconfig/network-scripts/ifcfg-eth0 inside the guest still contains:

# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=dhcp
DEVICE=eth0
HWADDR=f6:91:68:c6:76:ab
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no

The MAC here is f6:91:68:c6:76:ab, the same as in the original VM. Since this cloud-init-generated config pins the old HWADDR, it no longer matches eth0's new MAC, so the interface stays DOWN and never gets an IP via DHCP.
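
A possible workaround (a sketch, not an official fix): drop the stale HWADDR pin inside the guest so the DHCP config applies to eth0's new MAC again:

# inside the restored guest, as root
sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network

Alternatively, the original MAC could be pinned in the restored VM spec via the interface's macAddress field, but that would reintroduce the conflict if the original VM is still running on the same network.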

Environment:

kubevirt-bot commented 5 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 4 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot commented 3 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot commented 3 months ago

@kubevirt-bot: Closing this issue.

In response to [this](https://github.com/kubevirt/kubevirt-velero-plugin/issues/214#issuecomment-2198201379):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.