After a VM save/restore or other change to the configuration of the network adapter in VirtualBox, the next ~3 connection attempts get a "lost carrier" error logged by leaix, and then everything works as usual.
I also see the same behaviour when running NetBSD 1.1 (using the driver that this one is derived from).
Examining a bit further: if I do, say, a 10-count ping while in this situation, I consistently get four failures and then success:
# ping 8.8.8.8 64 10
02:37:48 leaix: lost carrier
02:37:49 leaix: lost carrier
02:37:50 leaix: lost carrier
02:37:51 leaix: lost carrier
PING 8.8.8.8: 64 data bytes
72 bytes from 8.8.8.8: icmp_seq=4. time=20. ms
72 bytes from 8.8.8.8: icmp_seq=5. time=20. ms
72 bytes from 8.8.8.8: icmp_seq=6. time=20. ms
72 bytes from 8.8.8.8: icmp_seq=7. time=20. ms
72 bytes from 8.8.8.8: icmp_seq=8. time=20. ms
72 bytes from 8.8.8.8: icmp_seq=9. time=20. ms
----8.8.8.8 PING Statistics----
10 packets transmitted, 6 packets received, 40% packet loss
I get this same behaviour regardless of how long I wait between the network configuration event or suspend/resume and starting the ping.
Conjecture: the carrier loss may be something VirtualBox deliberately enforces so that the VM notices a possible network topology change (in case dynamic configuration is in use), rather than unintended behaviour. If so, the best thing to do might be to detect it and get it over with faster.
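One hypothetical way to "get it over with faster" would be for the driver to retry a frame itself after a lost-carrier transmit error, up to some small limit, instead of passing each failure up the stack. Everything below is an assumption for illustration — the names, the threshold, and the retry policy are not from the actual leaix source — but the counting logic itself is self-contained:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical policy: after a lost-carrier transmit error, retry the
 * same frame inside the driver up to MAX_LCAR_RETRIES times before
 * reporting the failure.  The counter resets on any successful send.
 * MAX_LCAR_RETRIES = 4 matches the four consecutive failures observed
 * after a VirtualBox save/restore. */
#define MAX_LCAR_RETRIES 4

struct lcar_state {
    int consecutive_lcar;   /* lost-carrier errors since last good send */
};

/* Called from the (simulated) transmit-completion path.  Returns true
 * when the driver should retransmit the frame itself rather than log
 * "lost carrier" and drop it. */
static bool
lcar_should_retry(struct lcar_state *st, bool lost_carrier)
{
    if (!lost_carrier) {
        st->consecutive_lcar = 0;   /* carrier is back; clear the count */
        return false;
    }
    return ++st->consecutive_lcar <= MAX_LCAR_RETRIES;
}
```

Against the pattern observed above (four lost-carrier events, then success), this policy would absorb all four failures in the driver, so a ping started right after the save/restore would see no packet loss — at the cost of the driver silently eating genuine carrier losses for the first few frames.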