Closed · mkuurstra closed this issue 1 year ago
I think https://github.com/ipspace/netlab/issues/701 could fix this
Thanks a million for reporting this. It's an interesting side effect of how vagrant-libvirt behavior changed over time.
When the netlab project started, vagrant-libvirt required a predefined virtual switch for the management network, so the installation script created one. However, later versions of the same plugin deleted that virtual switch every time `vagrant destroy` was executed, so it was safe to use the same address range with containerlab as the management network.
In your case, executing `netlab test libvirt` should solve the problem (as `netlab down` calling `vagrant destroy` will remove the management virtual switch and its IP subnet).
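A sketch of that cleanup, assuming a working libvirt setup (the comments describe what the source above says happens under the hood):

```shell
# Hedged sketch: 'netlab test libvirt' brings up and tears down a test
# lab; the teardown (netlab down -> vagrant destroy) removes the
# leftover management virtual switch and its IP subnet, freeing the
# address range for containerlab.
netlab test libvirt
```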
The solution is pretty simple: do not create the `vagrant-libvirt` network during the `netlab install libvirt` process, as it's automatically created by `netlab up` and destroyed by `vagrant destroy`. Will fix...
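Until that fix lands, an affected installation could also drop the leftover network by hand. A sketch, assuming the network carries the plugin's default name `vagrant-libvirt` (verify with `virsh net-list --all` first):

```shell
# Manual cleanup sketch; assumes the stale network is named
# 'vagrant-libvirt' (the name the install script used)
virsh net-destroy vagrant-libvirt     # stop the running network
virsh net-undefine vagrant-libvirt    # remove its persistent definition
```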
You also opened a very interesting can of worms I hadn't thought about when implementing #706 :( That will be a doozy...
Describe the bug
Containerlab does not work out-of-the-box
To Reproduce
Create `topology.yml`, then run `netlab up`.
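A minimal containerlab topology that should trigger this out of the box might look like the following; the node names and device type are illustrative assumptions, not the reporter's actual file:

```yaml
# Hypothetical minimal netlab topology for the clab provider.
# It relies on the default management subnet, which collides with
# the virtual switch left behind by 'netlab install libvirt'.
provider: clab
defaults.device: frr

nodes: [ r1, r2 ]
links: [ r1-r2 ]
```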
Workaround
Add `defaults.addressing.mgmt.ipv4: 192.168.123.0/24` to the topology YAML. Now a connection is possible and the IP interfaces look sane.

Expected behavior
I think netlab using clab should work with the supplied defaults. It seems like this was envisioned, because a default is supplied here, but it appears to get overwritten by the value here.
Output
Version
Additional context