Open clementperon opened 11 months ago
Hi @clementperon,
Error response from daemon: Pool overlaps with other one on this address space
This comes from the daemon's default IPAM driver. Once a subnet is reserved for a specific network, the driver marks that subnet as unavailable, and the next time a network tries to use it, creation fails.
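This check is driver-agnostic, so you can see it even without macvlan. A quick illustration with the default bridge driver (network names below are arbitrary, and this needs a running Docker daemon):

```shell
# Reserving a subnet once succeeds...
docker network create --subnet=192.168.1.0/24 demo-a

# ...but trying to reserve the same subnet again fails,
# because the default IPAM driver tracks pools globally:
docker network create --subnet=192.168.1.0/24 demo-b
# Error response from daemon: Pool overlaps with other one on this address space

# Clean up
docker network rm demo-a
```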
I would like to create multiple macvlan networks with the same subnet, one for each interface.
Could you describe what you're trying to do? Also, how would you expect interfaces, routes, etc. to be configured?
Hi @akerouanton,
Hi @clementperon,
Error response from daemon: Pool overlaps with other one on this address space
This comes from the daemon's default IPAM driver. Once a subnet is reserved for a specific network, the driver marks that subnet as unavailable, and the next time a network tries to use it, creation fails.
Yes, I understand that, but two subnets that belong to two different interfaces should not interfere, right?
I would like to create multiple macvlan networks with the same subnet, one for each interface.
Could you describe what you're trying to do? Also, how would you expect interfaces, routes, etc. to be configured?
I'm spawning containers that manage Devices Under Test (DUTs). Each DUT is connected to a dedicated interface and has a default IP of 192.168.1.201 after a factory reset.
To keep the tests reproducible, I configure every DUT with the same device IP, and they all communicate with the same server IP. But each container has a dedicated interface.
CONTAINER1 (192.168.1.1/24) <--ENO1--> DUT1 (192.168.1.201/24)
CONTAINER2 (192.168.1.1/24) <--ENO2--> DUT2 (192.168.1.201/24)
CONTAINER3 (192.168.1.1/24) <--ENO3--> DUT3 (192.168.1.201/24)
Macvlan networks are associated with different interfaces, so it should not be an issue to have the same IP range on different macvlans.
Macvlan networks are associated with different interfaces, so it should not be an issue to have the same IP range on different macvlans.
Actually dockerd doesn't allow that, because it would be a source of connectivity issues if a container were connected to both networks.
Nonetheless, I thought it'd be possible to use the null IPAM driver to statically assign subnets and IP addresses with no validation whatsoever, but it seems the macvlan / ipvlan drivers implicitly disallow its use. That's something we'd need to fix.
For now, unfortunately the workaround is to not use Docker's networking features (i.e. --network=host) and do it yourself.
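A rough sketch of one variant of that manual approach, using --network=none instead so each container keeps an isolated namespace. The interface and container names are just placeholders, and this assumes root privileges; dockerd never sees these interfaces, so the same 192.168.1.0/24 range can be repeated for eno2, eno3, and so on without hitting the IPAM overlap check:

```shell
# Start the container with no Docker-managed networking
docker run -d --name dut1 --network=none alpine sleep infinity

# Create a macvlan sub-interface on the dedicated parent NIC (eno1 assumed)
ip link add dut1-lan link eno1 type macvlan mode bridge

# Move it into the container's network namespace and configure it there
pid=$(docker inspect -f '{{.State.Pid}}' dut1)
ip link set dut1-lan netns "$pid"
nsenter -t "$pid" -n ip addr add 192.168.1.1/24 dev dut1-lan
nsenter -t "$pid" -n ip link set dut1-lan up
```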
Macvlan networks are associated with different interfaces, so it should not be an issue to have the same IP range on different macvlans.
Actually dockerd doesn't allow that, because it would be a source of connectivity issues if a container were connected to both networks.
Agreed, but they aren't :). So the check only looks at the IP address instead of the (IP address, interface) pair.
Nonetheless, I thought it'd be possible to use the null IPAM driver to statically assign subnets and IP addresses with no validation whatsoever, but it seems the macvlan / ipvlan drivers implicitly disallow its use. That's something we'd need to fix.
I would be very happy if I could bypass the check and assign the IP addresses manually. I will test it to confirm whether it works.
For now, unfortunately the workaround is to not use Docker's networking features (i.e. --network=host) and do it yourself.
@akerouanton thanks for your help. Unfortunately, setting a static IP address without the subnet / ip_range gives me the following error:
failed to create network XXXX: Error response from daemon: ipv4 pool is empty
Description
Hi,
My host computer has several interfaces.
I would like to create multiple macvlan networks with the same subnet, one for each interface.
Am I doing something technically wrong, or is this a Docker limitation?
Reproduce
docker network create -d macvlan --subnet=192.168.1.1/24 -o parent=eno1 network-1
docker network create -d macvlan --subnet=192.168.1.1/24 -o parent=eno2 network-2
Error response from daemon: Pool overlaps with other one on this address space
Expected behavior
This should be acceptable
docker version
docker info
Additional Info
No response