larsks / blog.oddbit.com


post/2018-03-12-using-docker-macvlan-networks/ #3


utterances-bot commented 5 years ago

Using Docker macvlan networks · The Odd Bit

A question that crops up regularly on #docker is “How do I attach a container directly to my local network?” One possible answer to that question is the macvlan network type, which lets you create “clones” of a physical interface on your host and use that to attach containers directly to your local network. For the most part it works great, but it does come with some minor caveats and limitations.

https://blog.oddbit.com/post/2018-03-12-using-docker-macvlan-networks/

v-marinkov commented 5 years ago

Thanks! I added your solution for the bridge to /etc/network/interfaces and now it runs on boot too!

kishor83 commented 5 years ago

I have a Debian Linux PC with two network interfaces: X1 & X2

Port X1 has IP : 132.186.90.12

Port X2 has IP : 172.17.0.111

I created a bridge network so the containers have their own network, using the following command on this Debian Linux PC:

docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet

Then I create container instances using the following command:

docker run --name container1 --rm -it -d --net=mynet mydockerimg

A container gets created with IP 192.168.0.2 and broadcasts some data over port 4840.

I am able to ping "192.168.0.2" from X1 successfully,

but a ping of "192.168.0.2" from X2 fails. What could be the reason?

teamjetpoop commented 4 years ago

May I ask how this might be persisted using Ubuntu Server 18.04 LTS's netplan style of network configuration persistence? Does anyone know? ^___^

larsks commented 4 years ago

I suspect -- but I don't know for certain -- that you can't extend netplan like this using simple shell scripts. The solution described here uses a mechanism that has been deprecated on Red Hat-style systems as well, so this is mostly of historical interest.
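
That said, netplan on Ubuntu Server normally renders to systemd-networkd, and networkd can create macvlan interfaces natively. Here is a rough, untested sketch using the example names and addresses from the article (you would also need to add MACVLAN=mynet-shim to the [Network] section of the parent interface's .network file, which netplan generates):

/etc/systemd/network/25-mynet-shim.netdev:

[NetDev]
Name=mynet-shim
Kind=macvlan

[MACVLAN]
Mode=bridge

/etc/systemd/network/25-mynet-shim.network:

[Match]
Name=mynet-shim

[Network]
Address=192.168.1.223/32

[Route]
Destination=192.168.1.192/27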

kevindd992002 commented 4 years ago

@larsks

I followed this guide on my Docker setup hosted on my Synology NAS and it solves exactly what it is intended to solve: host-container communication when using the macvlan network driver for any of the docker containers.

There is one issue though: DLNA stopped working for the Synology NAS. My DLNA clients in my network cannot see the Synology media server anymore. Since the virtual macvlan interface on the host is bridged with the physical interface of the NAS, the physical clients on my network practically see the NAS with two IP addresses. As soon as I remove the virtual macvlan interface on the host, everything works fine again.

What could be causing this? I hope you can help me with this. Thanks.

larsks commented 4 years ago

Kevin, thanks for reading! I'm glad this post helped out a little bit. I'm sorry to say I don't have an immediate suggestion w/r/t your DLNA problem. If it's an issue of DLNA-related broadcasts going out the wrong interface, that's something that could maybe be solved through additional routing entries or through iptables, but I don't know enough about how DLNA operates to suggest a solution.

kevindd992002 commented 4 years ago

No worries. I did a little bit of experimenting and it looks like it got fixed when I changed the CIDR of the host macvlan interface to be the same as my physical network's (in this case, /24). This makes sense because if the Synology somehow decides to broadcast out of that virtual macvlan interface and its CIDR is /32 as in your guide, it will not use the correct broadcast address.
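
For reference, using the article's example addresses rather than my own, the change amounts to something like this when adding the address to the shim interface:

ip addr add 192.168.1.223/24 dev mynet-shim

instead of the /32 address, so the shim ends up with the LAN's real broadcast address (192.168.1.255).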

If that makes sense to you too, a quick edit to your guide would be golden :)

Selmaks commented 4 years ago

Thanks Lars for the workaround. This has helped me and I guess a few others using docker on unraid. https://forums.unraid.net/topic/84229-dynamix-wireguard-vpn/?do=findComment&comment=807410

larsks commented 4 years ago

Thanks for the comment! I'm glad the article was helpful.

s-h-a-r-d commented 4 years ago

I use WireGuard on my host machine, which routes the traffic of my main interface through a VPN server. What steps should be taken to route the traffic of macvlan containers through the VPN as well?

larsks commented 4 years ago

I'm not familiar with WireGuard. It sounds like you may be able to get what you want by setting the default route inside your container so that it uses the VPN. Note that in order to set routes in a container (or make any other changes to the network from inside the container), you will need at least the CAP_NET_ADMIN capability (so docker run --cap-add=NET_ADMIN ...).

You're probably better off asking someone who has worked with WireGuard and is familiar with how it sets up the host.
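
As a rough sketch of that idea (the network name and gateway address below are only placeholders; use whatever your WireGuard setup actually provides):

docker run --rm -it --cap-add=NET_ADMIN --net=mynet alpine sh
# then, inside the container:
ip route replace default via 192.168.1.20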

russtaylor commented 4 years ago

This was super helpful! I searched and searched, but this is the only article that got me up & running with my macvlan containers being able to contact their host machine.

Thanks so much!

larsks commented 4 years ago

@russtaylor I'm glad it helped!

fanaticDavid commented 4 years ago

Thank you so much for these clear instructions! I was able to allow communication between the macvlan'd containers and the host with the following commands:

ip link add macvlan-shim link eno1 type macvlan mode bridge
ip addr add 192.168.100.63/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.100.56/29 dev macvlan-shim

Do I just copy/paste this in "/etc/network/interfaces" to make it persistent? Or how would I go about that?
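
I'm guessing it would be up hooks under the existing stanza for eno1, something like this (untested, and assuming eno1 is configured with DHCP), but I'm not sure:

auto eno1
iface eno1 inet dhcp
    up ip link add macvlan-shim link eno1 type macvlan mode bridge
    up ip addr add 192.168.100.63/32 dev macvlan-shim
    up ip link set macvlan-shim up
    up ip route add 192.168.100.56/29 dev macvlan-shim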

kevindd992002 commented 4 years ago

No worries. I did a little bit of experimenting and it looks like it got fixed when I changed the CIDR of the host macvlan interface to be the same as my physical network's (in this case, /24). This makes sense because if the Synology somehow decides to broadcast out of that virtual macvlan interface and its CIDR is /32 as in your guide, it will not use the correct broadcast address.

If that makes sense to you too, a quick edit to your guide would be golden :)

@larsks

Did my workaround make sense to you too?

HavocW commented 4 years ago

It works fine, Thank you! Anyone know how to make it persistent?

kevindd992002 commented 4 years ago

@larsks Update please?

larsks commented 4 years ago

@kevindd992002 If you would like the right to be pushy about responses we can discuss a contract. Otherwise, this is entirely a volunteer effort that I engage in when I have both the free time and the inclination, both of which are now lacking.

Caaruzo commented 4 years ago

Hi there. I am new to the whole Docker thing and am trying to do the following; I hope someone can help me, please.

I am running a Debian 10.3 server with Docker and Portainer. On this server I want to run different containers and make them reachable within my physical network, with their own IP per container.

For example, a TeamSpeak server can have its own IP like 192.168.5.2

My current structure:

Network range of physical network: 192.168.0.0/16
Router IP: 192.168.178.1
Docker host: 192.168.4.1

After I set up the container I ran the following to create a macvlan:

docker network create -d macvlan --subnet=192.168.0.0/16 --ip-range=192.168.5.0/24 --gateway=192.168.178.1 --aux-address="this-host=192.168.5.0" -o parent=eth0 chaos-intra

Then I added the following to my /etc/network/interfaces:

up ip link add vlan-intra link eth0 type macvlan mode bridge
up ip addr add 192.168.5.0/24 dev vlan-intra
up ip link set vlan-intra up
up ip route add 192.168.5.0/24 dev vlan-intra

In Portainer I removed the bridge network and added the TeamSpeak container to vlan-intra. Within the container's network settings I also set the network to vlan-intra and the IP to 192.168.5.1.

I thought that should be enough to make the TeamSpeak server reachable via 192.168.5.1. From the Docker host I am able to ping the container. I am also able to ping the Docker host from my PC using 192.168.4.1 or 192.168.5.0. What I can't do is ping the container from my PC.

When I open the shell of the TeamSpeak container in Portainer I can ping my whole network. Now the funny thing: if I start a cmd and run "ping 192.168.5.2 -t" it always says it's not reachable,

but when I ping my PC once from within the TeamSpeak Portainer console, then my PC can reach 192.168.5.2, but only until the Docker host gets rebooted. What's the cause here, and how can I make it permanently reachable via 192.168.5.2?

Thanks in advance.

Greetings, Caaruzo

HendrikHoetker commented 4 years ago

Tried to implement this on my machine with small adaptations to the naming of the macvlan and the addresses, but got an error from the ip tool:

this works:

ip link add dockernet-host link eno1 type macvlan mode bridge

ip addr add 192.168.178.160/32 dev dockernet-host

ip link set dockernet-host up

here I got the following error:

ip route add 192.168.178.120/27 dev dockernet-host

RTNETLINK answers: Invalid argument

Any thoughts? The commands are executed on Debian: Linux files 4.19.0-0.bpo.8-amd64 #1 SMP Debian 4.19.98-1~bpo9+1 (2020-03-09) x86_64 GNU/Linux

larsks commented 4 years ago

@HendrikHoetker address 192.168.178.160 is not contained in the network 192.168.178.120/27. It looks like you may have some confusion about CIDR notation; for example, 192.168.178.120/27 doesn't make sense as a network. The address and prefix correspond to 192.168.178.96/27 which extends from 192.168.178.96 to 192.168.178.127.
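
If it helps, a quick way to check which network a given address/prefix pair actually belongs to (assuming Python 3 is available):

python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.178.120/27", strict=False))'
# prints 192.168.178.96/27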

HendrikHoetker commented 4 years ago

My fault, indeed. I should have read Wikipedia first. After correcting it to 192.168.178.128/26 it now works as expected. I had forgotten that the network address needs to start at certain values implied by the subnet mask.

flayman commented 4 years ago

Hi. This got me over that last little hurdle with my Synology DiskStation. Thanks very much.

Matt-CyberGuy commented 4 years ago

Hey guys, I'm so close to getting this working, I just can't figure out the routing line. I've got my containers able to talk to the network, but I can't get them to connect to the internet.

Also, I don't know if it matters, but the docker host is also my router/firewall. My network subnet is 10.0.0.0/24. I couldn't seem to get the below working within my network subnet, so I decided to do 192.0.0.0/24 instead; all of my 10.0.0.0 devices can see the Alpine instance below, but it can't talk to the internet. Below is a script I wrote to set this up and test it.


docker network create -d macvlan \
  --subnet=192.0.0.0/24 \
  --gateway=192.0.0.1 \
  -o macvlan_mode=bridge \
  -o parent=eth0 \
  CloudLAN

ip link add CloudLAN link eth0 type macvlan mode bridge
ip addr add 192.0.0.1/24 dev CloudLAN
ifconfig CloudLAN up
ip route add 192.0.0.1 dev CloudLAN

(I know the routing line isn't right; I don't know what to put for the IP/subnet.)

docker run -itd \
  --name Alpine \
  --ip=192.0.0.3 \
  --net=CloudLAN \
  alpine /bin/sh

Ping Tests:

ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100): 56 data bytes
64 bytes from 10.0.0.100: seq=0 ttl=127 time=0.407 ms
64 bytes from 10.0.0.100: seq=1 ttl=127 time=0.443 ms

--- 10.0.0.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.407/0.425/0.443 ms

/ # ping google.com
PING google.com (172.217.4.174): 56 data bytes

--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

Here's a tcpdump from the host monitoring the 192 subnet:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:04:08.038424 IP 192.0.0.3 > media-router-fp2.prod1.media.vip.bf1.yahoo.com: ICMP echo request, id 6400, seq 0, length 64
12:17:23.877258 IP 192.0.0.3 > lax28s01-in-f14.1e100.net: ICMP echo request, id 7168, seq 2, length 64
12:17:23.877335 IP 192.0.0.3 > lax28s01-in-f14.1e100.net: ICMP echo request, id 7168, seq 2, length 64
12:17:27.102316 ARP, Request who-has 192.0.0.1 tell 192.0.0.3, length 28
12:17:27.102370 ARP, Reply 192.0.0.1 is-at f6:45:5d:65:e0:08 (oui Unknown), length 28
12:17:34.638683 IP 192.0.0.3 > lax28s01-in-f14.1e100.net: ICMP echo request, id 7424, seq 0, length 64

I've been working at this non-stop for almost a week trying to figure this out. Any help would be greatly appreciated

shishos commented 4 years ago

Thank you so much for taking the time to write this. It was extremely helpful for me, especially after many hours of looking for help on Docker documentation unsuccessfully.

Now that the containers within the MACVLAN are automatically assigned by DHCP from the configured pool, how can I specify that a certain container will always be assigned with a certain IP address? Can I use --aux-address for this?

I.e.: one of the containers being assigned an address is "edns". I want it to have a static IP address so my primary DNS server can always contact it for container DNS queries.

HendrikHoetker commented 4 years ago

Just configure the static IP as described in the Docker manual when you create the container. Make sure it falls within the ip-range of the macvlan subnet.
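
A minimal compose sketch of that (the service name, network name, and address here are placeholders; the address just has to fall inside the macvlan's configured range):

services:
  edns:
    networks:
      mynet:
        ipv4_address: 192.168.1.200

networks:
  mynet:
    external: true  # the macvlan network created earlier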

larsks commented 4 years ago

Now that the containers within the MACVLAN are automatically assigned by DHCP from the configured pool, how can I specify that a certain container will always be assigned with a certain IP address?

@shishos Are you actually using DHCP to assign addresses to your containers? The mechanism described in this article does not use DHCP for assigning addresses, and instead relies on Docker for address assignment. This means you can use the --ip argument to docker run to set a static address.
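
For example, assuming the macvlan network from the article is named mynet and 192.168.1.199 falls inside its configured range:

docker run --rm -it --net mynet --ip 192.168.1.199 alpine sh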

ztsmith commented 4 years ago

Thanks for the great article. It worked for me, but it only allows access from host -> container and not the other way around. Any ideas on why that might be?

ztsmith commented 4 years ago

I figured out what I was doing wrong. The containers need to access the host using the address you created the shim interface with. Networking is clearly not my strong suit!

larsks commented 4 years ago

@ztsmith glad you figured it out!

douglascacoal commented 4 years ago

How do I configure two networks with two network cards, eth0 and eth1, in the Docker macvlan setup?

ERROR: Error response from daemon: failed to allocate gateway (10.1.1.1): Address already in use

lachlanhunt commented 4 years ago

I considered using this approach, but given that it doesn't persist through reboots and that I wasn't comfortable messing with such low-level networking commands that I don't fully understand, I figured out an alternative solution.

For my use case, I needed to allow a docker container on a macvlan network to be able to access port 5000 on the host with HTTP. Specifically, I had the Traefik proxy on a macvlan network so that it would work properly with both IPv4 and IPv6 exposing ports 80 and 443, running on a Synology NAS. Synology makes it very difficult to reuse host ports 80 and 443 for anything other than its own internal nginx proxy, which is why I needed to use macvlan. Synology exposes its web interface on ports 5000 (HTTP) and 5001 (HTTPS).

I needed Traefik to be able to proxy requests through to host port 5000, which it couldn't do after I moved it to the macvlan network.

I found tecnativa/tcp-proxy which I could run on a normal bridge network, and which would be able to communicate with the host. Traefik is able to communicate with other containers that are on the same bridge network, while also being on the macvlan network. I then just configured tcp-proxy to direct requests to it on port 80 through to 10.0.1.2:5000.

The relevant parts of my docker-compose.yml:

version: "3.6"
services:
  traefik:
    image: traefik:v2.2
    networks:
      vlan_home:
        ipv4_address: "10.0.1.16"
        ipv6_address: "${TRAEFIK_IPV6_ADDRESS}"
      traefik_proxy: {}
    ...
  home:
    image: tecnativa/tcp-proxy
    environment:
      LISTEN: ":80"
      TALK: "10.0.1.2:5000"
    networks:
      - traefik_proxy
    ...
networks:
  traefik_proxy:
    driver: bridge
  vlan_home:
    external: true # created with `docker network create --driver=macvlan ... vlan_home`

Communicating with the host can then be done via the tcp-proxy. In my case, Traefik effectively requests http://home/, which docker maps to the tcp-proxy, and then tcp-proxy forwards the request to the host on port 5000.

In theory, it would also work the other way. If you need to communicate from the host to a container on the macvlan network, then you could expose relevant ports on the tcp-proxy and configure it to forward to your other container as needed.

Finally, the other major advantage of this approach is security: it only exposes the specific host ports that you need, which reduces the attack surface significantly.

shamoon commented 4 years ago

Interesting, but the only problem with the tcp proxy is that you need a container for every port, etc.

An easy way to make this persistent is to just set up a script that runs the above on startup, e.g. https://linuxconfig.org/how-to-run-script-on-startup-on-ubuntu-20-04-focal-fossa-server-desktop
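
On systemd-based distros that could be a small oneshot unit that runs the same ip commands at boot. An untested sketch, using the article's example interface and addresses (swap in your own):

/etc/systemd/system/mynet-shim.service:

[Unit]
Description=macvlan shim interface for Docker macvlan network
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip link add mynet-shim link eth0 type macvlan mode bridge
ExecStart=/sbin/ip addr add 192.168.1.223/32 dev mynet-shim
ExecStart=/sbin/ip link set mynet-shim up
ExecStart=/sbin/ip route add 192.168.1.192/27 dev mynet-shim

[Install]
WantedBy=multi-user.target

Then enable it with systemctl enable mynet-shim.service.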

Thanks for the awesome post!

lachlanhunt commented 4 years ago

@shamoon you wouldn't need a separate container for every port. It supports specifying any number of ports in LISTEN and corresponding servers in TALK environment variables. See the "Multi-proxy mode" section in the tecnativa/tcp-proxy docs.

meatalhead commented 4 years ago

Thank you for a really useful article. Has anyone found a tidy way to persist this on Debian / Raspberry Pi? (Also, this is quite useful: https://access.redhat.com/sites/default/files/attachments/rh_ip_command_cheatsheet_1214_jcs_print.pdf)

agilenut commented 4 years ago

Great post. Looks exactly like what I need.

Trying to make this work on Synology.

docker-compose.yaml:

networks:
  home:
    driver: macvlan
    driver_opts:
      parent: eth1
    ipam:
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1
          ip_range: 192.168.2.90/29
          aux_addresses:
            host: 192.168.2.95

ip link add mynet-shim link eth1 type macvlan mode bridge
ip addr add 192.168.2.95/32 dev mynet-shim
ip link set mynet-shim up
ip route add 192.168.2.90/29 dev mynet-shim

Last command errors: RTNETLINK answers: Invalid argument

Unlike the poster above, I believe my CIDR notation matches the article's. Any help appreciated.

agilenut commented 4 years ago

Great post. Looks exactly like what I need.

Trying to make this work on Synology.

docker-compose.yaml:

networks:
  home:
    driver: macvlan
    driver_opts:
      parent: eth1
    ipam:
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1
          ip_range: 192.168.2.90/29
          aux_addresses:
            host: 192.168.2.95

ip link add mynet-shim link eth1 type macvlan mode bridge
ip addr add 192.168.2.95/32 dev mynet-shim
ip link set mynet-shim up
ip route add 192.168.2.90/29 dev mynet-shim

Last command errors: RTNETLINK answers: Invalid argument

Unlike the poster above, I believe my CIDR notation matches the article's. Any help appreciated.

I changed my route from 192.168.2.90/29 to 192.168.2.88/29. It should be the same range due to the masking, but now it uses the first IP in the range. This made the command succeed, but the host-to-container routing is still not working.

zilexa commented 4 years ago

Your guide did not work for me:

router: 192.168.88.1
LAN network: 192.168.88.0/24
host (Ubuntu 20.04): 192.168.88.10

Docker macvlan:

networks:
  DNS-network:
    driver: macvlan
    driver_opts:
      parent: eno1
    ipam:
      config:
        - subnet: 192.168.88.0/24
          gateway: 192.168.88.1
          ip_range: 192.168.88.96/29
          aux_addresses:
            reserved: 192.168.88.96

I have 3 docker containers, their addresses will be: 192.168.88.98 (Unbound), 192.168.88.99 (PiHole), 192.168.88.100 (Unifi Controller).

Now I add the macvlan on the host. This does not work, the host cannot access any of the 3 docker containers:

sudo ip link add mynet-shim link eno1 type macvlan mode bridge
sudo ip addr add 192.168.88.96/32 dev mynet-shim
sudo ip link set mynet-shim up
sudo ip route add 192.168.88.96/29 dev mynet-shim

This, to my surprise, does not work either; the host cannot access any of the 3 docker containers:

sudo ip link add mynet-shim link eno1 type macvlan mode bridge
sudo ip addr add 192.168.88.0/24 dev mynet-shim
sudo ip link set mynet-shim up
sudo ip route add 192.168.88.96/29 dev mynet-shim

This also does not work:

sudo ip link add mynet-shim link eno1 type macvlan mode bridge
sudo ip addr add 192.168.88.10/24 dev mynet-shim
sudo ip link set mynet-shim up
sudo ip route add 192.168.88.96/29 dev mynet-shim

This works to get access via the host machine, but if I connect remotely via PiVPN I cannot access those 3 docker containers:

sudo ip link add mynet-shim link eno1 type macvlan mode bridge
sudo ip addr add 192.168.88.10/24 dev mynet-shim
sudo ip link set mynet-shim up
sudo ip route add 192.168.88.98 dev mynet-shim
sudo ip route add 192.168.88.99 dev mynet-shim
sudo ip route add 192.168.88.100 dev mynet-shim

With every test I rebooted to ensure I had a clean slate. Help?

zilexa commented 4 years ago

I now have a solution that does not involve connecting the host IP to each docker IP individually:

sudo ip link add mynet-shim link eno1 type macvlan mode bridge
sudo ip addr add 192.168.88.96/32 dev mynet-shim
sudo ip link set mynet-shim up
sudo ip route add 192.168.88.96/29 dev mynet-shim

What is happening here: the addr add command takes the first IP in the range of the Docker macvlan, and then the entire Docker macvlan range is added as a route.

Still, my issue is: I have installed PiVPN (WireGuard) directly on the host. When I connect remotely via PiVPN, I can access my LAN just fine, but not those 3 Docker IP addresses :( WireGuard creates a network interface called wg0.

Should I create a second macvlan, the same as above but now with wg0?

zilexa commented 4 years ago

Unfortunately that is not possible:

sudo ip link add mywg-shim link wg0 type macvlan mode bridge
RTNETLINK answers: Invalid argument

omgitsheaven commented 4 years ago

@larsks Thank you so much for this guide, I can finally sleep in peace. Does anyone know how to make the ip link rules persistent on Debian?

larsks commented 4 years ago

@zilexa I'm sorry you're having problems! I'm not surprised that you can't create a macvlan bridge on top of your wg0 interface; WireGuard doesn't implement any layer 2 (ethernet-level) networking features, while a macvlan interface is a layer 2 interface. I suspect there is a solution, but you've got a complex situation and I would probably have to reproduce it locally to figure one out. I'm not able to do that at the moment.

teamjetpoop commented 4 years ago

May I inquire as to whether it is possible to state

--aux-address 'host=192.168.1.222'

rather than

--aux-address 'host=192.168.1.223'

...since I was looking at a subnet mask cheat sheet (https://dnsmadeeasy.com/support/subnet/) and it seems like .223 is actually a broadcast address. Perhaps .222 would be slightly more correct? Or maybe I am actually misunderstanding this? Please kindly enlighten!

teamjetpoop commented 4 years ago

By the way, thank you thank you for this write-up. It saved my bacon. I really needed my Docker container to commandeer another IP address on the same residential network because, quite simply, 2 daemons cannot both try to grab port 80 on the same stock Docker host; sometimes a little macvlan loving is exactly what's needed. Thank you!

teamjetpoop commented 4 years ago

Have a hypothetical question for you: say you've got a web server on 192.168.1.193 (the first assignable IP address in the 192.168.1.192/27 subnet), and you would like said web server to access a private MySQL DB. Is there a way to do this via Docker Compose? So the idea is, the MariaDB server wouldn't have 192.168.1.194; it would just be private and only accessible to the web server. Is that possible? How would you phrase this in Docker-ism or Docker Compose-ism?

larsks commented 4 years ago

2 daemons cannot both try to grab port 80 on the same stock Docker host

Actually, they can, and I recommend that instead of using macvlan networks. The solution is to assign multiple addresses to your host interface. For example, we can assign both 192.168.1.20 and 192.168.1.30 as additional addresses on eth0:

ip addr add 192.168.1.20/24 dev eth0
ip addr add 192.168.1.30/24 dev eth0

And now we can start two Docker containers, each listening on port 80 on two different addresses:

docker run -p 192.168.1.20:80:80 --name server1 myimage
docker run -p 192.168.1.30:80:80 --name server2 myimage

Have a hypothetical question for you: say you've got a web server on 192.168.1.193 (the first assignable IP address in the 192.168.1.192/27 subnet), and you would like said web server to access a private MySQL DB. Is there a way to do this via Docker Compose? So the idea is, the MariaDB server wouldn't have 192.168.1.194; it would just be private and only accessible to the web server. Is that possible?

If you follow the suggestion in the first part of this comment, it's easy and no different from a typical use of docker compose:

version: '3'

services:
  myapp:
    image: myimage
    ports:
      - 192.168.1.20:80:80

  db:
    image: mysql

There you have an app available on the local network, and a MySQL container that is only available on the Docker network created by this compose file.

101100 commented 4 years ago

Nevermind, I had a false negative. Communication from the container to the host and from the host to the container works with this configuration.


Original message:

I have followed this guide for use with OpenHAB with changes to match my ethernet device and home IP range. Thanks! My new container is able to access the network (I needed to be able to use UDP broadcasts). In addition, the host machine can access the container (so that Nginx can reverse proxy the service). However, the container cannot access the host machine. Is that possible? Here is the compose file and ip commands I ran:

docker-compose.yml:

version: "2.1"
services:
  openhab:
    container_name: openhab
    ...
    networks:
      openhab_vlan:
        ipv4_address: 192.168.2.225

networks:
  openhab_vlan:
    driver: macvlan
    driver_opts:
      parent: eno1
    ipam:
      driver: default
      config:
        - subnet: 192.168.2.0/24
          ip_range: 192.168.2.224/28
          gateway: 192.168.2.2

ip commands executed:

sudo ip link add openhab-shim link eno1 type macvlan mode bridge
sudo ip addr add 192.168.2.226/32 dev openhab-shim
sudo ip link set openhab-shim up
sudo ip route add 192.168.2.224/28 dev openhab-shim

jdeluyck commented 4 years ago

I've been trying to get this to work in an IPv6 context. Probably I'm missing some detail. The containers in question are reachable from outside the host, and the containers can reach other things on the network too.

Macvlan interface:

550: macvlan-lan@vmbr0.134: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:ff:df:12:da:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.xx.254/32 scope global macvlan-lan
       valid_lft forever preferred_lft forever
    inet6 2001:yyyy:xxxx:zz:ffff:ffff:ffff:fffe/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::48ff:dfff:fe12:da57/64 scope link 
       valid_lft forever preferred_lft forever

macvlan network in docker:

 docker network create --ipv6 -d macvlan -o parent=vmbr0.134 \
    --gateway 192.168.xx.1 --subnet 192.168.xx.0/24 \
    --ip-range 192.168.xx.192/26 \
    --aux-address 'host=192.168.xx.254' \
    --gateway 2001:yyyy:xxxx:zz::1  --subnet 2001:yyy:xxxx:zz::/64 \
    --ip-range 2001:yyyy:xxxx:zz:ffff:ffff:ffff:ff80/121 \
    --aux-address 'hostv6=2001:yyyy:xxxx:zz:ffff:ffff:ffff:fffe' \
    macvlan-lan

Any idea what might be the missing piece?

jdeluyck commented 4 years ago

Forgot to add: routing is also in place on the host (and the container)

ip addr add 2001:yyyy:xxx:zz:ffff:ffff:ffff:fffe/128 dev macvlan-lan
ip route add 2001:yyyy:xxxx:zz::2/128 dev macvlan-lan
ip route add 2001:yyyy:xxxx:zz:ffff:ffff:ffff:ff80/121 dev macvlan-lan

imthenachoman commented 3 years ago

How do you go about exposing ports on a container on a different VLAN? Say I want to run a web container that listens on :8000 inside the container, and I want port 80 on the container's VLAN IP to go to the container.

If I was not using macvlan I would do:

docker run -d --rm --name web-test -p 80:8000 crccheck/hello-world

And I can access my server's IP on port 80 from other devices.

With macvlan I am trying:

docker network create \
    -d macvlan \
    --subnet=192.168.50.0/24 \
    --gateway=192.168.50.1 \
    -o parent=eno1.50 \
    vlantest

docker run -it -d --rm \
    --net=vlantest \
    --ip=192.168.50.10 \
    --name test \
    -p 80:8000 \
    crccheck/hello-world

But I can't access it at 192.168.50.10:80.