canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

LXD Static IP configuration - clear + working documentation seems scarce #2534

Closed davidfavor closed 8 years ago

davidfavor commented 8 years ago


Required information

The goal is to have LXD containers with static IPs which can communicate with the host + other containers.

Steps to reproduce

The simplest approach seems to be pointing LXD_CONFILE in /etc/default/lxd-bridge at a file of container,IP pairs, but Ubuntu 16.10 seems to have removed this file.

I have 100s of LXC container,IP pairs to port to LXD + prefer a solution that avoids the old iptables NAT rule approach.
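For reference, the removed mechanism handed dnsmasq an extra conf file: LXD_CONFILE in /etc/default/lxd-bridge pointed at a file of dhcp-host entries mapping container names to fixed IPs. A sketch with hypothetical names and addresses (written to a local file here for illustration):

```shell
# Hypothetical container names and IPs; on a 16.04-era host this file would
# be referenced by LXD_CONFILE in /etc/default/lxd-bridge and handed to
# dnsmasq, one dhcp-host entry per container.
cat > lxd-dnsmasq.conf <<'EOF'
dhcp-host=web1,10.0.119.10
dhcp-host=web2,10.0.119.11
EOF
```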

None of the https://github.com/lxc/lxd/issues/2083 approaches seem to produce useful results.

The

echo -e "lxc.network.0.ipv4 = 144.217.33.224\nlxc.network.0.ipv4.gateway = 149.56.27.254\n" | lxc config set template-yakkety raw.lxc -

comes close, as my test container does end up with the correct IP assigned.

Maybe this is the correct approach, along with setting up the host base interface (eth2, in my case) to use br0 rather than eth2, + somehow bridging lxdbr0 to br0.

Suggestions appreciated, as all the Ubuntu docs seem wrong + the LXD 2.0 introduction series seems to be missing basic networking examples for large-scale LXD deployments.

Once I have a working approach, I'll publish all steps back here, so others can accomplish this more easily.

Thanks.

stgraber commented 8 years ago

So write it somewhere on disk, then do "lxc file push /path/to/file yakkety/etc/network/interfaces.d/eth0-static.cfg", then "lxc restart yakkety", and the container should reboot and grab that IP.

davidfavor commented 8 years ago

net12 # cat yakkety.net.config.txt

auto eth0:1
iface eth0:1 inet static
    address 144.217.33.224
    netmask 255.255.255.255

net12 # lxc file push yakkety.net.config.txt yakkety/etc/network/interfaces.d/eth0-static.cfg

net12 # lxc start yakkety

net12 # lxc list

+---------+---------+--------------------+---------------------------------------------+------------+-----------+
|  NAME   |  STATE  |        IPV4        |                    IPV6                     |    TYPE    | SNAPSHOTS |
+---------+---------+--------------------+---------------------------------------------+------------+-----------+
| yakkety | RUNNING | 10.0.119.85 (eth0) | fd42:9a2b:d62f:7e98:216:3eff:fe36:18 (eth0) | PERSISTENT | 0         |
+---------+---------+--------------------+---------------------------------------------+------------+-----------+
stgraber commented 8 years ago

Hmm, so that got ignored somehow, weird

davidfavor commented 8 years ago

Hum...

I think the container's /etc/network/interfaces has to include a source statement pulling in the /etc/network/interfaces.d/* files.

Let me know how you do this.

davidfavor commented 8 years ago

Change container's /etc/network/interfaces...

auto eth0
iface eth0 inet dhcp

to have as its last line...

source /etc/network/interfaces.d/*.cfg

Sound right? Yes?
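The edit above can be sketched as an idempotent one-liner. Shown here against a local stand-in file; inside the container the target is /etc/network/interfaces:

```shell
# Local stand-in for the container's /etc/network/interfaces:
cat > interfaces.sample <<'EOF'
auto eth0
iface eth0 inet dhcp
EOF

# Append the source line only if it isn't already present:
grep -qxF 'source /etc/network/interfaces.d/*.cfg' interfaces.sample ||
  echo 'source /etc/network/interfaces.d/*.cfg' >> interfaces.sample
```

Inside a running container the same edit could be applied via lxc exec, or the fixed file pushed back with lxc file push as shown earlier.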

davidfavor commented 8 years ago

net12 # lxc restart yakkety

Now produces...

net12 # lxc list

+---------+---------+--------------------------------+---------------------------------------------+------------+-----------+
|  NAME   |  STATE  |              IPV4              |                    IPV6                     |    TYPE    | SNAPSHOTS |
+---------+---------+--------------------------------+---------------------------------------------+------------+-----------+
| yakkety | RUNNING | 10.0.119.85 (eth0)             | fd42:9a2b:d62f:7e98:216:3eff:fe36:18 (eth0) | PERSISTENT | 0         |
|         |         | 144.217.33.224 (eth0)          |                                             |            |           |
+---------+---------+--------------------------------+---------------------------------------------+------------+-----------+
davidfavor commented 8 years ago

Then...

ip -4 route add 144.217.33.0/24 dev lxdbr0

To make 144.217.33.0/27 addresses pingable...

stgraber commented 8 years ago

Oh right, I thought cloud-init would add that source statement automatically, not sure why you're missing it.

stgraber commented 8 years ago

Hmm, that ip route is a bit wrong, you want your /27 instead of that /24.

So:

ip -4 route add 144.217.33.224/27 dev lxdbr0
davidfavor commented 8 years ago

Whoa! Appears...

Host can ping container.

Container can ping host.

Container IP pingable from outside machine.

Geez! Might be working.

stgraber commented 8 years ago

(The /24 will obviously work, but you're also accidentally routing a bunch of IPs that don't belong to you :))

davidfavor commented 8 years ago

Got it... So for manual/static routes, there will be one static route/IP... of the form...

ip -4 route add 144.217.33.$addr/27 dev lxdbr0

Where $addr is each active IP address.

Yes?

stgraber commented 8 years ago

nope, just "ip -4 route add 144.217.33.0/27 dev lxdbr0" will cover your whole subnet, no need to do it per container.

davidfavor commented 8 years ago

Right, .0 rather than .$addr will get them all.

Got it.

Dude! You're a life saver!

I'm hosting 100s of high-traffic client sites + would like to start converting them all from LXC to LXD.

Thanks for your huge investment of time today.

After I roll all this info into a simple step-by-step guide, I'll drop the link here.

stgraber commented 8 years ago

Oh, btw, my command earlier was wrong, you want:

ip -4 route add 144.217.33.224/27 dev lxdbr0

Since .224 is the first address. If you use .0/27, then it will cover 144.217.33.0 to 144.217.33.31, which is not what you want :)
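To double-check the arithmetic: a /27 is 32 addresses, so the block containing .224 runs from .224 to .255. A quick sanity check (assuming python3 is available on the host):

```shell
# First and last addresses of each candidate /27:
python3 -c "import ipaddress; n = ipaddress.ip_network('144.217.33.224/27'); print(n[0], '-', n[-1])"
# → 144.217.33.224 - 144.217.33.255
python3 -c "import ipaddress; n = ipaddress.ip_network('144.217.33.0/27'); print(n[0], '-', n[-1])"
# → 144.217.33.0 - 144.217.33.31
```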

davidfavor commented 8 years ago

Right... BASE-IP/27

Thanks.

davidfavor commented 8 years ago

Testing shows all packet flow working as expected.

One final question about runtime IPs.

For each container to support a static/public IP, each container will have two IPs: one internal + one external (static/public).

Let me know if this looks correct to you, based on your OVH setup.

Thanks.

net12 # lxc list

+---------+---------+--------------------------------+---------------------------------------------+------------+-----------+
|  NAME   |  STATE  |              IPV4              |                    IPV6                     |    TYPE    | SNAPSHOTS |
+---------+---------+--------------------------------+---------------------------------------------+------------+-----------+
| yakkety | RUNNING | 10.0.119.85 (eth0)             | fd42:9a2b:d62f:7e98:216:3eff:fe36:18 (eth0) | PERSISTENT | 0         |
|         |         | 144.217.33.224 (eth0)          |                                             |            |           |
+---------+---------+--------------------------------+---------------------------------------------+------------+-----------+
davidfavor commented 7 years ago

Looks like you already answered my question above... where you said...

Confirm it's got both a private (10.x.x.x) address and its public IP listed in "lxc list"

davidfavor commented 7 years ago

Problem source identified.

Request for best way to resolve.

At boot time, /etc/rc.local runs + executes the following command to route a public IP range to the LXD bridge interface.

ip -4 route add 144.217.33.224/27 dev lxdbr0

During a host-level upgrade of LXD, this additional route is somehow lost.

Running the above command on the command line fixes the problem.

The question is: what's the best way to associate the above command with lxdbr0 interface stops/restarts?

For physical interfaces, host level /etc/network/interfaces is where post-up commands are added.

Someone let me know the correct way to associate a post-up command with the LXD interface.

Thanks.

stgraber commented 7 years ago

Hmm, so I think the best way to deal with this would be through a new LXD network config key.

Until then, you could define a systemd unit which starts "After=lxd.service" and runs the command you need. That way, whenever the daemon is started/restarted, your command is run again.
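A minimal sketch of such a unit. The unit name and the PartOf/WantedBy wiring are assumptions, and "route replace" is used instead of "route add" so re-running it is harmless:

```shell
# Write the unit locally, then install it on the host (see comments below).
cat > lxdbr0-routes.service <<'EOF'
[Unit]
Description=Route public /27 to lxdbr0
After=lxd.service
PartOf=lxd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip -4 route replace 144.217.33.224/27 dev lxdbr0

[Install]
WantedBy=lxd.service
EOF

# On the host (as root):
#   cp lxdbr0-routes.service /etc/systemd/system/
#   systemctl daemon-reload
#   systemctl enable --now lxdbr0-routes.service
```

PartOf= makes the unit stop and start together with lxd.service, so the route is re-added on daemon restarts, not just at boot.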

spyderdyne commented 7 years ago

It seems like every 6 months you are invalidating your own documentation and not updating it afterward. I am constantly following instructions in your insights pages only to find important commands and config files completely missing. I will read through this to re-learn how to set up LXD on 16.10 with a bridge to the LAN for my controller node, since the instructions I followed 2 weeks ago for 16.04 no longer work and the /etc/default/lxd-bridge file has been completely removed now.

My current state is an Ubuntu 16.10 server with a br0 bridge and static IP assignment, a MaaS rack controller running as a local LXD instance with a static IP assignment, and the LXD bridge taking over the machine's br0, which is configured in /etc/network/interfaces with a static class C RFC1918 address, replacing it at boot time with dynamic class A RFC1918 addresses.

I am reserving the first 10 IP addresses in MaaS for core services (MaaS rack and region controllers, Juju bootstrap, Cloudify, and network devices) and providing a dynamic range for OpenStack core services via the rack controller DHCP service on this LAN.

I should probably not need multiple LXD network config sets just to run the current version of OpenStack without backports. I can't help but feel that this project would benefit from more discipline about what can be ripped out/replaced versus what should be immutable, to prevent users from having to redo all their documentation and config automations every 6 months or so.

CarltonSemple commented 7 years ago

@stgraber Is there something I'm missing from https://github.com/lxc/lxd/issues/2534#issuecomment-255199890 ?

I verified that I can attach any IP from the subnet to my host using ip addr add 169.53.244.xx/27 broadcast 169.53.244.63 dev eth1, and then removed them.
I'm using LXD version 2.11, and followed the instructions at https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/ to create a network:

lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
lxc network attach-profile testbr0 default eth0

Then I push a file to the container's /etc/network/interfaces.d/eth0-static.cfg containing

auto eth0:1
iface eth0:1 inet static
    address 169.53.244.36
    netmask 255.255.255.255
    broadcast 169.53.244.63
    gateway 169.53.244.33

and then I add the route to the LXD bridge, after which I restart my container:

ip -4 route add 169.53.244.32/27 dev testbr0

169.53.244.32 is the first IP in the subnet

stgraber commented 7 years ago

@CarltonSemple that should be okay, though note that the route may get flushed when LXD restarts (during upgrade or such). That's why I added ipv4.routes a few releases back.
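With the ipv4.routes key mentioned here (assuming an LXD recent enough to have it), the route can be owned by LXD itself so it is re-added whenever the daemon brings the bridge up. Illustrative only, since it needs a live LXD daemon:

```shell
# Have LXD persist the extra route on its managed bridge:
lxc network set lxdbr0 ipv4.routes 144.217.33.224/27

# Confirm the key is set:
lxc network get lxdbr0 ipv4.routes
```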

CarltonSemple commented 7 years ago

@stgraber Ah, okay. I just followed https://github.com/lxc/lxd/issues/2701. However, I still can't reach my containers from outside of the host. Is there some ip addr add command as well?

stgraber commented 7 years ago

Nope, you should only have the route setup on the host, not have the address defined there.

A firewall could be blocking incoming packets for those IPs, or the return traffic from those containers may be getting NATed somewhere, breaking things.

The best way to debug these kinds of issues is to run tcpdump inside the container and see what's making it there.
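For example, something along these lines (container name taken from this thread; both commands need root and a live container, so they are illustrative only):

```shell
# Inside the container: watch for ICMP arriving on its interface.
lxc exec yakkety -- tcpdump -ni eth0 icmp

# In another shell on the host: watch the bridge to see whether the
# packets make it that far.
tcpdump -ni lxdbr0 icmp
```

Comparing the two captures shows whether packets are dropped before the bridge, between the bridge and the container, or on the return path.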

CarltonSemple commented 7 years ago

@stgraber I'm wondering if it could be the different infrastructure. I'm using Softlayer, with a "routed to VLAN" portable IP block

CarltonSemple commented 7 years ago

With a tcpdump inside the container, nothing appears to reach the container. I see the correct ICMP echo requests only when I ping the public IP from the host VM. I did see

16:39:54.584808 IP gateway > s236: ICMP time exceeded in-transit, length 68

but it has only shown up twice.

Conjohnsonjohnson commented 7 years ago

@davidfavor "After I roll all this info into a simple step-by-step guide, I'll drop the link here."

Is this step-by-step guide available? I suggest this issue is still relevant. I'm having no success assigning an IP address of my own choosing to a container. I have several questions that I'm not able to answer for myself from this interesting discussion:

1. Why is DHCP required for this use case? The title is "LXD Static IP configuration".
2. What network configuration is required on the host?
3. What network configuration is required via the lxc command line?
4. What network configuration is required within the container?

I'm hoping to find answers to these questions in generic terms, not Ubuntu-centric, so the information is equally valid across the spectrum of linux hosts.

davidfavor commented 7 years ago


Rather than answering your questions individually...

http://links.davidfavor.com/wf/click?upn=npdP0-2FMHcGNgMeleDP-2B5CwDT8yGN7SPfWqbXTvyBZ-2Bok79XNVC27qhCKP5ZEg9iC_s5UH8bhnBaopYjthcNSRcaC8fv1GYJ42u9dW4Ymps68fQF2Bwu68aaBpwvP0gTDdFOPj1wREGu0weJoJOtBs0DojgwEcKC3qXEKGq3tJBtD-2FKlv05s-2FnWFtQ47rf1ZMG4XKLl5fVSOCAb96N8k89x5zCtrYw7RnR8a-2FQ1FES8v-2Br5N-2BCc82g1i0r2GexAWLjKqi-2BaArxsFXi9sgWiPPyKle67P4zKv-2B0nO0SDMi4DHp0QZiYdQxfELvfuI-2F3DfAethC-2B-2BXxVv8bByG-2BsU1fmrMGHLSYuILV22-2FW3kdfgsOUePkpD2ke5zPFHMitupNY42hJkm8dVCTH4CWoB3zs3SQ-3D-3D provides clear coverage of setting up bulletproof LXD container networking.

itisnotdone commented 7 years ago

http://cloudinit.readthedocs.io/en/latest/topics/network-config.html#network-configuration-sources may help if you prefer to work with cloud-init. If you also use MAAS, you can create containers with cloud-init network configuration and reserve the IPs with their hostnames using the MAAS API, so you can access them via the domain (or FQDN) provided by MAAS.