Closed davidfavor closed 8 years ago
Seems like the most robust way to accomplish this is to...
1) convert host eth2 (base interface) from static to br0.
Change host /etc/network/interfaces + reboot (works).
2) configure container IP/Gateway
lxc config set template-yakkety raw.lxc 'lxc.network.0.ipv4 = 144.217.33.224'
lxc config set template-yakkety raw.lxc 'lxc.network.0.ipv4.gateway = 149.56.27.254'
3) connect container IP link to host br0 which fails...
lxc config set template-yakkety raw.lxc 'lxc.network.0.ipv4.link = br0'
error: Only interface-specific ipv4/ipv6 lxc.network keys are allowed
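For reference, the host-side conversion from step 1 might look like the sketch below. The addresses are taken from the host output quoted later in this thread (149.56.27.129/24 on eth2, gateway 149.56.27.254) and the bridge options assume the ifupdown bridge-utils hooks; treat it as illustrative, not a drop-in file:

```
# /etc/network/interfaces (host): br0 replaces the static eth2 stanza
auto br0
iface br0 inet static
    address 149.56.27.129
    netmask 255.255.255.0
    gateway 149.56.27.254
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
```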
It appears the only outstanding issue to fix this is how to associate container IP address with host br0 interface.
Suggestions?
lxc network device add template-yakkety eth0 nic nictype=bridged parent=br0 name=eth0

cat | lxc config set template-yakkety raw.lxc - << EOF
lxc.network.0.ipv4 = 144.217.33.224/24
lxc.network.0.ipv4.gateway = 149.56.27.254
EOF
Though, note that the preferred way to do this is through your Linux distribution's own configuration mechanism rather than pre-configure things through raw.lxc.
For Ubuntu, that'd be through some cloud-init configuration of some sort, that said, if raw.lxc works for you, that's fine too :)
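As a rough illustration of that cloud-init route, a version 1 network-config sketch for the same static assignment might look like the following. Whether a given image actually consumes this (e.g. via a `user.network-config` key) depends on its cloud-init support, so every key here is an assumption rather than a verified recipe:

```
# Hypothetical cloud-init network-config (version 1 schema)
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          address: 144.217.33.224/32
          gateway: 149.56.27.254
```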
I believe the first problem to resolve is destroying lxdbr0 + unfortunately dpkg-reconfigure no longer works... meaning the networking configuration stage never starts.
dpkg-reconfigure -p medium lxd
Warning: Stopping lxd.service, but it can still be activated by: lxd.socket
Short of purging/reinstalling LXD completely.
I've removed all my containers.
Let me know how to reconfigure LXD to no longer use lxdbr0.
Thanks.
lxc profile edit default
lxc profile edit default == only changes a text file.
Still running...
dnsmasq -u root --strict-order --bind-interfaces --pid-file=/var/lib/lxd/networks/lxdbr0/dnsmasq.pid --except-interface=lo --interface=lxdbr0 --listen-address=10.119.167.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/lib/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/lib/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.119.167.2,10.119.167.254 --listen-address=fd42:88bd:c65f:d720::1 --dhcp-range fd42:88bd:c65f:d720::2,fd42:88bd:c65f:d720:ffff:ffff:ffff:ffff,ra-stateless,ra-names -s lxd -S /lxd/
Also ifconfig still shows lxdbr0...
service lxd/lxc-containers restart has no effect on dnsmasq or ifconfig listing.
Let me know how to untangle dnsmasq from handling lxdbr0 + how to delete lxdbr0 so it's completely gone... while leaving LXD intact/running.
Thanks.
Ah right, to destroy the bridge, you'll want "lxc network delete lxdbr0"
Just so the exact process is documented.
If you have currently defined containers + wish to remove the associated bridge interface...
1) You must first change all active profiles which reference the bridge. In my case, I'm only using the default profile... so...
lxc profile edit default -> change lxdbr0 to br0
2) Destroy lxdbr0
lxc network delete lxdbr0
At this point... the lxdbr0 interface is destroyed (pruned from ifconfig output + all networking tables) + also dnsmasq terminates.
Whew... stgraber == my hero. 👍
Cool, sounds like you got it all sorted out. Closing.
My version of lxc has no "lxc device add" syntax.
Let me know if this looks correct...
lxc network attach br0 yakkety eth0
Which looks to mean... attach the host's br0 to the yakkety container's eth0.
@davidfavor it's "lxc config device add"
And yes, your "lxc network attach" is fine, you need LXD 2.3 or higher for that, but I'm assuming that's what you have.
Using LXD-2.4.1.
Hum... After issuing the above command, now...
lxc config get --verbose yakkety raw.lxc
error: Failed to load raw.lxc
So can't get or set raw.lxc anymore.
@davidfavor "lxc config show yakkety"
net12 # lxc config show yakkety
error: Failed to load raw.lxc
Ugh... inotifywait shows that raw.lxc seems to live in lxd.db, so this data lives in sqlite3 land.
ah right, you wouldn't be able to show the container config either. It's a bug I fixed a few days ago, it shouldn't have let you set an invalid raw.lxc to begin with...
sudo sqlite3 /var/lib/lxd/lxd.db "SELECT value FROM containers_config WHERE key='raw.lxc';"
net12 # sqlite3 /var/lib/lxd/lxd.db .dump | grep raw
INSERT INTO "containers_config" VALUES(310,2,'raw.lxc','lxc.network.0.ipv4 = 144.217.33.224/24\nlxc.network.0.ipv4.gateway = 149.56.27.254\n
net12 # sqlite3 /var/lib/lxd/lxd.db "SELECT value FROM containers_config WHERE key='raw.lxc';"
lxc.network.0.ipv4 = 144.217.33.224/24\nlxc.network.0.ipv4.gateway = 149.56.27.254\n
Looks correct.
Let me know if I can just delete this line or if that will screw up something else.
ok, that looks fine, so long as the container has a network device attached to it, otherwise those two entries will be invalid since they apply to a network device that's not defined...
Does running:
lxc network attach-profile br0 default eth0
Fix the problem? That should add the br0 bridge to your container by adding it to the profile it depends on, which should avoid the error you've been getting so far.
net12 # lxc network attach-profile br0 default eth0
error: device already exists
Profile seems correct...
net12 # lxc profile show default
name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
usedby:
hmm, alright, well, let's just fix the DB then:
sudo sqlite3 /var/lib/lxd/lxd.db "DELETE FROM containers_config WHERE key='raw.lxc';"
Then paste the output of:
lxc config show yakkety --expanded
net12 # lxc config show yakkety --expanded
name: yakkety
profiles:
- default
config:
volatile.base_image: 687c1a6a81e8ce42114796f162b4b872e53c4cf5821d295f8a9eb1c0fe696389
volatile.eth0.hwaddr: 00:16:3e:36:00:18
volatile.eth0.name: eth0
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
volatile.root.hwaddr: 00:16:3e:d6:88:e0
volatile.root.name: eth1
devices:
eth0:
nictype: bridged
parent: br0
type: nic
root:
path: /
type: disk
ephemeral: false
Hum... If you know the Markdown to use to format this in pre tags, let me know.
Hmm, so that all looks correct.
What do you have in /var/log/lxd/yakkety/lxc.log*? With a bit of luck, one of the log files will tell you what the parser thought was wrong with your raw.lxc.
My guess is that it may be upset about the gateway being outside of your IP's mask, hopefully it logs that kind of problem.
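For context on that guess: a gateway that sits outside the address's netmask is normally expressed in an ifupdown file with explicit point-to-point routes, along these lines (same addresses as above; a sketch, not verified on this host):

```
auto eth0
iface eth0 inet static
    address 144.217.33.224
    netmask 255.255.255.255
    post-up ip route add 149.56.27.254 dev eth0
    post-up ip route add default via 149.56.27.254 dev eth0
```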
Okay...
After the sqlite3 deletion...
echo -e "lxc.network.0.ipv4 = 144.217.33.224\nlxc.network.0.ipv4.gateway = 149.56.27.254\n" | lxc config set yakkety raw.lxc -
lxc start yakkety -> works + IP assigned... but IP isn't pingable.
Maybe I have to do some of the network attach magic again.
Maybe this?
lxc network attach-profile br0 default yakkety eth0
"ip -4 route show" and "ip -4 addr show" in the container would probably help figure out what's going on.
net12 # ip -4 route show
default via 149.56.27.254 dev br0 onlink
149.56.27.0/24 dev br0 proto kernel scope link src 149.56.27.129
net12 # ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 149.56.27.129/24 brd 149.56.27.255 scope global br0
       valid_lft forever preferred_lft forever
I think you missed the "in the container" part :)
Maybe this is the problem.
Seems like 28: eth0@if29 is wrong.
net12 # lxc exec yakkety -- ip -4 route show
default via 149.56.27.254 dev eth0
149.56.27.254 dev eth0 scope link
net12 # lxc exec yakkety -- ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
28: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link-netnsid 0
    inet 144.217.33.224/0 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Seems to match what you requested LXC to do.
Now the fact that there is no route from your host to the container's subnet explains why the ping wouldn't come back.
"ip -4 route add 144.217.33.0/24 dev br0" on the host would probably fix connectivity between the host and container at least, no idea about external connectivity since that setup looks a bit odd to me.
Does the container's interfaces file look right to you?
net12 # lxc exec yakkety -- cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
Well, it contains what I'd expect to see in there. Now that's almost certainly wrong for your setup :)
That's what I was thinking... so... tell me what seems right to you for interfaces?
Anyway, your whole setup looks a bit dodgy to me, and since it's at OVH and I also have a server there, I can tell you how I'm doing it; that should help :)
auto eth0:1
iface eth0:1 inet static
address 144.217.33.224
netmask 255.255.255.255
ip -4 route add 144.217.33.224 via 10.x.x.x dev lxdbr0
ip -4 route add 144.217.33.224/27 dev lxdbr0
That should get things to work the way you want. You can also then spawn as many containers as you want without that config file and they'll get connectivity on a private IP, push the file and reboot and they'll have a routed public IP too.
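The "push the file" step above could be scripted along these lines: write the stanza locally, then (when the lxc client is available) push it into the container and restart networking. The file and container names here are examples, not fixed conventions:

```shell
# Write the static-IP stanza to a local file first.
cat > eth0-static.cfg << 'EOF'
auto eth0:1
iface eth0:1 inet static
    address 144.217.33.224
    netmask 255.255.255.255
EOF

# Only attempt the push when an lxc client is actually installed.
if command -v lxc >/dev/null 2>&1; then
    lxc file push eth0-static.cfg yakkety/etc/network/interfaces.d/eth0-static.cfg
    lxc exec yakkety -- /etc/init.d/networking restart
fi
```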
Above you say my setup looks odd.
I'm only attached to having this work + so whatever setup works is what I'm after, so...
Let me know how to un-oddify (word point) my setup.
Public IP range I'm working on for this machine is... 144.217.33.224/27 (32 contiguous IPs)
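As a quick sanity check of that block size: a /27 mask leaves 32 - 27 = 5 host bits, i.e. 2^5 = 32 addresses, running from .224 through .255:

```shell
# Arithmetic check of 144.217.33.224/27 (pure shell, nothing LXD-specific).
prefix=27
count=$(( 1 << (32 - prefix) ))   # 2^(32-27) = 32 addresses
first=224
last=$(( first + count - 1 ))     # 224 + 31 = 255
echo "addresses: ${count}, range: 144.217.33.${first}-144.217.33.${last}"
```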
As for that /etc/network/interfaces: it's shipped by the distribution and won't change based on your LXC settings. You can configure all you want through raw.lxc, but unless you modify /etc/network/interfaces, Ubuntu will still try to do DHCP on it.
And yes, OVH is dodgy in general + their perks outweigh their problems. So first off I'm going to go through your suggestions above + do exactly what you say.
Will take a few minutes.
So just to make sure I understand...
First step is to revert host /etc/network/interfaces, back to original OVH version, so my base interface is back to eth2, rather than br0 bridging to eth2. Yes?
yep
Host now shows...
net12 # ifconfig
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 149.56.27.129 netmask 255.255.255.0 broadcast 149.56.27.255
inet6 2607:5300:61:c81:: prefixlen 64 scopeid 0x0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
Since I already have containers... to do the following step....
Set up a normal independent LXD bridge with dnsmasq providing DHCP and DNS on it... seems like this is required...
lxc network create lxdbr0
lxc profile edit default -> change br0 to lxdbr0
Yes?
Yep, that should be fine
Great.
Now for next step - Let the container DHCP normally.
echo -e "" | lxc config set yakkety raw.lxc -
Yes?
lxc config unset yakkety raw.lxc
net12 # lxc start yakkety
error: Missing parent 'br0' for nic 'eth0'
Because... of previous...
lxc network device add yakkety eth0 nic nictype=bridged parent=br0 name=eth0
What's the correct way to reverse this "device add"?
lxc config device remove yakkety eth0
Great. Now container starts/stops as expected.
Now for step - Push a file to /etc/network/interfaces.d/eth0-static.cfg containing...
On this machine, with connection to eth2 + eth3, rather than eth0 + eth1...
/etc/network/interfaces.d/eth2-static.cfg
auto eth2:1
iface eth2:1 inet static
address 144.217.33.224
netmask 255.255.255.255
Then issue - /etc/init.d/networking restart
Yes?
Nope, that file is meant to go INSIDE the container, not on the host.
Okay, moving to container...
Issue description
Goal is to have LXD containers with static IPs which can communicate with the host + other containers.
Steps to reproduce
Simplest approach seems to be setting LXD_CONFILE in /etc/default/lxd-bridge to a file of container,IP pairs + Ubuntu 16.10 seems to have removed this file.
I have 100s of LXC container,IP pairs to port to LXD + prefer a solution that avoids the old iptables NAT rule approach.
None of the https://github.com/lxc/lxd/issues/2083 approaches seem to produce useful results.
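Porting hundreds of container,IP pairs lends itself to a small loop: generate a static stanza per container from a list, then push each into its container. This is a hedged sketch only; `pairs.csv`, the per-container file names, and the interfaces.d path are assumptions, not LXD conventions:

```shell
# Sample input: one "container,ip" pair per line (hypothetical data).
printf 'yakkety,144.217.33.224\nxenial,144.217.33.225\n' > pairs.csv

while IFS=, read -r name ip; do
  # Generate a static-IP stanza for this container.
  cat > "${name}-eth0.cfg" << EOF
auto eth0:1
iface eth0:1 inet static
    address ${ip}
    netmask 255.255.255.255
EOF
  # Push into the container only when an lxc client is present.
  if command -v lxc >/dev/null 2>&1; then
    lxc file push "${name}-eth0.cfg" "${name}/etc/network/interfaces.d/eth0-static.cfg"
  fi
done < pairs.csv
```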
The
echo -e "lxc.network.0.ipv4 = 144.217.33.224\nlxc.network.0.ipv4.gateway = 149.56.27.254\n" | lxc config set template-yakkety raw.lxc -
comes close, as my test container does end up with the correct IP assigned.
Maybe this is the correct approach, along with setting up the host base interface (eth2) in my case, to use br0, rather than eth2 + somehow bridging lxdbr0 to br0.
Suggestions appreciated, as all the Ubuntu docs seem wrong + the LXD 2.0 Introduction series seems to be missing basic networking examples for large-scale LXD deployments.
Once I have a working approach, I'll publish all the steps back here, so others can accomplish this more easily.
Thanks.