More information about the target system.
$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
$ ls -la /etc/network/interfaces.d/
total 8
drwxr-xr-x 2 root root 4096 Apr 16 2015 .
drwxr-xr-x 7 root root 4096 Aug 20 00:42 ..
$ ip addr
1: lo: ...
2: eth0: ...
3: lxcbr0: ...
4: vethKWL1L8: ...
I have no idea what vethKWL1L8 is and why /etc/network/interfaces is empty.
Step 1: Create a bridge on your host; follow your distribution's guide for this. Here is an example configuration from my machine. I'm using Ubuntu 15.10.
sudo vim /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
address 172.31.31.35
netmask 255.255.255.0
gateway 172.31.31.2
dns-nameservers 8.8.8.8 8.8.4.4
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
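If you go this route, you will most likely also need the bridge-utils package (for the bridge_* options) and then need to bring the new bridge up. This is a minimal sketch assuming the Ubuntu/Debian ifupdown tooling used above; it is not part of the original instructions:
sudo apt-get install bridge-utils
# bring the bridge up (a reboot or 'sudo systemctl restart networking' also works)
sudo ifup br0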
Step 2: Create a new profile, or edit the default profile.
lxc profile create bridged
Step 3: Edit the profile and add your bridge to it.
lxc profile edit bridged
name: bridged
config: {}
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
Step 4: While launching new containers you can use this profile or you can apply it to an existing container.
lxc launch trusty -p bridged newcontainer
or
lxc profile apply containername bridged
Restart the container if you're applying it to an existing container.
Step 5: You'll need to assign a static IP to your container if you don't have DHCP on your network.
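For reference, a minimal sketch of what Step 5 could look like inside an Ubuntu/Debian guest using ifupdown; the addresses are purely illustrative and only chosen to match the example subnet from Step 1:
# /etc/network/interfaces inside the container (example values)
auto eth0
iface eth0 inet static
    address 172.31.31.50
    netmask 255.255.255.0
    gateway 172.31.31.2
    dns-nameservers 8.8.8.8 8.8.4.4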
Step 1: Create a bridge on your host...
My /etc/network/interfaces is empty, but I already have eth0 and lxcbr0 configured. Where does this happen? What are the other configuration differences between my current lxcbr0 and the proposed br0?
The address for the host eth0 is handled dynamically by the local DHCP server, and I want the same for the guest.
Step 3: Edit the profile and add your bridge to it.
This changes the meaning of eth0 on the guest, and I need a new interface eth1 on the guest that is LAN-attached. I edited the issue to say that I have DHCP running on the network. Note also that the host eth0 is already the NAT interface for lxcbr0 (if I understand correctly, host eth0 is already bridged to lxcbr0) and it should also be the LAN interface.
On Tue, Nov 24, 2015 at 01:45:57AM -0800, anatoly techtonik wrote:
Step 1: Create a bridge on your host...
My /etc/network/interfaces is empty, but I already have eth0 and lxcbr0 configured. Where does this happen?
Is there /etc/network/interfaces.d/eth0? is network-manager running?
lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf or /lib/systemd/system/lxc-net.service)
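For context, the lxcbr0 settings that job reads normally live in /etc/default/lxc-net. Roughly, and only as an illustration (these are the usual Ubuntu defaults; your file may differ):
# /etc/default/lxc-net (typical defaults)
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"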
Is there /etc/network/interfaces.d/eth0?
No, and /etc/network/interfaces.d is empty.
is network-manager running?
Probably, because the host's internet connection is up. How do I check?
lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf or /lib/systemd/system/lxc-net.service)
I see a reference to lxc-net start, but I don't see where the configuration for lxcbr0 is.
@stgraber https://linuxcontainers.org/lxd/news/#lxd-024-release-announcement-8th-of-december-2015 says we now have macvlan available.
Wouldn't this solve this issue?
Probably. Still need to figure out how to use it. My use case:
@stgraber Is there some documentation on the new macvlan functionality?
Well, the various fields are documented in specs/configuration.md
It's basically:
type=nic
nictype=macvlan
parent=eth0
Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.
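Put differently, a quick sketch of wiring those fields into a profile with the device subcommand; the profile name here is just an example:
lxc profile create macvlan
lxc profile device add macvlan eth0 nic nictype=macvlan parent=eth0
lxc profile apply containername macvlan   # or pass -p macvlan at launch time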
Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.
Thanks a lot. That clarifies it for me.
@techtonik Considering all pieces, I would still go with the routing (DHCP) solution.
Yep. It will be a pain if it doesn't work through WiFi.
Seems like this issue is settled, @techtonik ?
@srkunze, not really. So far I see no clear recipe in this thread. The answer needs to be summarized, ideally with some pictures.
The answer needs to be summarized, ideally with some pictures.
I have no idea how to do this for all types of routers. UI changes too quickly and all routers/DHCP servers can be configured differently.
@stgraber Maybe there is another, even easier solution?
@srkunze summarizing up to the point where it is clear why you need a router, and where, is sufficient for now. But note that there are three possible cases:
I am actually thinking about the third variant: why not use the already-opened channel to move traffic to and from the running container? With netcat, for example.
I would like to chime in, because every time I tried to use LXC/LXD I ran into this problem without a clear and "simple" solution.
@techtonik What's wrong with plain old routing? At least it solves this issue: accessing a container from the LAN. I don't see much use of port forwarding right now. :)
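To make the routing idea concrete: it amounts to telling the LAN router (or other LAN machines) that the container subnet is reachable via the LXD host. A hedged sketch, assuming the default 10.0.3.0/24 lxcbr0 subnet and a host LAN address of 192.168.0.2 (both illustrative):
# on the LAN router or another LAN machine
ip route add 10.0.3.0/24 via 192.168.0.2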
@Annakan Don't you think this is the other way round? This issue here is about how to access a container FROM the LAN. Given that the routing of the LAN is properly configured, that just works.
Thanks for your answer.
A computer crash made me lose my long answer, so you will be spared it ;)
I don't think it is the other way round, since that would mean managing on the host something that concerns the container. You can't use a container without doing at least some port or IP mapping, and that is something you have to do with the IP of the container. Thus you have to retrieve that IP and expose it on the host, a sure sign that it is something that should be managed by the container manager and not manually on the host.
Or else you have to keep tabs, manually on the host, on the rules you create for the container. You have to update and delete them, and that means you have to create complicated mechanisms to keep them in sync.
Container migration is also complicated, because you have to find a way to reapply the rules on the target host. On the other hand, if the container profile contains the network model (like: I use my host's DHCP, or I expose ports X and Y to my host or through my host (different situations)), then it is simple to migrate containers, activate them, or shut them down.
Ipchains, as far as I know, does not offer a way to tag or group rules, which makes this even more complicated and reliant on the IP of the containers and the "exact identity" of rules to manage them. It is, honestly, a mess of a packet-filter language.
Besides, as far as I was able to see, the official documentation does not offer a template of such rules, and the ones I googled seemed really awkward, with strange uses of "nat". I confess I am not an ipchains expert, but they did not work for me in a reliable way.
Isolated containers, complex service discovery and transfer of rules, total independence from the file-system, and automatic orchestration are a fine theoretical nirvana, but they concern only 0.001% of the people and companies out there, the ones who dynamically spawn thousands of containers across multiple data-centers. That is the use case of Docker, and it is a very narrow target. LXC/D has a true card to play by being able to scale from a "thin VM" that can be spawned by code to a "by the book" immutable container, and to offer companies a path from one point to the other.
But it starts by being able to spawn an LXC container and have it grab an IP from the host DHCP [edit for clarity: the same DHCP as the host, or the available DHCP] and be useful right away.
Then one can add configuration management (Salt/Puppet etc.), dynamic configuration (Consul, ZooKeeper), and then evaluate the cost of abstracting the filesystem and database and making those containers immutable and idempotent. Docker is the religion of the immutable container; LXC/D can offer something much more flexible and address a much broader market.
I really think that starts with being able to write:
lxc remote add images images.linuxcontainers.org
lxc launch images:centos/7/amd64 centos -p AnyDHCPAvaliableToHostNetworkProfile
And get a container that is reachable from the network. Simple, useful, and immediately rewarding.
That's quite some explanation. Thanks :-)
So, the argument goes that in order to do the "routing config" step, one needs to know the container's IP in the first place. Quite true. Manually doable, but automatic would be better.
Which brings me to my next question: the to-be-configured DHCP server does not necessarily run on the host, but possibly on another network-centric host. How should LXD authenticate there to add routes?
Yes, and I would make it even more precise by saying that only the container knows its purpose, and thus the connectivity and ports it needs to expose. So however you see it, providing it with the resources (port mapping, IP) is something you need to query it to achieve, and that might be problematic if it is not yet running. Better to make that a part of its definition and have the environment set up as automatically as possible from there; my understanding is that this is what profiles are for, making the junction between launch time and run time.
As for the last part of your answer, I suspect we have a misunderstanding (unless you are talking about the last, fourth, case of my long answer, which is more of an open thought than the first two).
My "only" wish is either/both:
1. To have a way to make port mapping and routing a part of the container (either its definition, a launch-time value, or a profile definition; I suspect a launch/definition-time value would be best), and have run/launch take care of firewall rules and bridge configuration.
2. To have a way to relay the dhcpoffer to the container and let the dhclient in it take it from there.
The various answers in this thread (from @IshwarKanse, through routing, and @stgraber, through macvlan) are supposed to give just that, except I (and the OP, it seems) were not able to get them working manually, and I wish they could be set up automatically by either a profile or a launch configuration.
Unless you are talking about DHCP security through option 82 ?
PS : I edited my previous post to clear things up
I think I got it now. :)
Well, that's something for @stgraber to decide. :)
@Annakan did you try using macvlan with the parent set to the host interface?
Oh, I see you mentioned it earlier. macvlan should do basically what you want, the one catch though is that your container can't talk to the host then, so if your host is the dhcp server, that'd be a problem.
Thanks for the answers. I did exactly this:
> Stop OpenResty
> lxc profile edit mvlan
type=nic
nictype=macvlan
parent=eth0
> lxc profile apply OpenResty mvlan
> Start OpenResty
lxc profile edit brwan gives exactly this:
###
### Note that the name is shown but cannot be changed
name: brwan
config: {}
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic
Container startup fails
lxc info --show-log OpenResty
Yields:
lxc 20160226161814.349 INFO lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 327681
lxc 20160226161814.349 INFO lxc_seccomp - seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main one
lxc 20160226161814.349 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook /var/lib/lxd 4 start' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.349 INFO lxc_start - start.c:lxc_check_inherited:247 - closed inherited fd 3
lxc 20160226161814.349 INFO lxc_start - start.c:lxc_check_inherited:247 - closed inherited fd 8
lxc 20160226161814.360 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:178 - using monitor sock name lxc/d78a9d7e97b4b375//var/lib/lxd/containers
lxc 20160226161814.375 DEBUG lxc_start - start.c:setup_signal_fd:285 - sigchild handler set
lxc 20160226161814.375 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161814.375 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161814.375 DEBUG lxc_console - console.c:lxc_console_peer_default:524 - no console peer
lxc 20160226161814.375 INFO lxc_start - start.c:lxc_init:484 - 'OpenResty' is initialized
lxc 20160226161814.376 DEBUG lxc_start - start.c:lxc_start:1247 - Not dropping cap_sys_boot or watching utmp
lxc 20160226161814.377 INFO lxc_start - start.c:resolve_clone_flags:944 - Cloning a new user namespace
lxc 20160226161814.399 ERROR lxc_conf - conf.c:instantiate_veth:2590 - failed to attach 'veth2FKB5C' to the bridge 'brwan': Operation not permitted
lxc 20160226161814.414 ERROR lxc_conf - conf.c:lxc_create_network:2867 - failed to create netdev
lxc 20160226161814.414 ERROR lxc_start - start.c:lxc_spawn:1011 - failed to create the network
lxc 20160226161814.414 ERROR lxc_start - start.c:lxc_start:1274 - failed to spawn 'OpenResty'
lxc 20160226161814.414 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/share/lxcfs/lxc.reboot.hook' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.918 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook /var/lib/lxd 4 stop' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.993 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive response
lxc 20160226161814.993 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive response
lxc 20160226161814.994 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161814.994 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161815.001 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161815.001 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161815.003 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.875 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161858.875 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161858.883 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.887 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161858.887 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161858.889 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.897 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.688 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161922.688 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161922.690 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.694 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161922.694 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161922.696 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.697 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.011 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161932.011 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161932.013 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.016 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161932.016 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161932.025 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.027 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.738 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226165637.738 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226165637.747 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.751 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226165637.751 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226165637.759 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.761 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
It seems that LXD tries to link the macvlan to a bridge named after the profile name (brwan) and not to the host interface (eth0 in my case), unless the error message is misleading. Or is it that I need to create a separate bridge named after the profile to receive the virtual interfaces? (But then I would need to remove the eth0 host interface from the lxcbr0 bridge, right? And thus lose the other containers' connectivity?)
Can you paste "lxc config show --expanded OpenResty"?
I assumed you meant the "show" subcommand
lxc config show --expanded OpenResty
name: OpenResty
profiles:
- brwan
config:
  volatile.base_image: 4dfde108d4e03643816ce2b649799dd3642565ca81a147c9153ca34c151b42ea
  volatile.eth0.hwaddr: 00:16:3e:8a:3a:e1
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":310000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":310000,"Nsid":0,"Maprange":65536}]'
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: brwan
    type: nic
  root:
    path: /
    type: disk
ephemeral: false
hmm... parent: brwan?
ok, what about "lxc config show OpenResty" (no expanded)?
I might have got it: I used the same container while experimenting with @IshwarKanse's solution, and at that point I tried to set up a secondary bridge (hence the brwan name of the profile).
I suspect some previous profile configuration is lingering on, or some dependency I don't understand yet. I shall try with a completely fresh container; I should not have reused my previous one, right?
Yeah, my guess is that you have local network configuration set on that container with the same device name of "eth0". Container specific config takes precedence over whatever came from profiles, so your change to the profile was effectively ignored.
@stgraber How come that "macvlan should do basically what you want, the one catch though is that your container can't talk to the host then"?
I tried it with a new container
lxc launch images:centos/7/amd64 centosmacvlan
lxc stop centosmacvlan
lxc profile apply macvlan centosmacvlan
lxc start centosmacvlan
with lxc profile edit macvlan being
### Note that the name is shown but cannot be changed
name: macvlan
config: {}
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic
gives
[root@centosmacvlan ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:16:3e:14:05:90 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.14/24 brd 192.168.0.255 scope global dynamic eth0
valid_lft 86372sec preferred_lft 86372sec
inet6 fe80::216:3eff:fe14:590/64 scope link
valid_lft forever preferred_lft forever
so it works. Sorry for the trouble.
I don't understand precisely your sentence :
you have local network configuration set on that container with the same device name of "eth0"
When you say "local network", do you mean in the container? The fact that both the inside and outside NICs are named eth0?
@srkunze it's an odd property of macvlan: macvlan interfaces can talk to the outside and among themselves, but cannot talk to the parent device, so the host in this case.
@Annakan What I mean is that if you look at "lxc config show CONTAINER_NAME", you'll most likely find a "eth0" device listed there. LXD when building a container configuration applies all the profiles first (in the order they were specified) and then applies the local container configuration, so if you have profiles with "eth0" as a device name and your container does too in its local config, then the container's entry will override whatever came from the profiles.
In other words:
lxc profile create blah
lxc profile device add blah eth0 nic nictype=bridged parent=lxcbr0
lxc init ubuntu my-container -p blah
At that point, the container has an eth0 device coming from its profile, which is a bridged interface.
lxc config device add my-container eth0 nic nictype=macvlan parent=eth0
After that, the container ignores the eth0 device coming from its profile and instead uses a macvlan interface.
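If you end up in that situation unintentionally, one hedged way out is to drop the container-local device so the profile entry applies again (container and device names follow the example above):
lxc config device remove my-container eth0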
@stgraber Great. -.- So, what's the solution here? A custom wrapper deciding whether to talk to the host directly or use macvlan? Or is there a standard solution to this?
FWIW my simplistic SOHO non-enterprise approach to exposing containers to my LAN and/or public IPs is to disable the default lxcbr0 setup (10.0.3.0) and create my own lxcbr0 bridge on the host with my LAN IP (192.168.0.0). I then install the standard dnsmasq package on my Kubuntu host, disable DHCP on my LAN router, and use a single /etc/dnsmasq.conf config file to manage all DHCP and DNS queries for all my containers and other hosts on my LAN.
I've been using LXD in production for 5 months now. I work as a System and Network Administrator at the University of Mumbai. Many services in our University, like OpenNMS for network monitoring, OTRS for helpdesk, Owncloud for file sharing, the websites of many departments, Zabbix for server monitoring, etc., run in LXD containers. As mentioned in my previous post, I've created a bridge br0 on all the container hosts and created a profile called bridged which tells LXD to use the br0 bridge when launching containers. Those of you who have worked with KVM are familiar with bridges. When launching new containers, they get an IP address from the DHCP in our network, but I usually change it to a static address. Migration is also easy: just move the container to another host and it works, no need to change anything on the host.
One of my friends recently joined a company called RKSV as a Senior Solutions Architect. It is a financial company; they provide an online trading platform. One of the applications they were working on is Upstox, a mobile app to buy and sell stocks. Being a financial app it pulls in lots of stock data in real time, so they needed a high-performance platform to deploy their application backend on. My friend started testing the application on various virtualization platforms. He tried XenServer, which was not working for the application. Some of the developers were fans of Docker and insisted on running their application with it. Running an application on your laptop using Docker is easy, but running it in production is a different thing: you need to think about logging, monitoring, high availability, multi-host networking and all that stuff. Traditional solutions don't really work; you need services that are developed to work with Docker. In the last company I worked for, I was deploying various solutions with Docker. It was fun working with small multi-container applications, but for large complicated apps we usually preferred KVM. In my friend's case their application was experiencing very high network latency with Docker. They use multicast in their network, but they couldn't get the containers to work with multicast. I suggested that my friend try LXD for their application. We used Ubuntu 14.04 for the base machines and Ubuntu 14.04 for the container guests, deployed the application with all its dependency services (nodejs, mongodb, redis etc.), and used Netflix Vector and other tools to test the performance. The performance was really great; we were basically running the application on bare metal. We used the bridged method. Multicast was working inside the containers. Traditional monitoring and logging solutions were working. They later switched their QA to LXD, and after a month of testing they were using LXD in production. They had a few problems with their Mongo instances, but switching them to macvlan solved the problem. They are now building images with Ansible, using Jenkins + Ansible for automated deployment and GitLab for source code management. Everything is running with LXD.
Whenever I meet my friend I always ask how LXD is working for them. He says they have never had a problem. Whenever a new LXD version is released, we test it for a few days and then do the upgrade on our machines.
Thanks @Annakan for taking it from where I left it. I especially like this part:
(2) [a "thin VM" available on the host network provided a DHCP or IP range is available] would completely change the "first hour experience" of LXD.
So it looks like people in this thread have found the right recipe (even two) that works as a solution. Now the only thing left is to write a short set of instructions targeted at folks like web component designers who have never had to deal with the network stack at any level deeper than HTTP connections.
I am willing to draft such a short how-to. I just would love to completely document this subject with
I would then write a "how to" document to cover all those options for a person starting with LXC/D, whether coming from Docker or not. I'll come back later to investigate @IshwarKanse's solution and try to get it working too.
The only way I'm aware of to overcome the macvlan issue is by having the host itself use a macvlan interface for its IP, leaving the parent device (e.g. eth0) without any IP.
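For completeness, a rough sketch of that workaround with ifupdown on the host; the interface name macvlan0 is arbitrary and this is only an illustration of the idea, not a tested recipe:
# /etc/network/interfaces on the host (sketch): eth0 keeps no address,
# the host's own IP moves onto a macvlan interface on top of it
auto eth0
iface eth0 inet manual

auto macvlan0
iface macvlan0 inet dhcp
    pre-up ip link add link eth0 name macvlan0 type macvlan mode bridge
    post-down ip link del macvlan0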
Sounds suboptimal. At least regarding the convenience all other participants of this thread strive to accomplish.
There's only so much we can do with the kernel interfaces being offered to us, I'm sure there would be a lot of interested people if you were to develop a new mode for the macvlan kernel driver which acts like the bridge mode but without its one drawback.
@stgraber where is the bug tracker for this macvlan behaviour? I wonder if somebody has already reported the issue.
The upstream kernel doesn't really have bug trackers, when someone has a fix to contribute, the patch is just sent to a mailing-list for review.
Well, they have https://bugzilla.kernel.org/ but it's not very actively looked at.
Ok, I am now trying to understand the "simple bridged" solution to get containers on the general network, the way @IshwarKanse explained at the top of this thread.
When I look at the code/recipe I see him build a new bridge with the eth0 host interface.
Unless I am mistaken, that means the eth0 interface will leave the base lxcbr0 bridge, am I right? (Or can a physical interface be part of two different bridges? Because when I tried this I lost connectivity from the eth0 interface to the outside world. But that might also be a routing problem.)
He also assigns a non-routable IP address to the bridge, but I suspect that the IP range should be the same as that of the external DHCP host.
Second, he sets up a new profile that is technically identical to the default profile but points to the new bridge.
So the only difference I see is that this new bridge is not managed by the dnsmasq daemon.
Is that a moderately good analysis or am I way off base?
You could try something like this, assuming a recent ubuntu host, main router is 192.168.0.1 and this host will be 192.168.0.2, containers will be assigned 192.168.0.3 to 192.168.0.99...
- set USE_LXC_BRIDGE="false" in /etc/default/lxc-net
- make sure /etc/lxc/lxc-usernet has something like "YOUR_USERNAME veth lxcbr0 10"
- add this to /etc/network/interfaces (change IPs and devices to match your needs)...
auto eth0
iface eth0 inet manual
auto lxcbr0
iface lxcbr0 inet static
address 192.168.0.2
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameserver 192.168.0.2
dns-search example.lan
bridge_ports eth0
bridge_stp off
- install the regular dnsmasq package
- create a /etc/dnsmasq.conf file something like this (change example.lan to your domain)...
domain-needed
bogus-priv
no-resolv
no-poll
expand-hosts
log-queries
log-dhcp
cache-size=10000
no-negcache
local-ttl=60
log-async=10
dns-loop-detect
except-interface=eth0
listen-address=192.168.0.2
server=8.8.8.8
server=8.8.4.4
domain=example.lan
local=/example.lan/
host-record=gw.example.lan,192.168.0.1
host-record=host.example.lan,192.168.0.2
host-record=example.lan,192.168.0.3
host-record=c3.example.lan,192.168.0.3
host-record=c4.example.lan,192.168.0.4
host-record=c5.example.lan,192.168.0.5
ptr-record=example.lan,192.168.0.3
ptr-record=host.example.lan,192.168.0.2
mx-host=example.lan,example.lan,10
txt-record=example.lan,"v=spf1 mx -all"
cname=www.example.lan,example.lan
# DHCP
dhcp-range=192.168.0.3,192.168.0.99,255.255.255.0
dhcp-option=option:domain-search,example.lan
dhcp-option=3,192.168.0.1
dhcp-host=c3,192.168.0.3
dhcp-host=c4,192.168.0.4
dhcp-host=c5,192.168.0.5
You may have to disable the DHCP server on your main router so that there is only a single DHCP server on this 192.168.0.* network segment but that is okay because any other (perhaps wifi) hosts on this network will now also be allocated IPs from the above /etc/dnsmasq.conf file.
Now try lxc launch YOUR_IMAGE c3 (or c4, c5, etc.) and the container will get an IP from your host's dnsmasq DHCP server according to whatever is mapped at the end of the above /etc/dnsmasq.conf.
Very simple, no multiple bridges, no iptables, no vlans, no fancy routing, no special container profiles.
The trickiest part is making sure the host (and any container or other DHCP client, if you want to take advantage of local DNS caching and customised example.lan resolution) only has "nameserver 192.168.0.2" in /etc/resolv.conf. In my case I set my first container (192.168.0.3) as the DMZ on my router, so that example.lan is a real domain accessible from the outside world on the router's external IP as well as on 192.168.0.3 internally.
Update: just to clarify the point @Annakan made about relying on the original DHCP server. My strategy here works fine with a remote DHCP server: comment out the DHCP section at the bottom of /etc/dnsmasq.conf, which effectively removes DHCP functionality from dnsmasq, and let IPs be allocated by the remote/router DHCP server. I happen to use this particular method because I find it convenient to manage ALL DHCP and DNS requests via this single host, and (because this dnsmasq server's DHCP/DNS ports are visible to the entire 192.168.0.0/24 network) it also works for all other devices I care to use, including wifi devices that connect directly to my router with the disabled DHCP server. Those devices get an IP from my host's dnsmasq server because it is now the only one on the network, which therefore gives me a simpler way to manage ALL DHCP/DNS for any machine on this LAN and any LXD container running on multiple LXD hosts, as long as everything is on the same network segment.
Thanks a lot for your detailed contribution. Your solution is only doable in an environment where you can basically "take over" the network and make your LXD/C machine the new DHCP server (it might be more useful to reuse the existing DHCP server already on the network, especially if you want to have more than one LXC/D container host, but then you would answer the need expressed in this thread exactly).
The goal in this thread is to be able to reuse an already existing DHCP server on the network and make some containers full "thin machines" on the host network, without precluding the use of the host for isolated containers, and that means without destroying the base lxcbr0 bridge.
It is basically being able to do with LXC/D what VMware/VBox do natively by letting you choose a network configuration of bridged, NAT, or host-only per VM.
The whole configuration should not rely on you having control of anything beyond the host, because in most "enterprise" situations one doesn't... And besides, anything that relies on some LXC/D-specific configuration beyond the host (on another machine than the host) has a poor chance of scaling well (what you did on another machine or network component to accommodate your LXC/D host will either not work or have to be duplicated for another LXC/D host).
Use the macvlan option.
Benefit: it makes your VM stand on the host network like any other non-container machine on that network.
Drawback: your host can't talk to the container, which might make automation difficult.
Ideally we would like to be able to set up two bridges: one for "normal" containers that get a non-routable IP from the LXC/D-managed dnsmasq, the other a "transparent" bridge that would connect the containers designated as "thin VMs" directly to the host's external network, where they would get IPs from a DHCP server on that network.
But one NIC can't belong to two bridges, so I see no way of doing this. Real VLANs?
Can we configure the network inside some containers to get an IP on the host's external network? I don't see how for now, since the lxcbr0 bridge sits on the 10.0.3.x network.
Lastly, we need to document the template firewall rules to expose port xxx in container "toto" as port yyy on the host, for the use case of isolated containers "à la" Docker.
I'll be digging into the ipchains documentation again to achieve that.
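For the record, a minimal sketch of such a forwarding rule with iptables; the addresses and ports are placeholders, not an official template:
# forward host port 8080 (arriving on eth0) to port 80 of a container at 10.0.3.100
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.3.100:80
iptables -A FORWARD -d 10.0.3.100 -p tcp --dport 80 -j ACCEPT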
Ideally I would love LXD to take care of that and implement the same "port" and "link" concepts as Docker.
Ports (and links) are useful concepts; volumes are much more dubious concepts the way Docker defines them (dummy ref-counted containers).
I use shorewall and pound to forward incoming http requests to the appropriate container. With shorewall, you can configure iptables to forward port 80 to a specific container (on port 80 or any other port).
In addition, I have a reverse proxy "pound" container (which could also run directly on the host itself, but why not put it in a container). Using shorewall, I've configured iptables to map incoming port 80 to pound:8080. Pound looks at the Host: header and forwards the request to port 80 of the appropriate container. The container that has the web server listens on port 80, as usual. It should be configured to log the original IP address of the request instead of the IP address of the pound container. I start with the two-interface configuration for shorewall. I've used it with both the lxcbr0 and lxdbr0 interfaces. I configure my containers with static IP addresses.
I also use shorewall to setup a separate external ssh port for each container. For example, I ssh to port 7011 of the host which is forwarded to port 22 of the container with internal ip 10.x.x.11. Here's the configuration line in /etc/shorewall/rules for this:
DNAT net lxc:10.17.92.11:22 tcp 7011
Hello,
Just started to use LXD and so far it's awesome. I was wondering if you could assign a second interface to all the containers. This interface would act as an internal LAN, local to the host. Then you could combine this with the macvlan solution and you'd be able to:
On Wed, Aug 31, 2016 at 12:18:02PM -0700, Zero wrote:
Hello,
Just started to use LXD and so far it's awesome. I was wondering if you could assign a second interface to all the containers. This interface would act as an internal LAN, local to the host. Then you could combine this with the macvlan solution and you'd be able to:
- Reach your containers on the same LAN where the host belong
- Reach your containers from inside the host with the internal LAN using the secondary interface on the containers
Sure, you can create a private bridge which doesn't have any outgoing nics attached, then add to the default lxd profile a second nic which is on that bridge.
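A hedged sketch of that, reusing the device-add syntax from earlier in the thread; the bridge name lxcbr1, the device name eth1 and the address are arbitrary, and a bridge created this way is not persistent across reboots:
# create a host-only bridge with no physical ports attached
sudo brctl addbr lxcbr1
sudo ip addr add 10.10.10.1/24 dev lxcbr1    # address is illustrative
sudo ip link set lxcbr1 up
# add it as a second NIC to the default profile
lxc profile device add default eth1 nic nictype=bridged parent=lxcbr1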
Suppose you configured your LXD server for remote access and can now manage containers on the remote machine. How do you actually run a web server in your container and access it from the network?
First, let's say that your container is already able to access the network through the lxcbr0 interface created automatically on the host by LXC. But this interface is set up for NAT (which only handles outgoing, one-way connections), so to be able to listen for incoming connections you need to create another interface like lxcbr0 (called a bridge) and link it to the network card (eth0) where you want to listen for incoming traffic. So the final setup should be:
The target system is Ubuntu 15.10