Closed: ramukima closed this issue 6 years ago
If you use the SR-IOV CNI plugin, the NIC used by the VM will be connected to the network without any interference from the host OS, so there should be no problem reaching the broadcast address.
Also, running a DHCP server on the host bound to all of its interfaces seems incorrect: in that configuration the DHCP server will respond to queries from every network reachable from the host (which may have DHCP servers of their own), instead of serving only the private network you have in mind.
By the way, did you already try this image with Virtlet? I can test it and provide more info if you share a URL to the image.
@jellonek Thanks for your reply. I mostly agree with your points about running a DHCP server bound to all interfaces. However, if SR-IOV is the only option for achieving what my VNF is trying to do on the LAN side, that will limit the possible deployments for my VNF.
The network function I am trying to run is a monolith with multiple functions embedded: it creates a VPN tunnel through the WAN side connected to it and leases DHCP addresses on the LAN side connected to it.
That looks like a perfect case for multiple interfaces (take a look at https://github.com/Mirantis/virtlet/blob/master/docs/multiple-interfaces.md): one interface perhaps configured for intra-cluster communication (access to k8s services), one for the WAN connection (probably https://github.com/containernetworking/plugins/tree/master/plugins/main/host-device would be the best match, but it has not been tested with Virtlet yet), and another one for the LAN (internal workload communication).
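To make that concrete, here is a minimal sketch of such a pod, assuming CNI-Genie's `cni` annotation is used to select the networks and that networks named "calico" and "lan-bridge" are configured on the node (the network names and image URL are placeholders, not something tested with Virtlet):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-vnf
  annotations:
    # CNI-Genie: attach the pod to two networks; names must match CNI configs on the node
    cni: "calico,lan-bridge"
    # Virtlet: run this pod as a VM
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  containers:
    - name: vnf
      # placeholder image URL, referenced through Virtlet's image name scheme
      image: virtlet.cloud/example.com/my-vnf.qcow2
```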
What can be done for the LAN (where dhcpd should be configured) depends on which L2 solution you plan to use for the overlay network, but basically any standard CNI plugin that configures a veth interface in the container namespace should work fine with Virtlet. SR-IOV is a special (most optimal) case where we do not need an additional bridge in the stack; with "normal" CNI plugins we pass the configuration data (addressing, routes) from the plugin to the VM using cloud-init (with a fallback to DHCP), remove that configuration from the veth interface, and join the veth with the VM interface using a bridge. So it is up to you how you (re)configure networking inside the VM: all packets leaving the VM will be forwarded through the device configured by the CNI plugin, regardless of its actual IP configuration.
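As a rough illustration of that last point, reconfiguring the LAN interface inside the guest is just ordinary Linux networking (a sketch assuming a Linux guest where the LAN side shows up as eth1; the interface name and addresses are placeholders):

```sh
# Inside the VM: drop whatever address cloud-init / DHCP put on the LAN interface
ip addr flush dev eth1
# Assign the address the embedded DHCP server will serve its range from
ip addr add 192.168.100.1/24 dev eth1
ip link set eth1 up
# Packets still leave the VM through the veth/bridge set up by the CNI plugin,
# regardless of this IP configuration.
```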
So it all depends on:
Absolutely agree that it "depends on which CNI is in use". Ideally, I was looking to hook my VNF to two networks: one a pure L2 bridge (LAN side) and one using an existing network overlay, e.g. Calico (WAN side).
When I connect my VM to these two networks, with one of them being pure L2 with NO IPAM on it, Genie complains about not being able to parse an IPAM reply (maybe because Genie does not support a pure L2 bridge CNI plugin). As a result, my VM would not come up under Virtlet.
Second, if I do get the VM to come up (just the bridge plugin with host-local/static IPAM so that Genie is happy), one of the services started inside my VM is a DHCP server that should offer leases to anyone connecting to the bridge, i.e. any other application (container/VM) attached to that bridge should get an IP from the DHCP server running in my VM.
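For reference, the LAN attachment in this second case is essentially the standard bridge plugin with host-local IPAM, something along these lines (a sketch; the bridge name and subnet are placeholders):

```json
{
  "cniVersion": "0.3.1",
  "name": "lan-bridge",
  "type": "bridge",
  "bridge": "br-lan",
  "isGateway": false,
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.100.0/24"
  }
}
```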
CNI requires me to run the DHCP daemon plugin, which in a way 'proxies' for a DHCP client. This proxy was not able to talk to the DHCP server running inside my VM, because the broadcast message from the CNI DHCP daemon to the DHCP server was undergoing iptables processing. Disabling net.bridge.bridge-nf-call-iptables on the host works, but is risky.
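For completeness, the risky host-side workaround mentioned above is just this sysctl (it is global, so any other bridged workload that relies on iptables filtering on the host is affected):

```sh
# Stop bridged frames from being passed through the host's iptables chains
sysctl -w net.bridge.bridge-nf-call-iptables=0
```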
Anyway, for now I do have a workaround. I was just wondering, for VNFs that absolutely need access to the host network stack, is there a way to allow hostNetwork mode? Thanks for your prompt response.
The CNI dhcp plugin is only for the client side (unless you mean some other plugin unknown to me), and it is additionally quite limited (e.g. you cannot pass the DHCP server any info about who is making the query by setting the hostname option).
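For clarity, the plugin in question is the standard dhcp IPAM type, which only acquires a lease for the container-side interface and needs the dhcp plugin binary running in daemon mode on the host. A minimal sketch of how it is referenced from a bridge network config (names are placeholders):

```json
{
  "cniVersion": "0.3.1",
  "name": "lan-dhcp",
  "type": "bridge",
  "bridge": "br-lan",
  "ipam": {
    "type": "dhcp"
  }
}
```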
The l2-bridge plugin is WIP at the moment (I want to make sure we are on the same page: you were trying https://github.com/containernetworking/plugins/pull/187, right?), and you are correct that libcni, which is used almost everywhere CNI plugins are consumed, has an issue with the lack of IP addresses in the CNI Result. There is an ongoing PR for that, https://github.com/containernetworking/cni/pull/578, which could possibly be merged during this week. That will require a bump of the CNI lib version in Genie and the same in Virtlet, so it may take a bit of additional time.
As for hostNetwork mode: there is no easy way to share the same network stack state across two kernels (the host's and the VM's), especially as there is no requirement for the guest to run a Linux-based operating system. The supposed workaround would be to have on both sides a set of daemons that keep the list of opened/listened ports in sync, each acquiring a particular port as soon as it is bound on the peer. That would be very tricky and race-prone, and therefore probably not production ready. That is why no virtualization-based solution provides such an option (or at least I am not aware of one).
As you already have a workaround, I'm closing this issue.
> When I connect my VM to these two networks, with one of them being pure L2 with NO IPAM on it, Genie complains about not being able to parse an IPAM reply (maybe because Genie does not support a pure L2 bridge CNI plugin)
If you can provide an example we may be able to adjust Genie for this use case.
At the moment libcni needs to have the fix for that merged, as AFAIR Genie also uses it for result parsing. The required change on the lib side is extremely small (https://github.com/containernetworking/cni/pull/578), but then probably something additional will also need to be checked on the Genie side.
The same situation exists with the loopback plugin, so you can test/fix Genie against it right now, without even looking at l2-bridge, which is not yet merged.
As per the documentation at https://github.com/Mirantis/virtlet/blob/master/docs/networking.md, "Virtlet doesn't support hostNetwork pod setting because it cannot be implemented for VM in a meaningful way.".
However, I have a VNF image that offers DHCP services. How do I run such a VNF under Virtlet? The VNF would need to run in hostNetwork mode in that case; otherwise the DHCP requests never reach the broadcast domain of the network that my VNF is serving DHCP on.
Reference: https://hunted.codes/articles/docker-containers-for-dhcpd-service-isolation/
What is recommended for such use cases? I consider this a basic use case, where a router VNF provides DHCP functionality on a private network connected to it.