Open · dabernie opened this issue 2 years ago
Agree, supporting multiple networks per Pod is still relevant. A discussion was started about this topic before, but I don't think we created a best practice or use case out of it.
Thanks @electrocucaracha. I remember this one but forgot it never ended with any recommendations, call to arms, or actions. Let's revive it.
OK, I re-read the above-mentioned thread, and I can extract four main points from it:
a) A lack of an abstracted API structure to clearly define network configurations and attachments beyond standard CNIs.
b) There was a notion of post-deployment interfaces in a Pod, but if we go truly cloud-native with immutability, what is the use case for creating post-deployment, on-the-fly network attachments? Normally, at creation time an app owner already knows where its workload needs to attach.
c) Exotic protocols, acceleration, or specific device drivers would also be a trigger.
d) Because for ages (PNFs, VNFs, and now CNFs) we have been used to this model.
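For context on point a), the de facto way to declare a secondary attachment today is Multus's NetworkAttachmentDefinition CRD plus a pod annotation. A minimal sketch (the resource names, master interface, and IPAM subnet are illustrative assumptions):

```yaml
# A NetworkAttachmentDefinition wrapping a plain macvlan CNI config.
# Note the network config is an opaque JSON blob, not an abstracted API.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net          # illustrative name
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "10.10.0.0/24" }
    }
---
# The pod requests the extra interface at creation time via an annotation;
# the attachment list is fixed for the pod's lifetime.
apiVersion: v1
kind: Pod
metadata:
  name: cnf-example          # illustrative name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
```

This illustrates why point a) matters: the actual network configuration lives in an opaque CNI JSON string inside the CRD, so there is no typed, validated API surface for attachments beyond the standard CNI contract.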
> b) There was a notion of post deployment interfaces in a POD, but if we go truly cloud-native with immutability, what would be the use case to create post deployment, on the fly network attachments. Normally, at creation an app owner already knows where it needs to attach its workload.
Regarding this point, if I understand correctly there are two different approaches:
Yes, but what happens post-onboarding? In a normal Kubernetes Pod, when do we change the networking requirements post-deployment? Basic immutability would have you restart with the refreshed specs.
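The "restart with the refreshed specs" pattern can be sketched with a Deployment: because the networks annotation lives in the pod template, editing it triggers a rolling replacement rather than an in-place change. All names here are illustrative assumptions:

```yaml
# Hypothetical Deployment: the attachment annotation sits in the pod
# template, so changing it rolls out *new* pods with the new attachment;
# existing pods are never mutated in place.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cnf-example          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cnf-example
  template:
    metadata:
      labels:
        app: cnf-example
      annotations:
        # Editing this value causes the controller to recreate the pods
        # with the refreshed spec, preserving immutability per pod.
        k8s.v1.cni.cncf.io/networks: macvlan-net
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
```

With more than one replica and a rolling update strategy, the replacement can happen without taking the whole workload down, which is the immutability-friendly alternative to attaching an interface to a running pod.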
I have created this draft to start the discussion on this topic. All comments and feedback are welcome.
On hold as SIG Network are discussing how to standardise the API for all implementations of a multi-interface network plugin.
These are the efforts made by the Kubernetes community related to this topic:
I happened to see this ping earlier and I thought I'd add a comment or two to see if it helps. I was looking at the comments on dynamically changing the interfaces.
I want you to think of 'resources' (a loaded word) as coming in two types:
One: a network address somewhere else in the network (not necessarily in k8s). I can produce a network address out of thin air at any time; any other service can hand over an address and say 'here you go'. NSM is a bit like this, in that it can produce pipes that will spit out and receive packets at any point in time. This kind of 'resource' is obviously not immutable.
Two: a CNI network interface provided by k8s. One interface (originally) or a fixed number of them (Multus, DANM) are given to a pod, and they are not changeable without recreating the pod. This is arguably not because the CNI interface itself is immutable; it's because it's listed in the pod config, and that is immutable. Pods are made to be created and destroyed, not changed on the fly in this way, because if you can change a pod, the whole question of orchestrating the parts of an app becomes 1000% more complicated.
On use cases: are there use cases that require giving a CNF a new interface it didn't originally have? Do we have them noted down? I don't recall this being a major concern with VNFs, so am I missing something, or is this a new ask? If it's a new ask, is it high or low priority for CNFs and their users?
On technology: given that CNF != pod, are there ways, using container start/stop, that a CNF could consume a new interface it didn't originally have, even if a given existing pod cannot take on that network interface (for instance, by restarting a pod within the broader application in such a way that service is not disrupted)? Or does that approach have so many shortcomings that it doesn't deliver on the use case?
The answers to those questions determine whether the packet-based interfaces required by CNFs are part of a pod's immutable config or are dynamically requested and consumed by the application. The only thing I'll say to that is that it is possible to make these interfaces on demand and pass them around, even if it isn't what we currently do, and you shouldn't rule out options that require it.
I believe it is long overdue to have a clearly documented understanding of the requirements for multiple interfaces within a cloud-native network function, independent of the fact that technical implementations already exist to support them.
Is multi-interface required for:
a) traffic segmentation
b) isolation, security
c) performance
d) hardware dependencies
e) because this is how we did PNFs and VNFs
f) all of the above
Understanding the real justification behind this will help us better understand how CNFs and infrastructure might evolve, and where we can potentially simplify (i.e., avoid too much toil).