Closed: ryanmerolle closed this issue 1 year ago.
Picking a concise, descriptive, and vendor-neutral name for this may be tricky. I think I like "device context" best because it clearly references a device, though this would probably get confused with config contexts.
At any rate, this would probably entail adding a new model (we'll use DeviceContext for the purpose of discussion) which can be assigned to a device. Interfaces (and other components?) belonging to the device can optionally be assigned to a DeviceContext to indicate their membership. Deleting the DeviceContext instance would also remove these relationships, but the interfaces would remain on the parent device as before.
I'm not clear what other data we would need to track for the DeviceContext itself: Presumably a name, but what else goes into its configuration?
A couple of thoughts on data to track:
I am sure there are more.
These types of "device contexts" are normally hosted on a cluster based on either physical hardware or VMs. So keeping track of the cluster they belong to would be nice.
Which product hosts the device context on a cluster? Granted, my experience is with Nexus, but the relationship is a one-to-many chassis-to-VDC.
Most firewall vendors, such as those listed above: Cisco, Palo Alto, Check Point, Juniper, etc.
With Cisco, you can run the ASA in context mode. Contexts run on two physical Cisco ASA firewalls in a cluster, and each context has its own interfaces, routing table, etc.
Check Point uses a product called VSX. VSX runs on appliances or open servers (HPE, IBM, etc.), and these clusters can have up to 12 physical nodes.
On top of these you can run virtual routers, virtual systems, and virtual switches. All of these virtual devices have independent routing tables and interfaces.
Palo Alto and Juniper do it in a similar way.
So, for ASA, that is not how contexts work at all. I think you are confusing it with how HA works.
With an ASA, if you are using clustering with contexts, each ASA will have its own context, not one context shared between ASAs. For Active/Active or Active/Standby HA it is the same: if you have a "users" context, you will have a "users" context on both ASAs.
I am sure many of the other vendors are the same. You want a way to track the HA relationship, and that is a separate FR (although virtual chassis works well for that in most instances).
I think there are two distinct layers of abstraction here:
This FR addresses the latter. The former is probably best conveyed using NetBox's virtual chassis model.
> So, for ASA, that is not how contexts work at all. I think you are confusing it with how HA works.
>
> With an ASA, if you are using clustering with contexts, each ASA will have its own context, not one context shared between ASAs. For Active/Active or Active/Standby HA it is the same.
>
> I am sure many of the other vendors are the same. You want a way to track the HA relationship, and that is a separate FR (although virtual chassis works well for that in most instances).
Sure, I understand that we want to have multiple virtual things running on the same device.
On Cisco ASA, yes, you can enable context mode on a single device or in a cluster, and yes, for Cisco it's HA. "You can partition a single ASA into multiple virtual devices, known as security contexts. Each context acts as an independent device, with its own security policy, interfaces, and administrators. Multiple contexts are similar to having multiple standalone devices. For unsupported features in multiple context mode, see Guidelines for Multiple Context Mode." https://www.cisco.com/c/en/us/td/docs/security/asa/asa96/configuration/general/asa-96-general-config/ha-contexts.html
Check Point uses VSX with VSLS, meaning one context is active on one node within the cluster at a time.
"Each Virtual System works as a Security Gateway, typically protecting a specified network. When packets arrive at the VSX Gateway, it sends traffic to the Virtual System protecting the destination network. The Virtual System inspects all traffic and allows or rejects it according to rules defined in the security policy.
In order to better understand how virtual networks work, it is important to compare physical network environments with their virtual (VSX) counterparts. While physical networks consist of many hardware components, VSX virtual networks reside on a single configurable VSX Gateway or cluster that defines and protects multiple independent networks, together with their virtual components."
Regarding virtual chassis: if I am not mistaken, this is for things like Cisco VSS.
snowie-swe,
With ASA, when you have contexts enabled in a cluster or HA, you will have n contexts, one for each device. There is no need to track the "cluster", except in the case where you want to track the HA state, which, as I said, is a separate FR.
Check Point VSX is the same way: https://sc1.checkpoint.com/documents/R80.10/WebAdminGuides/EN/CP_R80.10_VSX_AdminGuide/html_frameset.htm?topic=documents/R80.10/WebAdminGuides/EN/CP_R80.10_VSX_AdminGuide/161797
If you look at the image, you will see there is a virtual "context" associated with each physical device. There is no need to track this like the virtual machine model, where you have one VM that can run on multiple separate devices. It is different: each device will hold its own context.
> snowie-swe,
>
> With ASA, when you have contexts enabled in a cluster or HA, you will have n contexts, one for each device. There is no need to track the "cluster", except in the case where you want to track the HA state, which, as I said, is a separate FR.
>
> Check Point VSX is the same way: https://sc1.checkpoint.com/documents/R80.10/WebAdminGuides/EN/CP_R80.10_VSX_AdminGuide/html_frameset.htm?topic=documents/R80.10/WebAdminGuides/EN/CP_R80.10_VSX_AdminGuide/161797
>
> If you look at the image, you will see there is a virtual "context" associated with each physical device. There is no need to track this like the virtual machine model, where you have one VM that can run on multiple separate devices. It is different: each device will hold its own context.
Actually, it's not.
Just to clarify, this FR is about wanting to have virtualization similar to VMs but for network equipment, correct?
"Emulating multiple virtual environments within a single device": all I am saying is that there are cases where the thing it is hosted on will be a cluster.
> Just to clarify, this FR is about wanting to have virtualization similar to VMs but for network equipment, correct?
Eh, kinda? In my experience (which is far from authoritative), a device context is more like a semi-isolated slice of a device to which physical interfaces are allocated. An example would be splitting a single physical router into two contexts that sit in front of and behind a firewall, effecting two entirely isolated forwarding planes.
In my mind I see this as distinct from "pure" virtual networking, where virtual routers need not be associated with any physical interfaces. Others might have a different take.
The use cases listed above are virtual firewalls, at least the Fortinet and Palo Alto ones.

- Fortinet Virtual Domains (vDOMs)
- Juniper Virtual Router Instances
- Cisco Virtual Device Context
- Palo Alto Virtual Systems

And I added: Check Point Virtual System.
In the firewall cases, the virtual devices use the physical hardware for processing power, but they are allocated virtual RAM, virtual cores, and virtual interfaces, and they can run different functions. They work completely independently from each other and from their host machine, similar to a VM. But, like a VM, they use the physical interfaces, either entirely or via a VLAN on a physical interface.
> Actually, it's not.
The image you shared proves my point exactly: there is at least one context on each physical device. It doesn't follow the "VMware model", where there is only a single virtual machine that can start on any device. With contexts on devices, there will always be at least one context per physical device; you will never have a context that runs on two different devices (however, you may have a context that is part of a cluster and can be active on any one device).
Look at it this way: you do not have to have a cluster to use a virtual context. In almost all cases, you can run a virtual context completely independently of whether you have a cluster.
If you want to track cluster members for HA purposes, that is a separate FR.
> Actually, it's not.
>
> The image you shared proves my point exactly: there is at least one context on each physical device. It doesn't follow the "VMware model", where there is only a single virtual machine that can start on any device. With contexts on devices, there will always be at least one context per physical device; you will never have a context that runs on two different devices (however, you may have a context that is part of a cluster and can be active on any one device).
>
> Look at it this way: you do not have to have a cluster to use a virtual context. In almost all cases, you can run a virtual context completely independently of whether you have a cluster.
>
> If you want to track cluster members for HA purposes, that is a separate FR.
Just to clarify, I have no need to keep track of where the context is active; I can do that with custom fields / config contexts. Personally, I just want to know that the "context" belongs to a cluster or to a single device. I don't want to document the same "context" X times just because it can run on a cluster with multiple members.
Also consider F5 vCMP. F5 can also be configured with an Administrative Partition combined with a Route Domain (VRF Lite), which is more similar to a VDC.
> Just to clarify, I have no need to keep track of where the context is active; I can do that with custom fields / config contexts. Personally, I just want to know that the "context" belongs to a cluster or to a single device. I don't want to document the same "context" X times just because it can run on a cluster with multiple members.
As you have been told, you need to open a separate FR for this, as this is an HA model which is not specific to contexts (yes, it can be applied to contexts, but many vendors also let you run HA on the bare metal).
> Which product hosts the device context on a cluster? Granted, my experience is with Nexus, but the relationship is a one-to-many chassis-to-VDC.
It was said before, but the Fortinet VDC equivalent (vDOMs) is built on a cluster if configured in a cluster, or on a single device if not.
> In my experience (which is far from authoritative), a device context is more like a semi-isolated slice of a device to which physical interfaces are allocated. An example would be splitting a single physical router into two contexts that sit in front of and behind a firewall, effecting two entirely isolated forwarding planes.
My main experience is with Fortinet; I would tweak the above to be "a semi-isolated slice of a device [virtual chassis or physical] to which physical interfaces are allocated. An example would be splitting a single physical router into two contexts that sit in front of and behind a firewall, effecting two entirely isolated forwarding planes."
It is fair to say this cluster / virtual cluster association should be its own FR, but in my mind the FR should address the common functionality across routers and firewalls alike.
A good conversation so far.
As part of this, it would be really nice if there were an option to convert existing VMs to virtual device contexts. Many people seem to have used VMs as a workaround for this. It would be nice to "move" an object instead of deleting and re-creating it.
> As part of this, it would be really nice if there were an option to convert existing VMs to virtual device contexts. Many people seem to have used VMs as a workaround for this. It would be nice to "move" an object instead of deleting and re-creating it.
That one-time migration would likely be a good fit for https://github.com/netbox-community/migration-scripts.
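A rough sketch of what such a one-time conversion could look like with pynetbox, assuming NetBox v3.4+ (which added the /api/dcim/virtual-device-contexts/ endpoint) and a hand-maintained mapping of VM names to host devices; every name here is a placeholder:

```python
# Rough one-time VM -> VDC conversion sketch using pynetbox. Assumes
# NetBox v3.4+ (/api/dcim/virtual-device-contexts/) and a hand-made
# mapping of VM names to host devices; all names are placeholders.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="...")

# Hypothetical mapping: which physical device hosts each "VM" that was
# really a device context all along.
VM_TO_DEVICE = {"fw-users": "asa-01", "fw-dmz": "asa-01"}

for vm_name, device_name in VM_TO_DEVICE.items():
    vm = nb.virtualization.virtual_machines.get(name=vm_name)
    device = nb.dcim.devices.get(name=device_name)
    if vm is None or device is None:
        continue
    vdc = nb.dcim.virtual_device_contexts.create(
        name=vm.name,
        device=device.id,
        status="active",
        tenant=vm.tenant.id if vm.tenant else None,
    )
    # Interfaces, IPs, and services still need to be re-homed (by hand
    # or with more scripting) before the old VM record is deleted.
    print(f"Created VDC {vdc.name} on {device.name}")
```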
We discussed this FR in today's maintainers' meeting, and recognized that a more detailed implementation plan is needed.
One of the open questions is whether a particular interface (or other component) should be permitted to belong to multiple contexts. IMO this should not be allowed, as this is typically not permitted in my experience working with device contexts.
Separately, this raises the question of whether we should invest more thought into the modeling of virtual switches before undertaking this implementation, as there are numerous parallels.
> One of the open questions is whether a particular interface (or other component) should be permitted to belong to multiple contexts. IMO this should not be allowed, as this is typically not permitted in my experience working with device contexts.
Cisco ASA, for management purposes, allows using the same physical interface in untagged mode across multiple device contexts. It is true that in other cases you normally use subinterfaces and associate them with a specific context in a 1:1 relation.
As a suggestion, perhaps we need to merge all capabilities into a single model and create vendor-based rules on it, in order to guarantee correct population.
Also, we could consider that the Cisco Nexus VDC feature (which Cisco is decommissioning) could be treated as a virtual switch, so we could manage it with a specific model; it is true that this feature is being abandoned, but it is also true that there are still many switches in the world that use it.
Punting this one for now as it doesn't seem that we'll be able to devise a suitably detailed model in time for the v3.3 beta.
I want to throw in a proposed model here:
VDC
- Device = FK to Devices
- VDC ID = BigInt
- Name = Char
- Interfaces = M2M w/ custom join table
- Primary IPv4 = FK to IPAddress
- Primary IPv6 = FK to IPAddress
- Tenant = FK to Tenant

VDCResources
- VDC = FK to VDC
- Type = Static modifiable choice (CPU, HDD, MEM, TCAM, etc.)
- Value = BigInt (normalized)

VDCInterfaces (join table)
- VDC = FK to VDC
- Interface = FK to Interfaces
- Shared = Boolean
Alternatively, we could denormalize the VDCResources fields onto the VDC model. We would need to determine what resources the VDC model should track.
I think this balances the features across all platforms:
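To make the proposal concrete, here is a minimal Django sketch of those three models. The app labels ("dcim", "ipam", "tenancy") follow NetBox conventions, but the code is illustrative, not actual NetBox source:

```python
# A minimal Django sketch of the proposed models. App labels follow
# NetBox conventions, but field names and behavior are illustrative.
from django.db import models


class VDC(models.Model):
    device = models.ForeignKey("dcim.Device", on_delete=models.CASCADE)
    vdc_id = models.BigIntegerField()
    name = models.CharField(max_length=100)
    # M2M through a custom join table so "shared" can live on the relation.
    interfaces = models.ManyToManyField("dcim.Interface", through="VDCInterface")
    primary_ip4 = models.ForeignKey(
        "ipam.IPAddress", null=True, blank=True,
        on_delete=models.SET_NULL, related_name="+",
    )
    primary_ip6 = models.ForeignKey(
        "ipam.IPAddress", null=True, blank=True,
        on_delete=models.SET_NULL, related_name="+",
    )
    tenant = models.ForeignKey(
        "tenancy.Tenant", null=True, blank=True, on_delete=models.PROTECT,
    )


class VDCResource(models.Model):
    """Normalized per-VDC resource allocations (CPU, HDD, MEM, TCAM, ...)."""
    RESOURCE_CHOICES = [
        ("cpu", "CPU"), ("hdd", "HDD"), ("mem", "Memory"), ("tcam", "TCAM"),
    ]
    vdc = models.ForeignKey(VDC, on_delete=models.CASCADE)
    type = models.CharField(max_length=20, choices=RESOURCE_CHOICES)
    value = models.BigIntegerField()


class VDCInterface(models.Model):
    """Join table flagging whether an interface is shared across VDCs."""
    vdc = models.ForeignKey(VDC, on_delete=models.CASCADE)
    interface = models.ForeignKey("dcim.Interface", on_delete=models.CASCADE)
    shared = models.BooleanField(default=False)
```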
> One of the open questions is whether a particular interface (or other component) should be permitted to belong to multiple contexts. IMO this should not be allowed, as this is typically not permitted in my experience working with device contexts.
In an ASA, you can assign one interface to all contexts; each context gets a different (auto-generated) MAC address. On FortiGate, you can also share one interface between VDOMs and use EMACs to give them their own MAC addresses.
> I want to throw in a proposed model here:
>
> VDC: Device = FK to Devices, VDC ID = BigInt, Name = Char, Interfaces = M2M w/ custom join table, Primary IPv4 = FK to IPAddress, Primary IPv6 = FK to IPAddress, Tenant = FK to Tenant
I think the VDC model needs a few more fields, for instance: config contexts, services, status, role, and comments.
> - Allow shared interfaces in instances where there are shared interfaces (ASA, FTD, and any other platforms that allow sharing)
I'm in agreement on the shared interfaces. If VDC Resources is a separate model, maybe there could be a flag indicating whether that VDC platform allows interface sharing, so it can be enforced correctly at the VDC level.
> I'm in agreement on the shared interfaces. If VDC Resources is a separate model, maybe there could be a flag indicating whether that VDC platform allows interface sharing, so it can be enforced correctly at the VDC level.
We could also just add this to the DeviceType or something, such as a "VDC Type" or similar field.
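A sketch of what that enforcement could look like, assuming a hypothetical vdc_allow_shared_interfaces boolean on DeviceType and the VDCInterfaces join model proposed above:

```python
# Enforcement sketch: reject a "shared" interface assignment when the
# device type does not allow it. vdc_allow_shared_interfaces is a
# hypothetical DeviceType field, not an existing NetBox attribute.
from django.core.exceptions import ValidationError


def clean_vdc_interface(vdc_interface):
    device_type = vdc_interface.interface.device.device_type
    allows_sharing = getattr(device_type, "vdc_allow_shared_interfaces", False)
    if vdc_interface.shared and not allows_sharing:
        raise ValidationError(
            f"{device_type} does not permit sharing an interface across "
            f"multiple device contexts."
        )
```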
I model my VDCs and VDOMs as separate devices in a master device with bays. In case of HA, I add them to the node where I want them to be active (like on an A/A ASA cluster, where each node can have its own active contexts). The only thing really missing in that modeling is the shared interfaces; those I model as virtual sub-interfaces. I don't really care about all the possible "settings" that can be applied to a VDC; those are stored in my config backups or in custom fields or config contexts.
The only advantage I see to the proposed feature is that all interfaces can be linked to the physical device, instead of only the bays. Maybe it is an idea to extend/rewrite the virtual chassis model into a virtual device model? It kind of does the same thing.
The VC model does something completely different: it aggregates interfaces from multiple devices onto a master device.
Isn't that what the virtual device will do too? I actually don't really like the current virtual chassis implementation. In our old CMDB, we created one virtual device that holds the IPs, credentials, etc., and then the needed physical devices for the chassis members. This is more realistic, as you never know which device the master is (you only know which it should be).
I think we could use the current FR in both directions: one physical device that groups multiple virtual devices, or one virtual device that groups multiple physical devices. Just some thoughts...
This is specifically for partitioning a single device into instances. This FR is in no way related to Virtual Chassis, which combines multiple devices (apart from virtual being in both names).
We are not going to combine two different models into one. Both this feature and virtual chassis will model distinct real world technologies.
I'm well aware of how VDC, MC-LAG, VDOM, contexts, virtual chassis, etc. work and differ; I use these technologies on a daily basis. But if you look at these technologies abstractly, you'll see the similarities between them when you relate them to a CMDB: they all consist of different entities that are grouped by one big entity. Before NetBox, we had a self-developed asset management system, and there we used the same model for all those different technologies. It worked perfectly. Sometimes you have to take a step back to see the bigger picture. It is just my view, of course. Nothing against creating separate models, but then again, why do the same thing twice?
What overlap do you see in the models?
@PieterL75 VDCs can often be applied to virtual chassis devices, but virtual chassis do not seem to me to be similar to VDCs. Please explain further.
In both situations, you have a common asset that groups the parts.
In a Virtual Chassis
In a VDC
In a VDOM
and so on.
I think in the current VC implementation, the decision to set one device as master and put all ports and management on that one is not a correct design. The master is a "virtual" concept, as it can move from one switch to another and brings along all IP addresses, VLANs, ...
> In a Virtual Chassis
> In a VDC
> In a VDOM
I still don't see the overlap here: one is aggregating ports from a number of physical devices, and one is disaggregating ports from one device into a number of virtual devices.
Using multiple models also simplifies some of the frontend/backend logic, as we don't have to account for the separate behaviours present in each of these. VDCs, VDOMs, contexts, and instances can all be lumped together because, for the most part, their functionality is the same (apart from interface sharing and the like).
Virtual Chassis would be a better fit to combine with some cluster/HA model instead.
> I think in the current VC implementation, the decision to set one device as master and put all ports and management on that one is not a correct design. The master is a "virtual" concept, as it can move from one switch to another and brings along all IP addresses,
So, again, this comes down to NetBox's design. NetBox is designed to model the desired state of the network.
For example, in all of the virtual chassis that I have deployed, we always configure the stacks to have a primary master, a standby, and members (on Catalyst, for example, we do it by setting priorities appropriately). Our desired state normally matches the actual state, because we try to ensure that the one we want to be master stays master. We do sometimes have times when these fall out of sync; however, that is fine, it will fix itself on the next reload, and the desired state never changes (we want switch x to be master).
I think it could be beneficial, when we look at vPCs, to perhaps look at adapting the virtual chassis model; however, a vPC is not like a virtual chassis.
I do think there are some tweaks to be made to the VC implementation myself; for example, I would prefer to have all the instances show under Virtual Chassis and only show the device's interfaces under the device itself. However, this is personal preference.
I have started working on this, using the planned models.
Looking at the code, I would also add the type F5 vCMP (Virtual Clustered Multiprocessing). @DanSheps
> Looking at the code, I would also add the type F5 vCMP (Virtual Clustered Multiprocessing).
We actually decided not to worry about types for now; it didn't make much sense. Take the F5, for example: no matter what "Device Type" (SKU) you have, it is always going to be vCMP. Same with Cisco Nexus and ASA: if you have a Nexus 7706, it is always going to be a Nexus VDC; if you have an ASA 5515, it is always going to be an ASA security context.
The only corner case I can see is Cisco Firepower, which can run ASA bare metal that can then be divided up into ASA security contexts, or it can run Firepower instances. That should be easy enough to extract based on the "platform", however.
Each device type/platform has an implied type and there isn't normally any deviation from that.
> We actually decided not to worry about types for now; it didn't make much sense. Take the F5, for example: no matter what "Device Type" (SKU) you have, it is always going to be vCMP.
In fact, no. Depending on the box, it may or may not support vCMP.
But I get your point from a generic standpoint. I will provide deeper feedback once I have played with this modeling.
FYI, right now all of this (VDC, vCMP, vsys, VDOM, etc.) is modeled with Virtualization/Cluster in our environment. One role is the "host" part, and the other ones are the children. The host and the children are then present in the same cluster, and we can combine two children in a virtual-chassis way to show HA if we need to.
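As an illustration of that workaround (not an endorsement of it over the proposed model), a rough pynetbox sketch might look like this; the URL, token, and object names are placeholders, and the "Firewall Contexts" cluster type is assumed to exist already:

```python
# Sketch of the cluster-based workaround described above, using pynetbox.
# URL, token, and all object names are placeholders.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="...")

# One cluster represents the physical firewall (the "host" part).
ctype = nb.virtualization.cluster_types.get(name="Firewall Contexts")
cluster = nb.virtualization.clusters.create(name="fw-cluster-01", type=ctype.id)

# Attach the physical host device to the cluster.
host = nb.dcim.devices.get(name="asa-01")
host.cluster = cluster.id
host.save()

# Each context/VDOM/vsys becomes a "VM" child of the cluster.
nb.virtualization.virtual_machines.create(name="ctx-users", cluster=cluster.id)
```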
I'm looking at the VDC and interface UI on https://beta-demo.netbox.dev/dcim/devices/88/interfaces/ and didn't see an option to add the VDC as a column of the interface table, for when I want to look at the hardware device and see which interfaces are in which VDC(s). Am I missing something, or is this a good idea to add?
@mtinberg looks like @DanSheps just opened #10957 for this.
Currently the virtual device context is linked to the device in all aspects. I would have thought that a virtual device context would have its own role, config context, interfaces, etc. For example, the device would be a physical firewall and the virtual contexts VDOMs or vsys (let's call them vFWs). Each vFW can have its own distinct role (e.g., fwcampus or fwdc) or a specific config context linked to its role (and we could also talk about custom fields). In a network infrastructure, the virtual devices act like independent network devices. This modeling makes them second-class citizens of devices, which they are not, outside of the physical hosting part.
> Currently the virtual device context is linked to the device in all aspects. I would have thought that a virtual device context would have its own role, config context.
I agree with this, although one possible problem with config contexts is how they should be applied to a device with or without VDCs. On Fortinet firewalls without VDOMs "enabled" (implicitly used), the VDOM is called "root". Once VDOMs are used, the configuration is divided into "global" and then per-VDOM. So in the case of config contexts, should they only apply to the device if no VDC is defined, and only apply to the VDCs if they are defined? Without differentiating the two, you may encounter problems with config contexts.
Perhaps a new issue should be created to track this? @jmanteau
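To illustrate the question, one plausible answer (a sketch, not current NetBox behavior) is to layer the per-VDC context data over the device-level "global" data, much as NetBox already layers region/site/role contexts:

```python
# Sketch: render a VDC's effective config context by deep-merging its
# own data over the device-level ("global") data. Example data only.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


device_context = {"syslog": {"host": "10.0.0.1"}, "domain": "example.com"}
vdc_context = {"syslog": {"host": "10.0.0.2"}}  # per-VDOM override

print(deep_merge(device_context, vdc_context))
# {'syslog': {'host': '10.0.0.2'}, 'domain': 'example.com'}
```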
With config context eligibility, it may not be possible or desirable to model every different way that config inheritance may work on different platforms, but I agree that the VDC model should also have a foreign key to Device Roles (it already has Tenants and Tags; Platform is superfluous), and it should probably be able to have its own virtual interfaces attached to the VDC, while physical interfaces are shared from the chassis. I would hope that between device roles, tenants, and tags you'd be able to get the right config context data rendered for the device. What config element would you like to render that this doesn't work for?
For example, on https://beta-demo.netbox.dev/dcim/devices/1/interfaces/ I tried creating two Vlan100 interfaces attached to two different VDCs, but it refuses to create the second Vlan100 interface. In many ways VDCs are like VMs (e.g., Cisco Nexus VDC is implemented using LXC; ASA contexts may be too), except that they don't get to run different software than the main unit (AFAIK), don't get to migrate, have a cluster size of one, have all resources pinned with no overcommit or sharing, and have physical interfaces mapped directly to the "VMs" using hardware I/O ACLs for performance.
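One way the uniqueness rule could be relaxed is to enforce interface names per (device, VDC) rather than per device, so two Vlan100 interfaces can coexist as long as they never share a context. A hand-written validation sketch under that assumption (this is not NetBox's actual constraint logic):

```python
# Validation sketch: allow duplicate interface names on a device as long
# as the duplicates never share a VDC. Written against the hypothetical
# rule discussed here, not NetBox's actual constraint.
from django.core.exceptions import ValidationError


def validate_interface_name(interface):
    siblings = interface.device.interfaces.exclude(pk=interface.pk).filter(
        name=interface.name
    )
    for sibling in siblings:
        same_vdc = set(interface.vdcs.all()) & set(sibling.vdcs.all())
        neither_has_vdc = not interface.vdcs.exists() and not sibling.vdcs.exists()
        if same_vdc or neither_has_vdc:
            raise ValidationError(
                f"An interface named {interface.name!r} already exists in "
                f"the same device context on {interface.device}."
            )
```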
You're right about config context. I do not see a real use case (though I could be wrong about other people's potential instances) where, on a single device, it would differ at the VDC (VDOM) level.
As for interfaces with the same VLAN (but different IP addresses) on the same physical device: we do have multiple VDCs (VDOMs) on the same VLAN, e.g., using one VDC (VDOM) as a router (instead of a physically separate router), virtually connecting via "VDOM links" to each of the other VDCs (VDOMs). So without allowing multiple VLAN interfaces on the same device in different VDCs, this limitation fails to represent a valid real-world configuration.
Although... I am unsure why we are using tagged VLAN on a point-to-point virtual link.
@DanSheps @jeremystretch: It seems the current modeling choices have raised a lot of questions. Should we open a new issue to track all of this? I'm afraid the current architecture is too limiting compared to real use cases.
The tagged VLAN on the point-to-point virtual link is very similar to needing to tag "virtual" interfaces on devices to show which VLAN they are in. I think the whole idea of deciding which VLAN an interface is in needs to be re-engineered.
We shouldn't need to mark a virtual interface as 802.1Q access/tagged, for example. We should be able to set the VLAN the interface exists in without any tagged/untagged configuration.
OK, I found out why:

- For non-hardware-accelerated virtual links, the VLAN ID is not required information, as the link is explicitly defined.
- For hardware-accelerated (NPU, in Fortinet speak) virtual links, the VLAN ID is required, as the link is not otherwise explicitly defined.
In this case, I think we can get away with just defining two unique virtual interface names, and using a connection between the two, without needing to use VLAN IDs explicitly.
NetBox version
v3.0.10
Feature type
New functionality
Proposed functionality
For lack of a better term, it would be ideal to support the common model of a virtual device context. This should NOT model a specific vendor, but rather the common functionality of virtual device contexts, in that interfaces can be assigned to them (in addition to VRFs, if #7852 is implemented).
Use case
Platform-specific implementations to look at to understand the commonly shared functionality:
Database changes
No response
External dependencies
No response