The SR-IOV Network Operator is designed to help the user provision and configure the SR-IOV CNI plugin and device plugin in an OpenShift cluster.
SR-IOV networking is an optional feature of an OpenShift cluster. To make it work, several components must be provisioned and configured accordingly. It makes sense to have one operator coordinate those components in one place, instead of having them managed by different operators, and to hide the complexity behind a simple user interface for enabling SR-IOV.
For more detail on installing this operator, refer to the quick-start guide.
The SR-IOV network operator introduces the following new CRDs:
SriovNetwork
OVSNetwork
SriovNetworkNodeState
SriovNetworkNodePolicy
A SriovNetwork custom resource represents a layer-2 broadcast domain to which some SR-IOV devices are attached. It is primarily used to generate a NetworkAttachmentDefinition CR with an SR-IOV CNI plugin configuration.
The SriovNetwork CR also contains a 'resourceName', which is aligned with the 'resourceName' of the SR-IOV device plugin. One SriovNetwork object maps to one 'resourceName', but one 'resourceName' can be shared by different SriovNetwork CRs.
This CR should be managed by the cluster admin. Here is an example:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-network
  namespace: example-namespace
spec:
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [{
        "dst": "0.0.0.0/0"
      }],
      "gateway": "10.56.217.1"
    }
  vlan: 0
  resourceName: intelnics
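The generated NetworkAttachmentDefinition can then be referenced from a workload. The following is a minimal sketch of such a pod (the pod name, container name, and image are illustrative, and it assumes the operator's default openshift.io/ resource prefix; when the network-resources-injector webhook is enabled, the resource request can be injected automatically):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod                     # illustrative name
  namespace: example-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: example-network
spec:
  containers:
  - name: app                          # illustrative container
    image: quay.io/example/app:latest  # placeholder image
    resources:
      requests:
        openshift.io/intelnics: "1"    # assumes the default openshift.io/ resource prefix
      limits:
        openshift.io/intelnics: "1"
```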
It is possible to add capabilities to the device configured via SR-IOV by configuring optional metaplugins. To do this, the metaPlugins field must contain an array of one or more additional configurations used to build a network configuration list, as in the following example:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-network
  namespace: example-namespace
spec:
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [{
        "dst": "0.0.0.0/0"
      }],
      "gateway": "10.56.217.1"
    }
  vlan: 0
  resourceName: intelnics
  metaPlugins: |
    {
      "type": "tuning",
      "sysctl": {
        "net.core.somaxconn": "500"
      }
    },
    {
      "type": "vrf",
      "vrfname": "red"
    }
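With metaPlugins set, the operator renders a CNI configuration list that chains the SR-IOV plugin with the extra plugins. The following is a rough sketch of the resulting NetworkAttachmentDefinition config for the example above (exact field names and defaults may differ between versions; treat this as an illustration, not the operator's literal output):

```json
{
  "cniVersion": "0.3.1",
  "name": "example-network",
  "plugins": [
    {
      "type": "sriov",
      "vlan": 0,
      "ipam": { "type": "host-local", "subnet": "10.56.217.0/24" }
    },
    { "type": "tuning", "sysctl": { "net.core.somaxconn": "500" } },
    { "type": "vrf", "vrfname": "red" }
  ]
}
```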
An OVSNetwork custom resource represents a layer-2 broadcast domain attached to an Open vSwitch that works in hardware-offloading mode. It is primarily used to generate a NetworkAttachmentDefinition CR with an OVS CNI plugin configuration.
The OVSNetwork CR also contains a 'resourceName', which is aligned with the 'resourceName' of the SR-IOV device plugin. One OVSNetwork object maps to one 'resourceName', but one 'resourceName' can be shared by different OVSNetwork CRs. It is expected that 'resourceName' contains the name of the resource pool which holds the Virtual Functions of a NIC in switchdev mode.
A Physical Function of the NIC should be attached to an OVS bridge before any workload that uses the OVSNetwork starts.
Example:
apiVersion: sriovnetwork.openshift.io/v1
kind: OVSNetwork
metadata:
  name: example-network
  namespace: example-namespace
spec:
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [{
        "dst": "0.0.0.0/0"
      }],
      "gateway": "10.56.217.1"
    }
  vlan: 100
  bridge: my-bridge
  mtu: 2500
  resourceName: switchdevnics
SriovNetworkNodeState is the custom resource representing the SR-IOV interface states of each host. It should only be managed by the operator itself.
Its spec is rendered by the sriov-policy-controller and consumed by the sriov-config-daemon. The sriov-config-daemon is responsible for updating the 'status' field to reflect the latest state; this information can be used as input to create SriovNetworkNodePolicy CRs.
An example of SriovNetworkNodeState CR:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodeState
metadata:
  name: worker-node-1
  namespace: sriov-network-operator
spec:
  interfaces:
  - deviceType: vfio-pci
    mtu: 1500
    numVfs: 4
    pciAddress: 0000:86:00.0
status:
  interfaces:
  - deviceID: "1583"
    driver: i40e
    mtu: 1500
    numVfs: 4
    pciAddress: 0000:86:00.0
    maxVfs: 64
    vendor: "8086"
    Vfs:
    - deviceID: 154c
      driver: vfio-pci
      pciAddress: 0000:86:02.0
      vendor: "8086"
    - deviceID: 154c
      driver: vfio-pci
      pciAddress: 0000:86:02.1
      vendor: "8086"
    - deviceID: 154c
      driver: vfio-pci
      pciAddress: 0000:86:02.2
      vendor: "8086"
    - deviceID: 154c
      driver: vfio-pci
      pciAddress: 0000:86:02.3
      vendor: "8086"
  - deviceID: "1583"
    driver: i40e
    mtu: 1500
    pciAddress: 0000:86:00.1
    maxVfs: 64
    vendor: "8086"
From this example, in the status field, the user can find out that there are 2 SR-IOV capable NICs on node 'worker-node-1'; in the spec field, the user can learn what configuration is expected to be generated from the combination of SriovNetworkNodePolicy CRs. In a virtual deployment, a single VF will be associated with each device.
This CRD is the key CRD of the SR-IOV network operator. It should be managed by the cluster admin to instruct the operator how to provision and configure SR-IOV devices on selected nodes.
An example of SriovNetworkNodePolicy CR:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-1
  namespace: sriov-network-operator
spec:
  deviceType: vfio-pci
  mtu: 1500
  nicSelector:
    deviceID: "1583"
    rootDevices:
    - 0000:86:00.0
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  priority: 90
  resourceName: intelnics
In this example, the user selected NICs from vendor '8086' (Intel) with device ID '1583' (XL710 for 40GbE), on nodes labeled with 'network-sriov.capable' equal to 'true'. For those PFs, the operator creates 4 VFs each, sets the MTU to 1500, and loads the vfio-pci driver on those virtual functions.
When multiple SriovNetworkNodePolicy CRs are present, the priority field (0 is the highest priority) is used to resolve any conflicts. Conflicts occur only when the same PF is referenced by multiple policies. The final desired configuration is saved in SriovNetworkNodeState.spec.interfaces.
Policies are processed in order of priority (lowest first), then by name (ascending from 'a'). Policies with the same priority or non-overlapping VF groups (when #-notation is used in the pfName field) are merged; otherwise only the highest-priority policy is applied. In case of same-priority policies with overlapping VF groups, only the last processed policy is applied.
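The ordering rules above can be sketched as follows. This is an illustrative model of the described behavior, not the operator's actual code; the Policy type and helper names are hypothetical, and #-notation VF groups are omitted for simplicity:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    priority: int  # 0 is the highest priority
    pf_name: str   # PF selected by this policy

def processing_order(policies):
    # Lower-precedence policies are processed first: a larger priority value
    # means lower precedence (0 is highest), ties are broken by name ascending,
    # so the policy that should win a conflict is processed last.
    return sorted(policies, key=lambda p: (-p.priority, p.name))

def applied_policy(policies, pf):
    # For fully overlapping policies on one PF, the last processed policy wins.
    winner = None
    for p in processing_order(policies):
        if p.pf_name == pf:
            winner = p
    return winner
```

For example, a priority-10 policy overrides a priority-90 policy on the same PF, and of two same-priority policies the one later in name order wins.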
When using #-notation to define a VF group, no actions are taken on virtual functions that are not mentioned in any policy (e.g. if a policy defines a vfio-pci device group for a device, when it is deleted the VFs are not reset to the default driver).
When externallyManaged is requested in a policy, the operator skips the virtual function creation. It only binds the virtual functions to the requested driver and exposes them via the device plugin. Another difference when this field is set is that when the policy is removed, the operator will not remove the virtual functions.
Note: this means the user must create the virtual functions before applying the policy, or the webhook will reject the policy creation.
It's possible to use something like kubernetes-nmstate or just a simple systemd unit to create the virtual functions on boot.
This feature was created to support deployments where the user wants to use some of the virtual functions for host communication, such as a storage network or out-of-band management, and where the virtual functions must exist on boot, not only after the operator and config-daemon are running.
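As a sketch of the systemd approach mentioned above, a oneshot unit can write to the standard Linux sysfs knob sriov_numvfs at boot. The interface name ens1f0 and the VF count are illustrative placeholders; adjust them for the actual PF:

```ini
[Unit]
Description=Create SR-IOV virtual functions on ens1f0 (illustrative)
After=network-pre.target
Before=network.target

[Service]
Type=oneshot
# Create 4 VFs on a hypothetical PF; adjust the interface name and count
ExecStart=/bin/sh -c 'echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs'

[Install]
WantedBy=multi-user.target
```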
It is possible to disable SR-IOV network operator config daemon plugins in case their operation is not needed or is undesirable.
As an example, some plugins perform vendor-specific firmware configuration to enable SR-IOV (e.g. the mellanox plugin). Certain deployment environments may prefer to perform such configuration once during node provisioning, while ensuring the configuration will be compatible with any SR-IOV network node policy defined for the particular environment. This will reduce or completely eliminate the need to reboot nodes during SR-IOV configuration by the operator.
This can be done by setting spec.disablePlugins in the default SriovOperatorConfig CR to the list of plugins to disable.
Example:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: sriov-network-operator
spec:
  ...
  disablePlugins:
    - mellanox
  ...
NOTE: Currently only the mellanox plugin can be disabled.
It is possible to drain more than one node at a time using this operator.
The configuration is done via the SriovNetworkPoolConfig CR: a node selector chooses the nodes in the pool, and maxUnavailable sets how many nodes from the pool the operator can drain in parallel. maxUnavailable can be a number or a percentage.
NOTE: Every node can only be part of one pool; if a node is selected by more than one pool, it will not be drained.
NOTE: If a node is not part of any pool, it will have a default configuration of maxUnavailable 1.
Example:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkPoolConfig
metadata:
  name: worker
  namespace: sriov-network-operator
spec:
  maxUnavailable: 2
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
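The two forms of maxUnavailable can be sketched with a small helper (a hypothetical function; it assumes percentage values round down, as with similar Kubernetes maxUnavailable fields, so verify the operator's exact rounding behavior before relying on it):

```python
def resolve_max_unavailable(value, pool_size):
    """Resolve maxUnavailable (an int or a percentage string like "25%")
    against the number of nodes selected into the pool.

    Assumption: percentages round down; check the operator's documentation
    for the exact behavior.
    """
    if isinstance(value, str) and value.endswith("%"):
        return pool_size * int(value[:-1]) // 100
    return int(value)
```

With a 10-node pool, maxUnavailable of "25%" would allow draining 2 nodes at a time under this assumption.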
Feature gates are used to enable or disable specific features in the operator.
NOTE: As features mature and graduate to stable status, default settings may change, and feature gates might be removed in future releases. Keep this in mind when configuring feature gates and ensure your environment is compatible with any updates.
Parallel NIC Configuration (parallelNicConfig)
Resource Injector Match Condition (resourceInjectorMatchCondition): uses the MatchConditions feature introduced in Kubernetes 1.28. This ensures the webhook only targets pods with the k8s.v1.cni.cncf.io/networks annotation, improving reliability without affecting other pods.
Metrics Exporter (metricsExporter)
Manage Software Bridges (manageSoftwareBridges)
Mellanox Firmware Reset (mellanoxFirmwareReset): runs mstfwreset before a system reboot. This feature is specific to Mellanox network devices and is used to ensure that the firmware is properly reset during system maintenance.
To enable a feature gate, add it to your configuration file or command line with the desired state. For example, to enable the resourceInjectorMatchCondition feature gate, you would specify:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: sriov-network-operator
spec:
  featureGates:
    resourceInjectorMatchCondition: true
  ...
This operator is split into 2 components: a controller and a per-node config daemon.
The controller is responsible for reading the SriovNetwork and SriovNetworkNodePolicy CRs and rendering the desired spec of the SriovNetworkNodeState CR for each selected node.
The sriov-config-daemon runs on each node; it is responsible for applying the spec of that node's SriovNetworkNodeState CR to the host and updating its status.