Today’s container orchestration engines promote a model in which users request a specific quantity of resources (e.g. a number of vCPUs), a quantity range (e.g. a min/max number of vCPUs), or nothing at all, to support the appropriate placement of workloads. This applies in the Cloud and at the Edge using Kubernetes or K3s (although the concept is not limited to Kubernetes-based systems). The end state of the resource allocation is declared, but that declaration is effectively an imperative definition of what resources are required. This model has proven effective, but it comes with a number of challenges:
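As a minimal illustration of this imperative style in Kubernetes, the resource quantities (and the min/max range via requests and limits) are stated directly in the Pod spec; the workload and image names below are just placeholders:

```yaml
# Imperative style: the user states the resource quantities up front.
apiVersion: v1
kind: Pod
metadata:
  name: example-workload            # placeholder name
spec:
  containers:
    - name: app
      image: example.com/app:latest # placeholder image
      resources:
        requests:
          cpu: "2"                  # exactly 2 vCPUs requested
          memory: "4Gi"
        limits:
          cpu: "4"                  # upper bound (the min/max style range)
          memory: "8Gi"
```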
This project proposes a new way to do orchestration: moving from an imperative model to an intent-driven model, in which users express their intents in the form of objectives (e.g. required latency, throughput, or reliability targets) and the orchestration stack itself determines what resources in the infrastructure are required to fulfill those objectives. This approach continues to benefit from community investments in scheduling (determining when & where to place workloads) and is augmented with a continuously running planning loop that determines what to configure in the system, and how.
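Purely as an illustration of the idea, an intent expressed as objectives could look roughly like the sketch below. The exact CRD schema is defined by the manifests in this repository (see artefacts/), so treat the field and profile names here as placeholders rather than the authoritative format:

```yaml
# Illustrative sketch only; field names are placeholders, not the
# authoritative schema shipped in artefacts/intents_crds_v1alpha1.yaml.
apiVersion: ido.intel.com/v1alpha1
kind: Intent
metadata:
  name: example-intent
spec:
  targetRef:
    kind: Deployment
    name: default/example-deployment  # workload whose QoS is managed
  objectives:
    - name: p99-latency-compliance    # what to achieve, not what to allocate
      value: 100                      # e.g. a target of 100 ms
      measuredBy: default/p99latency  # profile providing the measurement
```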
While this repository holds the planning component implementation, note that it works closely with schedulers as well as the observability and, potentially, analytics stacks. A key task is to feed those schedulers the right information so they can make their placement decisions.
The planning component is essential for enabling Intent Driven Orchestration (IDO), as it breaks down higher-level objectives (e.g. a latency compliance target) into dynamic, actionable plans (e.g. policies for platform resource allocation, dynamic vertical & horizontal scaling, etc.). This enables hierarchically controlled systems in which Service Level Objectives (SLOs) are broken down into finer-grained goal settings for the platform. A key input to the planning component, for determining the right set of actions, is the set of models that describe workload behaviour and the platform's effect on the associated Quality of Service (QoS).
The initial goal is to focus on managing the QoS of a set of instances of a workload. Subsequently, the goals are expected to shift towards End-to-End (E2E) management of QoS parameters in multi-tenant environments with mixed criticality. It is also a goal that the planning component be easily extensible, allowing administrators to swap functionality in and out through a plugin model. The architecture is intended to be extensible enough to support proactive planning and coordination between planners to fulfill overarching intents. It is expected that the imperative model and an Intent Driven Orchestration model will coexist.
To see the benefit of this model, please review the deployment and associated objective manifest files:
Step 1) add the CRDs:
$ k apply -f artefacts/intents_crds_v1alpha1.yaml
Step 2) deploy the planner (make sure to adapt the configs to your environment):
$ k create ns ido
$ k apply -n ido -f artefacts/deploy/manifest.yaml
Step 3) deploy the actuators of interest using:
$ k apply -n ido -f plugins/<name>/<name>.yaml
These steps should be followed by setting up your default profiles (if needed).
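The shape of such a profile is determined by the CRDs applied in step 1. Purely as an illustration (the kind, field names, and query below are assumptions, not the shipped schema), a latency profile backed by a monitoring query might look like:

```yaml
# Illustrative sketch only; a profile describing how an objective is
# measured. The actual schema comes from the CRDs applied in step 1.
apiVersion: ido.intel.com/v1alpha1
kind: KPIProfile
metadata:
  name: p99latency
  namespace: default
spec:
  kpiType: latency                         # kind of KPI this profile measures
  description: "P99 latency of the workload in milliseconds."
  query: "histogram_quantile(0.99, ...)"   # placeholder monitoring query
```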
We recommend the usage of a service mesh such as Linkerd or Istio to ensure robust authentication, encryption, and monitoring capabilities for the subcomponents of the planning framework themselves. After creating the namespace, enable auto-injection; for Linkerd do:
$ k annotate ns ido linkerd.io/inject=enabled
or for Istio use:
$ k label namespace ido istio-injection=enabled --overwrite
For more information on running and configuring the planner, see the getting started guide.
There are three key packages enabling the Intent Driven Orchestration model; documentation and implementation notes for these components can be found here:
Furthermore, notes on the pluggability can be found here and general design notes can be found here.
Report a bug by filing a new issue.
Contribute by opening a pull request. Please also see CONTRIBUTING for more information.
Learn about pull requests.
Reporting a Potential Security Vulnerability: If you have discovered a potential security vulnerability in Intent-Driven Orchestration, please send an e-mail to secure@intel.com. For issues related to Intel Products, please visit the Intel Security Center.
It is important to include the following details:
Vulnerability information is extremely sensitive. Please encrypt all security vulnerability reports using our PGP key.
A member of the Intel Product Security Team will review your e-mail and contact you to collaborate on resolving the issue. For more information on how Intel works to resolve security issues, see: vulnerability handling guidelines.