Fully Integrated Systems
In this chapter, we will start looking at the orchestration and virtual management layer on top of the network control layer. Virtual management refers to an orchestration platform that handles tasks such as provisioning and spinning up virtual machines or containers to perform specific network and security functions.
Orchestration and virtual management platforms interact with the network control layer to create and provision networking requirements such as policies, traffic redirection, and service insertion for the virtual workloads that they create.
The virtual management layer we are referring to here is mainly intended to create workloads related to networking and security. It is not meant for general workloads, such as spinning up virtual servers to run other enterprise applications (e.g., a database for a human resource management system).
In this chapter, we will also review the ONAP (Open Network Automation Platform) orchestration platform, as well as some other open source orchestration projects that interact with ONAP, such as CORD, Trellis and OSC (Open Security Controller). In addition, we will take a look at OPNFV, MANO, and Akraino.
Open Network Automation Platform (ONAP)
According to the ONAP website, "ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management".
Hosted by The Linux Foundation, ONAP is an open source project that started in March of 2017 as a response to the need of telcos and service providers to deliver on-demand network services leveraging the existing infrastructure. It is in fact a merger of two other projects: Open-O (China Telecom and ZTE) and Open ECOMP (AT&T).
ONAP - Quick Summary

Name | Open Network Automation Platform (ONAP) |
---|---|
By | The Linux Foundation |
Where it runs | On separate hosts, recommended to run on Kubernetes or OpenStack |
What it does | ONAP is a platform that orchestrates the lifecycle of virtual network services in a software defined networking environment |
Features | Open source; creates services that consist of VNFs and policies; runs and executes those services; provides closed-loop automation. |
What it can do out-of-the-box | ONAP relies on many other components and platforms; out-of-the-box, it cannot deliver real functions without them. There are multiple use cases for ONAP, such as uCPE, edge networking services, Voice-over-LTE, virtual firewall, virtual DHCP server, etc. You can deploy a full or minimal ONAP environment using OpenStack. A full ONAP environment requires numerous virtual machines and gigabytes of RAM. |
ONAP can be deployed with Kubernetes, OpenStack, or manually on your Linux flavor.
Key things to remember about ONAP:

It's a platform to orchestrate the lifecycle of VNFs in a software defined environment
It consists of multiple applications that integrate with each other
It is divided into two main functional frameworks: design time and run time.
ONAP Architecture

ONAP has a complex architecture, with multiple applications and components that are integrated with each other.
ONAP is an orchestration platform for automation, service orchestration, and network orchestration, and it also has capabilities to orchestrate other areas, such as security, the physical network, and network function virtualization.
ONAP’s main target market is telcos and service providers. However, cloud providers, datacenter providers, and enterprises can also leverage ONAP for complete lifecycle automation of the services they want to automate.
ONAP can help service providers accelerate 5G by creating a dynamic, open standard environment for new 5G software services, such as network slicing, network function virtualization, edge computing, and service function chaining.
ONAP Design Time Environment
The Design Time environment is a development framework with all the tools and repositories that administrators can use to create a service. A service consists of multiple elements and artifacts; the most important one is the VNF (Virtual Network Function), which should be packaged to run on different virtual environments and hypervisors.
Administrators create services and products using the Design Time environment; once these are validated, they are transferred to the Service Catalog database of the Run Time. Run Time components retrieve the service details from the Service Catalog when required, in order to deploy the VNF in the virtual infrastructure.
The Design Time framework includes:
Service Design and Creation (SDC)
To define system assets and their policies.
VNF Software Development Kit (VNFSDK) and VNF Validation Program (VVP)
For packaging and validating VNFs.
Policy Creation (POLICY)
To define policies that need to be maintained or enforced.
Closed Loop Automation Management Platform (CLAMP)
To design and manage closed control loops.
Optimization Framework (OOF)
To optimize applications and services.
ONAP Run Time Environment
ONAP’s Run Time environment executes the services and policies from the service catalog that were created and distributed by the Design Time framework. The Run Time invokes the services when they are requested by external parties (via the ONAP CLI, web GUI, or API calls), or it may be triggered by an internal process within the Run Time environment.
Following are the main components of the Run Time environment in ONAP:
Service Orchestrator (SO)
The Service Orchestrator is an automation engine in the ONAP Run Time environment. It processes and executes a list of tasks (i.e. a runbook) related to applying a service policy during the creation of the service, as well as altering and changing the service parameters. SO is also able to communicate with OpenStack.
SO processes a runbook that may include tasks such as creating a virtual machine in OpenStack, creating virtual networks and assigning IP addresses from IPAM, applying security groups, creating service insertion or service chaining, etc.
SO processing is very high level; it uses multiple components, drivers and southbound protocols to execute the work. SO has a full end-to-end view of the virtual infrastructure, network and applications.
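As a rough illustration of the kind of infrastructure tasks such a runbook delegates to OpenStack, the sketch below uses the openstacksdk Python library to boot a VM and attach a security group. This is a minimal sketch, not ONAP's actual SO code; the cloud name, image, flavor, network, and security group names are placeholders assumed to exist already.

```python
# Minimal sketch of runbook-style tasks executed against OpenStack with
# openstacksdk. The cloud name, image, flavor, network and security group
# names are placeholders assumed to exist; they are not defined by ONAP.
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials come from clouds.yaml

image = conn.compute.find_image("vnf-base-image")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("svc-net")

# Boot the virtual machine that will host the network function
server = conn.compute.create_server(
    name="vnf-instance-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Attach an existing security group to the new instance
secgroup = conn.network.find_security_group("svc-secgroup")
if secgroup:
    conn.compute.add_security_group_to_server(server, secgroup)
```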
Software Defined Network Controller (SDNC)
It is responsible for executing the network configuration.
Application Controller (APPC)
It is responsible for executing and configuring the Virtual Network Functions (VNF).
Virtual Function Controller (VF-C)
It is responsible for the lifecycle management of the Virtual Network Functions (VNF) which are run by the VNF manager.
Active and Available Inventory (A&AI)
It is responsible for the real-time view of system resources, products and their relationships.
ONAP: Closed Loop Automation

Closed Loop Automation is one of the key features of ONAP. This module proactively responds to network and service conditions without any human interaction. The Closed Loop Automation components use the other components and plugins to detect problems in the network and identify the appropriate remediation. To take action, Closed Loop Automation notifies the Service Orchestrator or one of its controllers to execute the change. Closed Loop Automation can also generate alerts for operators instead of automatically fixing the issues.
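Conceptually, a closed loop is a monitor/analyze/act cycle. The sketch below is a purely illustrative Python rendition of that cycle; the metric source, threshold, and remediation functions are hypothetical placeholders standing in for ONAP's analytics, policy, and controller components, not real ONAP APIs.

```python
# Illustrative closed-loop cycle: collect a metric, decide, then act or alert.
# All functions here are hypothetical placeholders, not ONAP components.
import random
import time

LOSS_THRESHOLD = 0.05    # e.g. remediate above 5% packet loss
AUTO_REMEDIATE = True    # False would only raise alerts for operators

def get_packet_loss(vnf_id: str) -> float:
    """Placeholder for a monitoring/analytics query (returns a random value here)."""
    return random.random() * 0.1

def restart_vnf(vnf_id: str) -> None:
    """Placeholder for a remediation request to the orchestrator or a controller."""
    print(f"remediating {vnf_id}: requesting VNF restart")

def notify_operator(vnf_id: str, loss: float) -> None:
    """Placeholder for alerting instead of acting automatically."""
    print(f"ALERT: {vnf_id} packet loss at {loss:.1%}")

def closed_loop(vnf_id: str, iterations: int = 3, poll_interval: float = 1.0) -> None:
    for _ in range(iterations):              # bounded loop just for the example
        loss = get_packet_loss(vnf_id)       # collect
        if loss > LOSS_THRESHOLD:            # analyze against policy
            if AUTO_REMEDIATE:
                restart_vnf(vnf_id)          # act
            else:
                notify_operator(vnf_id, loss)
        time.sleep(poll_interval)

closed_loop("vFW-01")
```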
ONAP CLI
ONAP includes a command line interface (CLI), a web GUI, and APIs. The CLI can be used by operators, as well as by automation tools, to automate communication with ONAP. The ONAP CLI provides commands for the following features:
ONAP microservice discovery
ONAP external system and VNF cloud onboarding
ONAP customer and subscription management
ONAP product and service onboarding
ONAP network service lifecycle management.
Consider how ONAP would deliver a virtual firewall service on a uCPE:

An administrator creates a firewall service in the ONAP Design Time environment. The service includes the virtual firewall VM files that need to run on the uCPE hypervisor (let's say the uCPE is a compute node that is part of the ISP's OpenStack), as well as network parameters (the LAN/Trust and WAN/Untrust interfaces), IP addresses, firewall policies, NAT, DHCP server, DNS proxy configuration, and the network configuration that dictates which packets from the subscriber must go through the firewall before going out.
The Design Time environment validates the service, and it is transferred to the service catalog.
A request comes in from the ONAP CLI to deploy the virtual firewall for Subscriber-1.
The Run Time environment fetches the service recipe from the service catalog and starts executing the process.
The Run Time environment uses different applications and plugins to create the virtual machine on the Subscriber-1 uCPE and to create its network. After creating the firewall VM, it calls the VNF manager's API (the firewall manager software, in our example) to apply policies on the firewall that allow traffic out from the subscriber and perform source NAT.
The Run Time environment uses its SDN controller to communicate with the underlay network and apply policies that route the return traffic from the underlay network to the virtual firewall.
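Expressed as pseudocode, the walkthrough above follows roughly the pattern sketched below. Every function here is a hypothetical placeholder that only prints which ONAP component would perform the step; it is not a real ONAP API.

```python
# Hypothetical, simplified mirror of the vFW-on-uCPE walkthrough above. The
# functions only print which ONAP component would perform each step; none of
# this is a real ONAP API.
from dataclasses import dataclass, field

@dataclass
class ServiceRecipe:                 # stand-in for a validated catalog entry
    name: str
    image: str
    policies: list = field(default_factory=list)

def fetch_from_service_catalog(name: str) -> ServiceRecipe:
    # Run Time fetches the validated recipe from the Service Catalog
    return ServiceRecipe(name=name, image="vfw-image",
                         policies=["allow-outbound", "source-nat"])

def deploy_vfw_for_subscriber(subscriber_id: str) -> None:
    recipe = fetch_from_service_catalog("vFirewall")
    print(f"SO: create VM from {recipe.image} on the {subscriber_id} uCPE (OpenStack plugin)")
    print("SO: attach LAN/Trust and WAN/Untrust networks and assign IP addresses (Neutron)")
    print(f"APPC: push firewall policies {recipe.policies} via the VNF manager API")
    print(f"SDNC: program the underlay so {subscriber_id} return traffic reaches the vFW")

deploy_vfw_for_subscriber("Subscriber-1")
```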
As a super orchestrator, ONAP integrates with multiple virtual infrastructure management platforms, network management platforms, and control environments. Next, we will explore several such platforms used by ONAP to perform full orchestration: OpenStack, CORD, Trellis.
ONAP and OpenStack
OpenStack is a cloud orchestration platform: a virtual infrastructure management platform that manages compute, storage and virtual networking. You can compare OpenStack with the VMware suite of products, such as vSphere, NSX, and other components. OpenStack orchestrates compute, storage and networking to allow users to create virtual machines, create templates, migrate them between hosts, apply security groups, etc.
ONAP and OpenStack
The main components of OpenStack are:
Heat (Orchestration)
Nova (Manages hosts and hypervisors)
Neutron (Virtual networking)
Cinder and Swift (Manage block and object level storage, respectively)
Horizon (Dashboard).
OpenStack Neutron is the networking component of OpenStack. Neutron is a network virtualization component that creates virtual networks between virtual workloads in the OpenStack environment. Neutron uses VXLAN as the default encapsulation mechanism to send traffic between the tenants (projects). Neutron can integrate with other SDN network controllers using a plugin.
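For context, creating an isolated tenant network and subnet through Neutron looks like the short openstacksdk sketch below; the cloud name, network name, and CIDR are placeholders, and the VXLAN encapsulation is applied by Neutron itself rather than requested by the caller.

```python
# Minimal openstacksdk sketch: create a tenant network and subnet via Neutron.
# The cloud name, network name and CIDR are placeholders; Neutron applies its
# default encapsulation (typically VXLAN) between hosts on its own.
import openstack

conn = openstack.connect(cloud="mycloud")

net = conn.network.create_network(name="tenant-net-1")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="tenant-subnet-1",
    ip_version=4,
    cidr="192.168.10.0/24",
)
print(f"created network {net.name} ({net.id}) with subnet {subnet.cidr}")
```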
In order to create and start Virtual Network Functions (VNFs), as well as to apply the virtual network settings (to connect the new VNF to the virtual networks within OpenStack), ONAP has plugins and drivers to communicate with OpenStack. This helps users such as service providers reuse their existing OpenStack investments and infrastructure to deploy and manage the new VNFs. In addition, we need to remember that ONAP itself can be installed on OpenStack (or Kubernetes).
ONAP and CORD
CORD (Central Office Re-architected as a Datacenter) is a bundled platform combining a virtualization platform (currently, OpenStack) and a network control layer (ONOS), orchestrated by CORD XOS. CORD has multiple other integrated components as well. CORD's aim is to transform service providers' and telcos' edge networks into an agile, flexible model, ready to provide next generation services. The edge network normally refers to the location in the service provider network that connects to consumers and subscribers, such as the central office for telcos and the head-end for cable operators; this is where operators connect to their customers.
CORD is based on a completely integrated bundle of multiple open source projects. It aims to deliver an open, cloud-native, programmable and agile platform for service providers to leverage the SDN and NFV technologies and use this platform to build and deliver virtual network services to their consumers. CORD is built on commodity hardware and the latest designs of OCP (Open Compute Project) and cloud initiatives.
The CORD project has applications in several markets: residential, enterprise, and mobile.
M-CORD (Mobile CORD)
Mobile CORD (M-CORD) targets mobile radio access and mobile core platforms. M-CORD provides a framework for carriers deploying 5G mobile networks. It includes virtualization of Radio Access Network (RAN), as well as Evolved Packet Core (EPC), the mobile core. By virtualizing the RAN and EPC, service providers can leverage the SDN and NFV technologies to create and build mobile edge applications and services.
M-CORD disaggregates the legacy RAN and EPC, allowing service providers to use virtualized infrastructure to build an agile, software defined platform that supports new cloud and 5G initiatives.
The Role of M-CORD in a Mobile Service Provider Infrastructure
R-CORD (Residential CORD)
Residential CORD, or R-CORD, aims to leverage the SDN and NFV technologies to evolve the broadband services in the residential market. R-CORD doesn’t change the physical access method for subscribers. Instead, it utilizes the existing invested infrastructure based on different access technologies such as GPON, EPON, DOCSIS, etc.
Using its agile platform, R-CORD virtualizes the network services for subscribers, allowing service providers to implement distributed network edge services, such as CDN, DHCP, DNS, AAA (Authentication, Authorization, Accounting), VOD (Video On Demand), etc., as virtual machines or containers. R-CORD uses open source technologies such as OpenStack and ONOS to deliver services.
VOLTHA (Virtual OLT Hardware Abstraction), another open source project, abstracts the specialized concentrator access equipment (such as OLT, GPON, DOCSIS, and head-end units) to make it OpenFlow-capable, so that it can be managed via OpenFlow and standard SDN controllers. This allows R-CORD to control this special purpose equipment.
R-CORD Architecture
E-CORD (Enterprise CORD)
Enterprise CORD, or E-CORD, aims to evolve the legacy WAN services that service providers deliver to enterprise customers. E-CORD uses commodity hardware to create an infrastructure for running virtual network functions at different locations, such as customer premises (as a universal CPE), service provider central offices, etc. Using E-CORD, service providers will be able to provide agile and on-demand services, such as:
VPN
Internet Access
Firewall and border protection
CDN (Content Delivery Network)
Network core functions such as DNS, DHCP, etc.
SD-WAN
Traffic optimization and enhanced QoS
Zero Touch Provisioning of commodity hardware at customer premises and sites
Correctly measured monitoring services to deliver outcome-based SLAs and KPIs
A platform that enables creation and delivery of innovative services.
E-CORD Touch Points in a Service Provider Network
ONAP Communication with CORD
ONAP can use CORD as an underlay orchestrator component. ONAP and CORD both have open APIs that can be used for integration. Using CORD as an underlay for ONAP will reduce the workload on ONAP and its Service Orchestrator (SO), as the SO will be able to request CORD to build a whole infrastructure and service policies with a minimum number of tasks to be executed by ONAP’s SO.
The following diagram illustrates a high-level relationship between ONAP, CORD and other cloud platforms such as OpenStack.
High-Level Relationship between ONAP and Cloud Platforms
Trellis
Trellis is another open source project created and managed by the ONF (Open Networking Foundation). Trellis is a networking project designed to deliver a standard L2/L3 leaf-spine switching fabric for datacenters leveraging the ONOS controller. Trellis uses white box bare metal switches and open source software to create the underlying switching and packet forwarding hardware for a datacenter, and uses the ONOS SDN controller to control the switches in the fabric. Trellis runs agent software on the white box switches to allow ONOS to manage them; there is no routing protocol or any networking software stack running on the white box switches in the fabric. Trellis also includes multiple applications that run on ONOS and are used to manage the fabric, as well as provide monitoring, giving a detailed view of the datacenter network.
Leaf-Spine Design in a Datacenter
Apart from managing the physical underlay switches, Trellis also has capabilities to create and manage overlay networks in order to connect the isolated tenant networks on top of the underlay network. For external connectivity, Trellis provides a virtual router solution which makes the whole fabric act as a data plane for its distributed virtual router.
Trellis is also used as a component in the CORD platform, as a network infrastructure to provide connectivity to the CORD POD. However, Trellis can be used independently from CORD as a network platform for a datacenter.
High-Level Architecture of Trellis in a Datacenter
Trellis Architecture
The main components of Trellis are:
ONOS SDN controller.
A set of Trellis applications that run on ONOS.
Trellis agent software that runs on top of ONL (Open Network Linux) on a white box switch. This agent is actually an Indigo OpenFlow agent that communicates with the switch silicon SDK or the driver.
Trellis management UI and northbound APIs.
Trellis Components on a White Box Switch
ONAP Integration with Trellis

ONAP can control Trellis using plugins and APIs in order to execute the network changes, such as traffic redirection, required for the Virtual Network Functions (VNFs) it manages and automates.
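ONOS exposes a northbound REST API that such plugins and external orchestrators can call. As a small sketch, the query below lists the fabric devices ONOS currently manages; the controller address is a placeholder and the default onos/rocks credentials are a lab-setup assumption.

```python
# Query the ONOS northbound REST API for the devices (fabric switches) it manages.
# The controller address is a placeholder and the default onos/rocks credentials
# are a lab-setup assumption; adjust both for a real deployment.
import requests

ONOS_URL = "http://192.0.2.10:8181/onos/v1"   # placeholder controller address
AUTH = ("onos", "rocks")                      # ONOS default credentials

resp = requests.get(f"{ONOS_URL}/devices", auth=AUTH, timeout=10)
resp.raise_for_status()

for device in resp.json().get("devices", []):
    print(device["id"], device.get("available"), device.get("driver"))
```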
Open Source MANO
OSM, or Open Source MANO (Management And Orchestration), is an open source project hosted by ETSI for the development of NFV Management and Orchestration, aligned with the ETSI NFV architecture. It has an operator-led community and offers a production-ready open source MANO stack that meets the requirements of commercial VNFs.
OSM didn’t start from scratch, but as an integration of:
OpenMANO as the seed Resource Orchestrator
RIFT.ware as the seed Service Orchestrator
Juju as an external reference for VNF configuration and management.
OSM is a tight integration of existing open source modules from Telefonica’s OpenMANO project, Rift.io orchestrator and Canonical’s Juju Charms as VNFM. OSM works with virtualization platforms such as AWS EC2, VMware vCloud Director, etc.
The OSM project consists of three basic components:

Service Orchestrator (SO)
SO is responsible for end-to-end service orchestration and provisioning of VNFs and service chaining. SO manages the automation workflow for service deployment. OSM uses RIFT.io as the orchestration engine.

Resource Orchestrator (RO)
RO is responsible for communicating with virtualization platforms such as OpenStack and VMware. RO provisions the NFV virtual workloads on those platforms.

VNF Configuration and Abstraction (VCA)
VCA is responsible for the configuration of VNFs that are provisioned by the Resource Orchestrator. OSM uses Canonical's Juju Charms as an automation engine to apply the required configuration to the provisioned VNFs.
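OSM also exposes a SOL005-style northbound interface (NBI) that sits in front of these components. The sketch below obtains a token and lists network service instances; the host, port, paths, field names, and credentials are assumptions based on recent OSM releases and should be verified against your OSM version.

```python
# Hedged sketch of calling OSM's SOL005-style northbound interface (NBI).
# The host, port, paths, credentials and field names are assumptions to verify
# against your OSM release; verify=False is only for lab self-signed certs.
import requests

NBI = "https://192.0.2.20:9999/osm"           # placeholder OSM host

token = requests.post(
    f"{NBI}/admin/v1/tokens",
    json={"username": "admin", "password": "admin"},
    headers={"Accept": "application/json"},
    verify=False,
).json()["id"]

headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# List the network service instances OSM is managing
for ns in requests.get(f"{NBI}/nslcm/v1/ns_instances", headers=headers, verify=False).json():
    print(ns.get("name"), ns.get("operational-status"))
```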
OSM vs ONAP
OSM and ONAP are similar in purpose. Initial versions of OSM had limited capabilities, although new features have been introduced in recent releases. ONAP's scope is broader than OSM's: ONAP is a comprehensive service management and orchestration platform, while OSM is mainly a Virtual Network Function Orchestrator (VNFO).
Open Platform for NFV (OPNFV)
According to OPNFV's website,
"Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks".
OPNFV is hosted by The Linux Foundation and aims to create a carrier-grade NFV platform. Its current objective is to create a standard open source platform for Virtual Network Functions so that they can run on different platforms. For example, if OpenStack and VMware vSphere are both OPNFV-certified, we can load an OPNFV-certified VNF (such as a virtual load balancer) on either of these certified platforms and expect similar functional behavior.
Current Scope of OPNFV
The initial phase of OPNFV is limited to building standards for the NFV infrastructure (NFVI) and Virtualized Infrastructure Management (VIM).
Open Security Controller (OSC)
Open Security Controller (OSC) is an open source, software defined security orchestration solution hosted by The Linux Foundation. OSC automates the deployment and provisioning of Virtual Network Security Functions (VNSFs), which can provide on-demand protection between workloads (East-West), as well as filtering traffic to and from outside networks (North-South). The foundation of OSC is Network Function Virtualization, which allows distributed and flexible provisioning of virtual security appliances such as IPS, IDS, WAF and firewalls within the network.
Unlike traditional physical network security appliances, where a very limited number of large firewalls protect the resources, the new distributed security systems utilize virtual network functions, which can be distributed anywhere in a cloud environment and applied to specific workloads rather than to a whole site or datacenter POD.
In order to create and provision virtual machines or containers that provide a network function, OSC has tight integration with virtual management platforms such as OpenStack, Kubernetes, etc. Such a virtual machine or container is called a Virtual Network Function (VNF), and the technology is referred to as Network Function Virtualization (NFV) in industry terms.
Open Security Controller - Quick Summary

Name | Open Security Controller (OSC) |
---|---|
By | The Linux Foundation |
Where it runs | On a Linux host |
What it does | Provisions virtual firewalls, IPS, and other security components; communicates with SDN controllers to create service chaining and ensure traffic is routed via the provisioned security functions. |
What it can do out-of-the-box | OSC has built-in capabilities to integrate with OpenStack and Kubernetes to provision firewalls and other virtual security services. |
For example, if OSC receives a policy to enforce that all traffic going from Virtual Machine A to Virtual Machine B must pass through a firewall, OSC will perform the following high-level actions:
Call an API on the virtual machine management platform (e.g., OpenStack) to provision a virtual machine from the base image of the virtual firewall, along with the networks it needs to connect to.
The virtual machine management platform (e.g., OpenStack) provisions the virtual machine and reports back its details, such as the IP address and MAC address allocated to this VNF.
OSC calls the API of the network control layer (e.g., OpenDaylight or Tungsten Fabric) to create a service chain that sends all traffic from VM A through this newly created VNF.
This is the main duty of OSC: to build an infrastructure for provisioning security components. In addition, OSC also has some capabilities to perform other security-related tasks, such as applying security policies.
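A rough sketch of those three actions from an orchestration script's point of view is shown below. The openstacksdk calls are standard; the SDN controller endpoint and its payload are hypothetical placeholders, since the real call depends on which controller and OSC plugin are in use, and the image, flavor, and network names are assumed to exist.

```python
# Sketch of the OSC-style flow described above: boot a firewall VNF in OpenStack,
# then ask an SDN controller to steer VM-A -> VM-B traffic through it. The
# openstacksdk calls are standard; SDN_API and its payload are hypothetical
# placeholders, and the image/flavor/network names are assumed to exist.
import openstack
import requests

conn = openstack.connect(cloud="mycloud")

# 1. Provision the virtual firewall from its base image
image = conn.compute.find_image("vfw-base-image")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("protected-net")
vfw = conn.compute.create_server(
    name="vfw-a-to-b",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
vfw = conn.compute.wait_for_server(vfw)

# 2. OpenStack reports back the details of the new VNF (first port of the VM)
port = next(conn.network.ports(device_id=vfw.id))
print("vFW port:", port.mac_address, port.fixed_ips)

# 3. Ask the SDN controller to chain VM-A -> vFW -> VM-B (hypothetical endpoint)
SDN_API = "http://sdn-controller.example:8181/example/service-chains"
requests.post(SDN_API, json={"src": "vm-a", "dst": "vm-b", "via": port.id}, timeout=10)
```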
OSC can also be used directly by cloud tenants. Tenants can choose their required controls from a catalog of security service functions and create a logical service policy to add a virtual security function such as a virtual firewall, virtual IPS, virtual WAF or virtual load balancer, and define how the traffic should route towards the virtual security services.
OSC Architecture

OSC interacts with multiple systems within a cloud environment, such as the SDN controller, the virtual machine management platform, and security function managers.
Open Security Controller Conceptual Architecture
OSC Interactions
OSC provides a web GUI, as well as northbound APIs, for management. In order to create a new network security function (e.g., an IPS or firewall) and redirect the network traffic to that specific service, security administrators can use the web interface to define security policies and interact with OSC. OSC's northbound APIs allow other systems to interact with OSC to provision network security functions. For example, cloud platforms can be integrated with OSC to allow a tenant to create and manage security services.
As far as southbound communications are concerned, OSC communicates with three different systems:
Virtualization management systems
OSC includes a connector to communicate with virtualization management systems such as OpenStack and Kubernetes. This connector directly calls the Virtual Infrastructure Manager (VIM) APIs in order to provision virtual network security functions. OSC Virtualization Management plugin also subscribes to notification events from the VIM system in order to receive information and status related to provisioned virtual network security workloads (virtual machines or containers).
SDN controllers
OSC supports communication with multiple networking and SDN controllers through a built-in connector. OSC uses the SDN controller plugin to implement traffic redirection or Service Function Chaining (SFC) to send specific traffic to the newly created network security function.
Security Function Managers
OSC uses this connector to communicate with security function systems such as IPS manager, firewall manager, security policy manager, etc. Using this plugin, OSC will be able to call the Security Function Manager APIs to apply specific policy updates, device group membership settings, etc., to the newly created virtual network function.
OSC Use Case: Microsegmentation

OSC can be used to implement microsegmentation in OpenStack. With OSC, a logical group of workloads can be protected via a dedicated Virtual Network Security Function (VNSF).
Open Security Controller - Microsegmentation
In addition, OSC can place the VNSF on different physical hosts to ensure high availability and efficiency. With microsegmentation, the datacenter and enterprise infrastructure teams do not need to change their existing physical firewalls or deploy new ones; the VNSFs take over that role, and OSC automates their provisioning.
OSC Use Case: Segmentation within a Multi-Tenant Environment

Another use case of microsegmentation is traffic isolation between tenants in a multi-tenant environment. Cloud service providers can use OSC to inject a virtual security function between tenants when they need to communicate with each other. OSC can provision a simple IPS/IDS for traffic traversing between two tenants, to ensure each tenant is protected from attacks and malware that may come from the other.

OSC Multi-Tenancy Use Case
Microsegmentation is designed to provide isolation and service insertion using the shared infrastructure. It does not require new infrastructure.
Akraino Edge Stack
Akraino Edge Stack is a new open source project hosted by The Linux Foundation (announced in February 2018). It is still in its early stages, and the source code and wiki had not yet been made public as of July 2018. The project:
"will create an open source software stack to improve the state of edge cloud infrastructure for carrier, provider and IoT networks".
The Akraino Edge Stack code is contributed by AT&T and Intel Corporation:
AT&T contributed a software stack designed for carrier scale edge computing applications running in virtual machines and containers.
Intel has committed to open source key components of its Wind River Titanium Cloud portfolio, as well as Intel's Network Edge Virtualization Software Development Kit (NEV SDK).
Akraino's goal is to develop a fully virtualized edge platform based on OpenStack, Kubernetes and ONAP.
Akraino's Proposed Architecture
Where Is the Edge?

The network edge in service provider networks is where the service provider logically connects to its customers, and it is controlled by the service provider. Akraino also includes the CPE device, which is generally located on customer premises.
Optimal Zone for Edge Placement
Akraino Edge Stack is the first open source collaborative community project exclusively focused on an integrated distributed cloud edge platform. Akraino aims to address the current challenges of the network edge, such as:
Large scale
The need for simple operations, such as Zero-Touch Provisioning, Zero-Touch Operation and Zero-Touch Lifecycle
Cost.
Akraino Edge Stack integrates multiple open source projects to supply a holistic ecosystem for the edge platform, edge applications, and developer APIs.
Akraino Edge Stack Key Principles
Design State:
Reduce complexity by defining a fixed set of configurations
Design applications to be cloud-native optimized from the beginning
Secure platform and services
Turn-key solution for service enablement and rapid service introduction
VNF assessment and verification to ensure the application is fit to run at the edge (for example, latency sensitivity, code quality, etc.)
Build:
Low startup cost by using existing or minimal investment on x86 compute nodes
Low latency placement and processing
Plug-and-play modular architecture
Operate:
Zero touch provisioning, operations and lifecycle
Automated maturity measurement - operations, designs and services
Software abstraction
Service orchestration using ONAP as a common platform
Akraino Edge Stack Flavors
In terms of sizing, the following Akraino flavors have been proposed:
Rover: Small, installed at remote customer premises
Satellite: Remote sites, 1 or 2 servers
Unicycle: Small POD with 1 rack
Tricycle: Medium POD with 3 racks
Cruiser: Large POD with 6 racks.
Edge Point of Delivery
Learning Objectives (Review)

You should now be able to:
Discuss the role of orchestration and management in open networking and NFV.
Analyze integration touch points of modern networks and cloud environments.
Discover open source orchestration platforms that can be integrated with business applications, OSS and BSS.
Summary
In this chapter, we reviewed ONAP, the super orchestrator for open source networking. ONAP has many capabilities and use cases due to its design; it can integrate with multiple external OSS (Operational Support System) and BSS (Business Support System) in service providers, as well as other applications. We also talked about CORD and Trellis and how they are accelerating the evolution in service provider networks, in order to simplify the service deployment and reduce the time to market. Then, we looked at OSC (Open Security Controller), a software defined security platform for applying security profiles as virtual network functions in different locations in a network.
All of this is possible thanks to a programmable, software defined network that is able to re-route specific traffic to a VNF when it is activated, and to re-route the traffic again once the VNF becomes obsolete or is removed.