readersclub / opensource-networking-technologies

Introduction to Open Source Networking Technologies EDX MOOC
MIT License

Chapter 9. Orchestration, Management, Policy #10

anitsh opened 4 years ago

anitsh commented 4 years ago

Learning Objectives

By the end of this chapter, you should be able to:

        Explore Network Function Virtualization (NFV).
        Discuss virtual firewalls, routers, and load balancers, and how they are used.
anitsh commented 4 years ago

Introduction to Network Function Virtualization

In this chapter, we will explore Network Function Virtualization (NFV), a complement to SDN and cloud initiatives. NFV emerged a few years ago, after virtualization became mainstream and the networking industry started looking at virtualization as a way to build flexible routers, load balancers, and firewalls.

In current industry terms, NFV is defined as virtualizing network functions such as firewalls, IPS/IDS, and load balancers. These functions were served by large, dedicated appliances for many years; few expected that one day an entire chassis-based firewall would be virtualized and run as a virtual machine.

Apart from routers, firewalls, and load balancers, there are other network functions that have already been virtualized, such as content caching (e.g. Squid proxy), DNS, DHCP, IP-PBX, UC, and AAA (Authentication, Authorization, Accounting). These network functions used to run on dedicated servers, but system administrators virtualized them long before the networking industry realized the value of virtualization and how to use NFV.

Some important NFV use cases are:

        Service Providers
        - At the core, virtualizing routers
        - At the edge, virtualizing CPE (Customer Premises Equipment)
        Enterprises
        - Decentralizing network functions, such as firewall and load balancer
        - Building micro-segments and microservices
        Cloud and Datacenter Providers
        - Building a virtualized infrastructure for tenants
        - Decentralizing network functions, such as firewall and load balancer.

The most important requirement in NFV is to have virtual machine image files (virtual appliances) of the network functions, which can be loaded on a virtualization environment to create a network function. In simple terms, you need an ISO file or OVF (Open Virtualization Format) package of a router or a firewall in order to spawn your new virtual router or firewall.
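As a concrete sketch of loading such an appliance image, the following libvirt domain definition would run a router image as a KVM virtual machine. The VM name, disk path, memory sizing, and bridge name are illustrative assumptions, not values from the course:

```xml
<!-- Minimal libvirt domain sketch for booting a virtual appliance image.
     Name, disk path, and bridge are hypothetical examples. -->
<domain type='kvm'>
  <name>vrouter01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vyos.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

Defining and starting such a domain (for example with `virsh define` and `virsh start`) is what "spawning" a virtual router or firewall amounts to in practice.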

Several open source and commercial products exist on the market, and we will review some of them in this chapter. Below you can see some network functions that can be provided as NFV:

| Virtual Router / Switch | Virtual Firewall / IPS / IDS | Virtual Load Balancer | SD-WAN | UC-IP Telephony |
|---|---|---|---|---|
| VyOS, open source router | pfSense, open source firewall | HAProxy, open source | Silver Peak Virtual Unity | Cisco CallManager |
| Vyatta, commercial router | Juniper vSRX, commercial firewall | F5 vLTM, commercial | Riverbed SteelConnect | Asterisk-based systems |
| Cisco Nexus 1000v, commercial | Snort, open source IDS | Loadbalancer.org, commercial | Cisco Viptela | Avaya |
| Cisco CSR | Cisco ASAv | Avi Networks, commercial | VeloCloud Networks | Skype for Business |
| VMware NSX | VMware NSX | VMware NSX | Versa | FreeSWITCH |
anitsh commented 4 years ago

Knowledge Bridge

NFV is nothing but virtualizing appliances such as routers, firewalls, load balancers, IPS, WAF, SD-WAN, and WAN optimizers. We used to have big, dedicated boxes as firewalls in the datacenter, protecting traffic going to and coming from the server workloads behind them. NFV virtualizes such boxes, allowing them to run on a virtualization platform. They still do the same job, only using virtual interfaces presented by the hypervisor. Virtual interfaces can be connected to a specific VLAN or to a shared physical interface.

You may ask yourself the following questions:

        Who will manage a router that is running on a server? Is it the networking team or the server and virtualization team? Networking people do not touch the servers, and server administrators don't know the network. Where is the demarcation point?
        How can I rely on a virtual machine to perform critical firewalling or termination of B2B IPsec tunnels?
        What can be done to recover a VNF if something goes wrong with the server, its filesystem, etc.?

These are some of the things networking folks worry about when it comes to virtualization. However, remember that virtualization technology has been around for many years; it is mature and already runs far more mission-critical applications and databases.

On the other hand, remember that NFV is not wiping out all firewalls and load balancers from datacenters. Organizations and datacenters will continue using their high performance physical appliances for aggregate protection services. The use of NFV is mainly for deploying VNFs for cloud tenants, micro-segmentation and microservices in a datacenter environment.

Tier 1 Service providers are working on their own uCPE solutions in order to replace the hardware CPEs. Replacing the CPEs with uCPEs doesn’t end when replacing the boxes, but requires a full infrastructure to manage the lifecycle, billing, OSS and BSS for the platform.

anitsh commented 4 years ago

Virtual Firewalls

The concept of virtual firewalls started a few years ago with the cloud trend. The introduction of IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) created a new demand for service providers to offer Firewall as a Service (FWaaS) to their cloud clients, so that clients can manage the security of their virtual infrastructure in the cloud.

Cloud providers' initial approach was to dedicate a physical firewall to each cloud client and allow the clients to manage their security using that physical firewall. However, deploying and supporting multiple physical firewalls was not an efficient solution. Slowly, security vendors started productizing and offering virtual firewalls or virtual appliances publicly, such as Juniper vSRX (Virtual SRX), FortiGate Virtual Appliances, Cisco ASAv (Virtual ASA), etc.

Virtual firewalls have now matured and are deployed in enterprises and cloud environments. Since they are virtual, they can be deployed instantly and distributed anywhere in a datacenter's virtual environment. Microsegmentation is one of the key use cases of virtual firewalls in a datacenter, enhancing the security of services or cloud tenants.

The following table lists some of the available virtual firewalls:

| Vendor | Name | Commercial or Open Source |
|---|---|---|
| Juniper | vSRX | Commercial |
| Cisco | ASAv | Commercial |
| Fortinet | FortiGate Virtual Appliances | Commercial |
| Sophos | Virtual Appliance | Commercial |
| VMware | NSX Firewall | Commercial |
| Stonegate (ForcePoint) | Virtual Appliance | Commercial |
| Rubicon Communications | pfSense | Open Source |
| Palo Alto Networks | Virtual Appliance | Commercial |

Virtual firewalls can have multiple use cases, such as:

        uCPE (virtualized CPE)
        Cloud and datacenter
        Cloud firewalls (in cloud or in MPLS) for managed security providers.
anitsh commented 4 years ago

pfSense Open Source Virtual Firewall

pfSense is an open source stateful firewall distribution based on FreeBSD. It has a custom kernel for better packet-processing support and includes other software components for full firewall functionality. pfSense started in 2004 as a fork of the m0n0wall project, which has since ended.

pfSense's features are very competitive when compared to commercial firewalls, and it has been deployed in many organizations. With thousands of deployments, pfSense is becoming one of the world's most trusted open source network security solutions. pfSense includes a built-in web GUI for managing and operating the firewall, based on industry-standard firewall concepts.

pfSense - Quick Summary

| Item | Details |
|---|---|
| Name | pfSense |
| By | Rubicon Communications, LLC (Netgate) |
| Where it runs | On a dedicated hardware appliance or on a virtual machine |
| What it does | pfSense is a stateful firewall with industry-standard capabilities and features |
| Features | Firewalling, logging, Layer 2 transparent firewalling, state table control, NAT, high availability clustering, multi-WAN load balancing, server load balancing, IPsec VPN, SSL VPN, PPPoE server, reporting and graphs, captive portal, DHCP server |
| What it can do out-of-the-box | You can install pfSense on a hypervisor, assign virtual interfaces, and start using it as a firewall. pfSense can be used as a virtual firewall in a microsegmentation environment, as a CPE, or for NAT configuration. |

pfSense works out of the box on a virtualization hypervisor such as KVM or VMware ESXi, or on virtualization management platforms such as OpenStack or VMware vSphere. In a cloud environment, pfSense can serve as a ready-to-go security NFV solution that cloud customers deploy automatically to protect their services.

Using SDN and service chaining, you can dynamically provision pfSense and apply the network policies required to route traffic to and from the new pfSense virtual firewall. pfSense includes a PHP CLI, which can be used with automation tools to configure pfSense dynamically via templates. pfSense can also work with OSC (Open Security Controller) to spin up pfSense instances as firewalls. You can build a security manager that serves APIs to OSC and runs a process to apply the required changes to the pfSense firewalls.
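Template-driven configuration like this can be sketched in a few lines of Python. The rule format below is only illustrative (loosely modeled on pf-style syntax, not pfSense's actual configuration schema), and the tenant fields are hypothetical:

```python
from string import Template

# Hypothetical firewall-rule template; field names are illustrative,
# not pfSense's real configuration schema.
RULE = Template("pass in on $iface proto tcp from any to $dst port $port")

def render_rules(tenants):
    """Render one allow rule per tenant dict (keys: iface, vip, port)."""
    return [RULE.substitute(iface=t["iface"], dst=t["vip"], port=t["port"])
            for t in tenants]

rules = render_rules([{"iface": "wan", "vip": "10.0.0.5", "port": 443}])
```

An automation tool would generate such rules from an inventory and push them to each firewall instance, rather than configuring every firewall by hand.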


anitsh commented 4 years ago

Snort Open Source Virtual IPS/IDS

Snort is an open source network Intrusion Prevention System (IPS) and network Intrusion Detection System (IDS) that has been developed since 1998. It is now hosted and developed by Cisco (after Cisco acquired Sourcefire in 2013).

Snort works in three different modes:

        Sniffer
        In this mode, Snort displays packets on the console. This is useful when you need to verify the traffic of devices behind Snort.
        Packet Logger
        In this mode, Snort logs all the packets to disk.
        IDS/IPS
        Snort analyzes packets against a ruleset configuration defined by the user. If Snort matches a packet against the policies defined in the user configuration file, it executes a pre-defined action. Snort actions include Drop, Alert, Response, Replace, Reject, etc.
Snort - Quick Summary

| Item | Details |
|---|---|
| Name | Snort |
| By | Currently hosted and developed by Cisco |
| Where it runs | On a dedicated hardware appliance or on a virtual machine |
| What it does | Snort is a network Intrusion Detection/Prevention System |
| Features | Traffic logging; detecting and matching packet header information (L2-L7); finds patterns and executes actions such as Alert, Block, Replace, etc.; flexible rules and policies |
| What it can do out-of-the-box | You can install Snort on a virtual machine and connect it to monitor the traffic of a network segment, or even the traffic going to a specific host. Snort comes with a predefined attack signature database; you can register to receive regular signature updates, which is a paid subscription service. |

Snort analyzes packets as byte streams, trying to find specific patterns based on the policies defined in the configuration. (The match-and-action style of a Snort configuration is conceptually similar to a P4 program.) Snort can extract information from packet headers and run the defined policies on the extracted packet information.

Snort can be deployed as a virtual machine in a virtualization environment and served as a Virtual Network Function (VNF). Snort is compatible with most hypervisors, such as KVM and VMware ESXi. However, be careful with Snort's resource usage: it may require a lot of processing power, depending on the amount of traffic on its interface and the complexity of the rules applied in the configuration.

You can use Snort as a distributed IDS/IPS in a cloud environment, to protect specific tenant traffic. Snort can also be used along with OSC (Open Security Controller). However, you need to build a security manager for Snort to allow OSC to inject the required configuration template to a Snort virtual machine that is created by OSC.

You may recall our discussions of data plane programming from previous chapters. Snort is a good example of data plane software running on a CPU. You can also build IPS/IDS systems using data plane programming on a SmartNIC or other chipsets that support the P4 language.

Below you can see a short Snort example:

The following rule will send an alert if a TCP packet from the 10.0.0.0/24 network goes to 192.168.3.5 on port 80:

alert tcp 10.0.0.0/24 any -> 192.168.3.5/32 80 (msg:"Hitting Honeypot";)

The following rule will send an alert if it detects web activity between 10.0.0.0/24 and 192.168.3.5. (Handshake will be allowed):

alert tcp 10.0.0.0/24 any -> 192.168.3.5/32 80 (msg:"Hitting Honeypot WebApp"; classtype:web-application-activity; sid:800000; rev:1;)

The following rule will send an alert if it detects a pattern of “fibun.cgi?id=1122” in the web activity between 10.0.0.0/24 and 192.168.3.5. It skips the first 20 bytes of packets (offset) for quicker checking:

alert tcp 10.0.0.0/24 any -> 192.168.3.5/32 80 (msg:"Hitting Honeypot, matched pattern"; content:"fibun.cgi?id=1122"; nocase; offset:20; classtype:web-application-activity; sid:8000001; rev:1;)
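The content/offset/nocase logic in the last rule can be sketched in plain Python. This is a toy illustration of the matching idea, not Snort's actual detection engine:

```python
def matches_rule(payload: bytes, pattern: bytes,
                 offset: int = 0, nocase: bool = True) -> bool:
    """Mimic Snort's content/offset/nocase options: search for `pattern`
    in the payload, skipping the first `offset` bytes."""
    hay = payload[offset:]          # offset: skip leading bytes for speed
    if nocase:                      # nocase: case-insensitive match
        hay, pattern = hay.lower(), pattern.lower()
    return pattern in hay

# A request whose interesting content sits past the first 20 bytes matches:
matches_rule(b"x" * 20 + b"GET /fibun.cgi?id=1122", b"fibun.cgi?id=1122", offset=20)
```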

anitsh commented 4 years ago

Virtual Load Balancers

Compared to firewalls, load balancers have a simpler software architecture. Previous-generation load balancers relied on complex hardware to meet performance requirements. However, with enhancements in software and x86 processors, most load balancer companies started building feature-rich, software-based load balancers, optionally paired with ASIC-based packet processors.

Currently, there are multiple load balancer and Application Delivery Controller (ADC) products on the market. Most of them offer the load balancer as a virtual appliance, which can be loaded on a virtualization infrastructure such as VMware or OpenStack.


Let's review what a load balancer is. A load balancer is a networking device (appliance or software) that receives requests from users and spreads them across multiple application servers in a server farm or server pool. You define a Virtual IP address (VIP) on the load balancer, assign a protocol and port, and finally assign a group of servers.

For example, if you have three web servers that are serving a common website, you can configure your load balancer to listen on a virtual IP address (i.e. 192.168.1.10 on port 80) and send the requests to three servers at different IP addresses.
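The three-web-server example above can be written as an HAProxy-style configuration sketch. The frontend/backend names and the backend addresses are illustrative assumptions:

```
# Listen on the VIP and spread requests across three web servers.
frontend web_vip
    bind 192.168.1.10:80
    mode http
    default_backend web_pool

backend web_pool
    mode http
    balance roundrobin
    server web1 192.168.1.21:80 check   # "check" enables health monitoring
    server web2 192.168.1.22:80 check
    server web3 192.168.1.23:80 check
```

Clients only ever see 192.168.1.10; the load balancer picks one of the three real servers for each request.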

Some load balancers have other features such as monitoring services on servers, SSL termination, Web Application Firewall, etc.

There are multiple methods to handle the return traffic (the real server's response) to the client. The traffic can go from the servers back through the load balancer, which sends it to the client, or the server may use Direct Server Return (DSR) to respond directly to the client without the load balancer in the path. DSR requires additional configuration on the real servers so that they use the VIP as the source IP address for return traffic.

Virtual load balancers are a great choice for datacenters and cloud environments. In the past, datacenters had only a pair, or multiple pairs, of central load balancers. With the trend of Network Function Virtualization and the availability of virtual appliances, it is becoming common to deploy virtual load balancers for microservices.

Most virtual load balancers have integration capabilities with cloud and virtualization environments. For example, if the load balancer detects high loads on its server farm, it can send an API call to the virtualization management server (i.e. OpenStack or VMware vCenter) to create and start an additional virtual server to join the pool in order to reduce the load on servers and increase the performance and response time.

The following table illustrates some of the virtual load balancers available on the market:

| Vendor | Name | Commercial or Open Source |
|---|---|---|
| F5 Networks | Virtual LTM | Commercial |
| Citrix | Virtual Load Balancer | Commercial |
| Avi Networks | Virtual Load Balancer | Commercial |
| Barracuda | Virtual Load Balancer | Commercial |
| Kemp | Kemp Virtual | Commercial |
| Fortinet | FortiGate Virtual Load Balancer | Commercial |
| HAProxy Technologies | HAProxy | Open Source |
| Facebook | Katran | Open Source |
anitsh commented 4 years ago

Katran Open Source Load Balancer

Katran is an open source load balancer contributed by Facebook. It is a high-performance Layer 4 load balancer built on the XDP (eXpress Data Path) and eBPF (extended Berkeley Packet Filter) engines.

Katran - Quick Summary

| Item | Details |
|---|---|
| Name | Katran |
| By | Facebook |
| Where it runs | On dedicated virtual machines |
| What it does | High performance load balancing |
| Features | Open source; fast (especially with XDP in driver mode); performance scales linearly with the number of NIC RX queues; RSS (Receive Side Scaling)-friendly encapsulation |
| What it can do out-of-the-box | You can use it for load balancing in high volume environments |

Some of Katran's features:

        Robust - Katran uses XDP for packet forwarding.
        XDP invokes the BPF program on every packet received from a NIC receive (RX) queue. Using network cards with multiple receive queues helps Katran scale out, as it runs independent BPF instances per queue.
        RSS-friendly - uses IP-in-IP encapsulation for packet forwarding from the L4 load balancer to the L7 load balancer.
        Katran, and XDP in general, allows you to run other applications on the same server without performance penalties.


Steps (Shirokov, N. V.):

    Katran receives a packet.
    It checks whether the packet's destination is configured as a VIP (virtual IP address - the IP address of the service).
    For an incoming packet toward a VIP, Katran checks whether it has seen a packet from the same session before; if it has, it sends the packet to the same real (the actual server or L7 load balancer, which then processes/terminates the TCP session).
    If it is a new session, Katran calculates a hash value from the packet's 5-tuple.
    Using this hash value, it picks a real server.
    It updates the session table with this lookup information, so that for the next packet in the session it can simply look up the result instead of recalculating the hash.
    It encapsulates the packet in another IP packet and sends it to the real.
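The hash-and-session-table steps above can be sketched in Python. This toy model only illustrates the consistent-selection idea; Katran itself does this in eBPF with a different hash and data structures, and the real-server addresses here are made up:

```python
import hashlib

REALS = ["10.1.0.1", "10.1.0.2", "10.1.0.3"]   # backend ("real") servers
session_table = {}                              # 5-tuple -> chosen real

def pick_real(src_ip, src_port, dst_ip, dst_port, proto):
    """Pick a real server for a packet's 5-tuple, reusing the session
    table so every packet of a session goes to the same real."""
    key = (src_ip, src_port, dst_ip, dst_port, proto)
    if key in session_table:            # known session: reuse prior choice
        return session_table[key]
    h = hashlib.sha256(repr(key).encode()).digest()
    real = REALS[int.from_bytes(h[:4], "big") % len(REALS)]
    session_table[key] = real           # remember for subsequent packets
    return real
```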

Katran uses XDP and eBPF. XDP provides a fast, programmable network data path without resorting to a full-fledged kernel bypass method and works in conjunction with the Linux networking stack. The eBPF virtual machine provides a flexible, efficient, and more reliable way to interact with the Linux kernel and to extend its functionality by running user-space supplied programs at specific points in the kernel.

Katran Architecture

anitsh commented 4 years ago

HAproxy Open Source Virtual Load Balancer

HAProxy is free, open source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications, spreading requests across multiple servers. It is written in C and has a reputation for being fast and efficient in terms of processor and memory usage.

HAProxy - Quick Summary

| Item | Details |
|---|---|
| Name | HAProxy |
| By | HAProxy Community |
| Where it runs | On a host |
| What it does | High performance L4-L7 load balancing |
| Features | L4-L7 load balancing, SSL |
| What it can do out-of-the-box | High performance load balancing; used by high profile websites such as GitHub, Vimeo, Stack Overflow, etc. |

HAProxy is particularly suited for very high traffic websites and powers quite a number of the world's most visited ones. Over the years, it has become the de facto standard open source load balancer. It is now shipped with most mainstream Linux distributions, and is often deployed by default in cloud platforms.

The HAProxy community supports the following features:

        Load balancing
        SSL
        Service monitoring
        Proxying
        HTTP rewrite
        Logging and statistics.

HAProxy also has a commercial enterprise version, which includes extra modules such as DDoS protection, sanitization, WAF, a real-time dashboard, and support.

HAProxy can be integrated with orchestration tools such as ONAP for automated deployment and provisioning of virtual load balancers in a network.


anitsh commented 4 years ago

Virtual Routers

Routers are a perfect candidate for virtualization. They have a small number of interfaces, and the IP routing software stack is well matured. Routing software has not changed much for many years; the same protocols still run, connecting networks and building the Internet. Apart from using router appliances from networking vendors, many organizations have been building their own high-performance routers using physical servers with a few network interface cards, running Linux and an open source routing stack such as Quagga, Zebra, or XORP. In recent years, commercial vendors have also started offering virtual routers, fully supported on virtualization platforms.

The following table lists some of the virtual routers currently available:

| Vendor | Name | Commercial or Open Source |
|---|---|---|
| Cisco | CSR (Cloud Services Router) | Commercial |
| Cisco | ISRv (Integrated Services Virtual Router) | Commercial |
| Juniper | vMX | Commercial |
| Brocade (acquired) | Vyatta | Commercial |
| Alcatel-Lucent | VSR | Commercial |
| VMware | NSX | Commercial |
| Cloud Router | Cloud Router | Open Source |
| VyOS | VyOS | Open Source |
| Quagga | Linux Router (Quagga) | Open Source |

Virtual routers can be used by:

        Service Providers
        - Using virtual routers in uCPE
        - Using virtual routers in service provider's transit path.
        Cloud and Datacenter Providers
        - Using virtual routers for routing between tenant networks
        - Using virtual routers for microsegmentation.

Virtual routers mostly come as a virtual appliance (a hardened bundle of kernel, OS, and routing software) that can be loaded on a virtualization platform. Most virtual routers support multiple interfaces; you can attach a virtual router to multiple interfaces, to the host's interfaces, or to the host's virtual switch.

anitsh commented 4 years ago

VyOS Open Source Virtual Router/Firewall

According to VyOS website,

"VyOS is an open source Linux networking distribution that can be installed on a physical hardware or a virtual machine, on your own server, or on a cloud platform".

It is based on GNU/Linux and is a tight integration of multiple networking applications such as Quagga, ISC DHCPD, OpenVPN, StrongSWAN, etc., under a single management interface. Operation and management of VyOS,

"is more similar to traditional hardware routers, with a focus on comprehensive support for advanced routing features such as dynamic routing protocols and command line interface".

VyOS - Quick Summary

| Item | Details |
|---|---|
| Name | VyOS |
| By | Open source community |
| Where it runs | As a virtual appliance or on an x86 server |
| What it does | Routing and firewalling |
| Features | Layer 2: VLANs, 802.1q, QinQ; Layer 3: BGP, OSPF, RIP, PBR, ECMP; zone-based firewalling, tunneling, PPPoE, GRE, L2TP, VXLAN, IPsec VPN, SSL VPN, NAT, DHCP server, VRRP, sFlow, web proxy, QoS and traffic shaping. Uses a CLI for configuration, without a GUI |
| What it can do out-of-the-box | You can load VyOS on a virtual machine and use it as a router to connect to an ISP or route between networks, or use it as a VPN server or a firewall within your network. |

To install VyOS, download the Live CD installer from its website; VyOS requires a minimum of 512 MB of RAM and 2 GB of storage, which makes it a good choice for a virtualized router.

Similar to industry-standard networking CLIs, the VyOS CLI includes an operational mode and a configuration mode. The CLI has built-in help (using the "?" question mark) and [tab] command completion. The configuration mode follows the legacy Vyatta style; however, you can automate the configuration using network automation tools and NETCONF. VyOS also provides a shell API, as well as a Perl API library, to fully control the configuration.
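For a feel of the configuration mode, here is a short illustrative session; the addresses are documentation examples, and exact NAT syntax varies slightly between VyOS versions:

```
configure
set interfaces ethernet eth0 address '192.0.2.1/24'
set protocols static route 0.0.0.0/0 next-hop '192.0.2.254'
set nat source rule 10 outbound-interface 'eth0'
set nat source rule 10 translation address 'masquerade'
commit
save
exit
```

Because the configuration is a plain command hierarchy like this, it is straightforward to generate from templates with automation tools.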

You can use VyOS as a virtual router/firewall, a VPN concentrator, or for other functions; it is a very handy, Swiss-army-knife tool. Cloud providers or service providers can use VyOS as a virtual router in their environments for different purposes. ONAP can be configured to provision VyOS virtual routers in the network and inject the required configuration.


anitsh commented 4 years ago

Service Chaining

In order to introduce a new network service (such as a load balancer, firewall, or WAF) without service chaining, you need to build the entire design and topology to support the new scenario and introduce a new hop in the traffic path. You would need to prepare a design and documentation covering:

        Multiple VLANs on different switches
        Spanning tree
        Next hop redundancy protocols (such as VRRP)
        Policy-based routing
        Static routing
        Dynamic routing protocol advertisement, etc.

This is a complex process, and usually takes a long time to implement and test (may take weeks or months).

With DevOps, Continuous Integration (CI), and Continuous Delivery (CD), the speed of building applications, delivering them to production, and decommissioning and sunsetting them has increased. This generates more and more demand on networking people to create networks that support these new requirements.

Nowadays, such requirements are becoming very frequent in most organizations. With the growth of server and storage virtualization, developers can build an infrastructure for development-testing or for creating a production application. Business demands new services, new websites, online portals, and so on; they simply can't wait for the network and security teams to design and implement each requirement.

Service chaining helps solve such problems: it simplifies the design and decreases the time required to provision such services, even the more complex ones. Service chaining is an SDN and network controller feature that can be used to dynamically divert traffic to a specific NFV virtual appliance.


In an SDN-enabled network with an SDN controller (e.g. OpenDaylight), the controller programs the flows in the network. We do not need to manually build a complex VLAN setup or a complex policy-based routing configuration to route traffic from users to servers. Instead, the SDN controller programs the network based on the service chains defined by the user. The controller can program the fabric (virtual and physical switches); for example, you can define a topology in the SDN controller so that packets originating from the external world and destined to a specific IP address are sent first to an IPS, and then to a load balancer.

Service chaining can be integrated with the ONAP orchestrator. ONAP can execute the whole end-to-end process of creating and provisioning a virtual firewall or load balancer in a virtual infrastructure, and execute the tasks required to create the network policies that route traffic to the newly created Virtual Network Function (VNF).

NFV and Service Chaining in a Service Provider Network

Service chaining describes the sequence of hops that a specific packet must pass through:

        It matches all or very specific traffic/packets to go inside a specific network function or a series of network functions.
        It supports dynamic insertion of service functions.
        It decouples network topology and service functions.
        It creates a common model for all types of services (for example, a model of firewall and load balancer, or a model of firewall and WAF).
        It allows network service functions to share information with each other.

OpenDaylight (open source SDN controller) has a service-chaining module. This module is designed to handle service chaining for applications and includes the following components:

        Classifier
        This determines what traffic needs to be chained, based on a match policy.
        Service chain
        This refers to the list of network services that the matched packets need to traverse.
        Service path
        This refers to the actual instances of services traversed.
        Service overlay
        This is a topology that is created to visualize a service path.
        Metadata
        This refers to the information passed between participating services.
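The classifier and service-chain concepts above can be modeled with a small Python toy. The packet fields and chain names are illustrative; this only demonstrates the match-then-chain idea, not OpenDaylight's actual API:

```python
def classify(pkt, match):
    """Classifier: does this packet satisfy every field of the match policy?"""
    return all(pkt.get(k) == v for k, v in match.items())

def apply_chain(pkt, chains):
    """Return the ordered list of service functions the packet must
    traverse, or an empty list if no classifier matches."""
    for match, chain in chains:
        if classify(pkt, match):
            return chain
    return []

# Traffic to port 80 is steered through an IPS and then a load balancer.
chains = [({"dport": 80}, ["ips", "load_balancer"])]
path = apply_chain({"dst": "192.168.3.5", "dport": 80}, chains)
```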
anitsh commented 4 years ago

uCPE (Universal Customer Premises Equipment)

In service provider networks, the use of NFV is a bit different from that of enterprises and datacenters. Such service providers have their networks divided into customer, provider, and edges. This is illustrated in the following diagram:


Service providers must provide the Customer Premises Equipment (CPE) device to all of their clients. This device is normally a hardware appliance, a router with limited performance, based on the customer's site requirement.

In many instances, customers may need to change their CPE or add extra features, such as a managed firewall or managed WAN optimizers, to their network. This needs to be executed by the service provider; it is the service provider’s duty to provide, install, and commission extra equipment to the clients.

Any alteration to a CPE is a lengthy process. On the other hand, the cost of hardware appliances, licensing, managed services, and support makes it harder for the service provider to manage the CPE estate of many customers.

Service providers started realizing the benefits of NFV when they began offering it to their clients. One of the main use cases of NFV is virtual Customer Premises Equipment (vCPE): service providers started deploying standard x86 compute servers to their clients as CPE. By deploying two x86 servers at a customer site and running a hypervisor, they can meet all of the client's requirements without making any hardware changes, or even visiting the site.

The x86 server hosts a virtual router. It can also host other network functions, such as Software-Defined WAN (SD-WAN), WAN optimizers, Internet routers (for local Internet connection or breakouts), and firewalls.

Service providers have also built their own virtual provisioning tools to manage the whole estate of virtual services. In recent years, many of them have built orchestration platforms to automate the service provisioning for their clients.

We can categorize NFV features as follows:

        Virtual routers
        These are standard packet-forwarding systems. They can route and run routing protocols and other features, such as NAT, Policy-Based Routing, and so on.
        Virtual firewalls
        These are standard stateful or stateless firewalls with L3-L7 filtering capabilities. They may be equipped with deep packet inspection engines to provide features such as IDS and IPS.
        Virtual load balancers
        L4 to L7 load balancers with capability of hosting virtual IPs (VIP) and forwarding (and NAT) the traffic to real servers. They may be equipped with Web Application Firewall (WAF) features.
        Virtual WAN optimizers
        These include caching, TCP optimization, and protocol acceleration.
        SD-WAN routers
        These are used to logically bond multiple WAN and Internet connections and build VPN tunnels back to SD-WAN head-end units in datacenters. They provide intelligent link measurement and application-based routing.
anitsh commented 4 years ago

Learning Objectives (Review)

You should now be able to:

        Explore Network Function Virtualization (NFV).
        Discuss virtual firewalls, routers, and load balancers, and how they are used.
anitsh commented 4 years ago

Summary

In this chapter, we discussed network function virtualization technology and the products and services available on the market. NFV does not end with virtual firewalls or load balancers; it extends to other networking functions such as DNS, DHCP, DDoS protection, tunnels, etc.

You may think that NFV is just the virtualization of a networking device, so that the networking function runs on an x86 server instead of a dedicated hardware appliance. That is true. However, the real value of NFV is its flexibility and programmability, which allow an orchestration tool such as ONAP to manage the lifecycle of VNFs. ONAP and other orchestration platforms should be able to spawn a VNF anywhere in the virtualization infrastructure and apply the policies the VNF requires to function properly.

Now you know that there are multiple open source networking tools available; they can all be used for free, and, more importantly, you can change the source code to fit your designs and required functions.

When it comes to virtual network functions, a commercial virtual router has no performance benefit over an open source virtual router, as both run on common hardware. Both rely on a virtualization hypervisor to allocate resources or to provide direct network connectivity via SR-IOV (Single Root I/O Virtualization). In the VNF world, everything is in software; there is no real hardware-accelerated performance unless the VNF has specific plugins to use SmartNICs or other hardware-accelerated resources.

Network and telecom service providers are becoming some of the main consumers of NFV, as deploying VNFs impacts their operations in multiple ways: cost reduction, flexibility, additional revenue, additional services, etc. They use NFV to replace costly CPE devices, which reach end of life every 5 to 10 years, with standard, high-performance x86 hardware that runs all the networking and security functions as a virtualized platform. Managing VNFs on such uCPE is only possible via an orchestration platform that can manage all the VNFs without any manual changes.