
2.1.1 Network Function Virtualization (NFV)

NFV decouples the software implementation of NFs from their specialized dedicated hardware and runs the software in virtualized environments (e.g., virtual machines (VMs) or containers). Figure 2.1 shows NF instances running on virtualized environments created using different virtualization technologies. The VMs or containers running on the same server are interconnected using vSwitches such as Open vSwitch (OvS) [7] and VALE [102]. The introduction of high-performance packet I/O libraries such as DPDK [2] and netmap [101] has enabled software-based vSwitches to process packets at line rates of 10 Gbps or higher.


Figure 2.1: NF instances running on different virtualization technologies [9]: (a) virtual machine, (b) container, (c) unikernel.

Various software dataplane frameworks have been proposed for NFV, based on different virtualization technologies and packet I/O mechanisms [93].
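To make the role of such packet I/O libraries more concrete, the following sketch shows a minimal DPDK polling loop in C of the kind these frameworks build on. It is only an illustrative sketch, not code from any of the cited systems; the single-port, echo-back forwarding behaviour and the ring and pool sizes are assumptions chosen for brevity.

```c
/* Minimal DPDK receive/transmit loop: poll one port and echo packets back
 * out of it. Illustrative sketch only; error handling is abbreviated and a
 * real vSwitch or NF would parse headers and make forwarding decisions. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv)
{
    /* Initialise the Environment Abstraction Layer (hugepages, PMDs, cores). */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return EXIT_FAILURE;
    }

    /* Pre-allocated packet-buffer pool: no per-packet kernel involvement. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
            MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    uint16_t port = 0;                 /* assumes one NIC port bound to DPDK */
    struct rte_eth_conf conf = {0};    /* default port configuration */

    if (pool == NULL ||
        rte_eth_dev_configure(port, 1, 1, &conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE, rte_socket_id(), NULL, pool) < 0 ||
        rte_eth_tx_queue_setup(port, 0, TX_RING_SIZE, rte_socket_id(), NULL) < 0 ||
        rte_eth_dev_start(port) < 0) {
        fprintf(stderr, "port setup failed\n");
        return EXIT_FAILURE;
    }

    /* Busy-poll loop: packets are moved in bursts entirely in user space,
     * which is what allows software switches to keep up with 10+ Gbps. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);  /* drop what the TX ring rejected */
    }
    return 0;
}
```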

The virtualized environment first considered for NFV was the VM [23]. VMs can be created using hypervisor technologies such as KVM [4], Xen [11] and VMware. The hypervisor provides isolation between the VMs running on the same server, so the failure of one VM does not affect the other VMs on that server. Dataplane frameworks based on VMs include SoftNIC [54] and NetVM [63], both of which use KVM for virtualization and Intel DPDK for packet I/O.

The disadvantage of using VMs to run network functions is their overhead. VMs typically consume a large amount of memory, on the order of hundreds of megabytes to several gigabytes, because each VM requires its own guest OS, and instantiating a VM takes on the order of seconds [79].

Considering this, the usage of container technologies such as Docker [1] and Linux containers [5] is gaining momentum. Containers consume fewer resources (a few MBs or tens of MBs) than virtual machines since they do not include their own operating system, instead relying on the host's kernel [122]. Frameworks that are based on containers include OpenNetVM [122], GNF [29] and Flurries [121].
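To illustrate why containers are so much lighter than VMs, the sketch below creates a minimal "container" directly with Linux namespaces: the new process runs on the host kernel but gets its own hostname, PID and network view. This is a simplified illustration, not how Docker or LXC actually set up containers; the hostname and the chosen namespaces are arbitrary, and the program must be run as root.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* Entry point of the "containerized" process: it runs on the host kernel,
 * but in its own hostname (UTS), PID and network namespaces. */
static int child_main(void *arg)
{
    (void)arg;
    sethostname("nf-container", 12);                       /* arbitrary illustrative name */
    printf("inside container: pid=%d\n", (int)getpid());   /* prints pid=1 */
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); return 1; }

    /* New UTS, PID and network namespaces; note that no guest OS is booted. */
    int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNET | SIGCHLD;
    pid_t child = clone(child_main, stack + STACK_SIZE, flags, NULL);
    if (child == -1) { perror("clone (requires root)"); return 1; }

    waitpid(child, NULL, 0);
    free(stack);
    return 0;
}
```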

However, containers do not have a strong isolation mechanism, making them the target of an ever-increasing number of exploits [79]. In addition, any container that can monopolize or exhaust system resources (e.g., memory) can cause a DoS attack on all other containers on that host [79].

The other virtualization technology being considered is the unikernel, a lightweight or minimalistic virtual machine that provides the minimum set of libraries required for running a specific application [79]. Unikernels aim to combine the strong isolation of VMs with an efficiency close to that of containers.

ClickOS [82] is an NFV dataplane framework based on the unikernel approach. ClickOS runs lightweight virtual machines on the Xen hypervisor using MiniOS, customized for network packet processing. It uses netmap [101] and the VALE switch [102] to efficiently move packets between the lightweight VMs. The downside of ClickOS is its limited flexibility, as NFs must be designed within the Click framework's specification and do not run within a standard Linux environment [122].

Figure 2.2: NFV reference architectural framework [39].

ETSI NFV Architecture

The European Telecommunications Standards Institute (ETSI) is the standardization body for NFV and defines a reference architecture for an NFV platform [39], shown in Fig. 2.2. At a high level, the architecture consists of three main components: the NFV Infrastructure (NFVI), the Virtualised Network Functions (VNFs) and NFV Management and Orchestration (MANO) [39]. These components are described briefly below.

NFV Infrastructure (NFVI): The NFVI contains all the hardware and software components required to build up the virtualized environment in which NF instances are deployed. This includes the physical resources (compute, network and storage), which can span several locations, the virtualization layer and the virtual resources.

Virtualised Network Functions (VNFs): Virtual Network Functions (abbreviated as VNFs, or simply NFs) are the software components that implement the network functions; they run in a virtualized environment created by the adopted virtualization technology.

NFV Management and Orchestration (MANO): MANO is responsible for the overall control and management of an NFV-enabled network. It is composed of three modules, each with a specific functionality [98]: the Virtual Infrastructure Management (VIM), the Virtual Network Function Manager (VNFM) and the NFV Orchestrator (NFVO).

NFV Orchestrator (NFVO): The NFV Orchestrator is responsible for the creation and management of end-to-end services. It has two main components: resource orchestration and service orchestration. Resource orchestration supports service delivery by managing NFVI resources, which in turn are controlled by one or more VIMs, while service orchestration targets the lifecycle management of network services.

Virtual Network Function Manager (VNFM): The VNFM is responsible for the lifecycle management of NF instances. Its specific functions include NF instance creation, modification, scaling out/in and up/down, NF configuration (if required), and performance and fault management. Each NF instance is managed by a VNFM, which may manage other NF instances as well.
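As a rough illustration of the VNFM's role, the sketch below models a few of these lifecycle operations (instantiation, scaling out and scaling up) as plain C functions operating on an NF-instance record. All names (vnf_instance, vnfm_instantiate, the example descriptor "firewall-vnfd") are hypothetical and greatly simplified; they do not correspond to the ETSI-defined interfaces or to any particular MANO implementation.

```c
#include <stdio.h>

/* Hypothetical, simplified sketch of a few VNFM lifecycle operations. */
typedef struct {
    const char *vnfd_id;   /* descriptor the instance was created from  */
    int replicas;          /* horizontal scale (scale out/in)           */
    int vcpus_per_replica; /* vertical scale (scale up/down)            */
} vnf_instance;

/* Instance creation from a VNF descriptor. */
static vnf_instance vnfm_instantiate(const char *vnfd_id)
{
    vnf_instance vnf = { vnfd_id, 1, 2 };
    printf("instantiated NF from descriptor %s\n", vnfd_id);
    return vnf;
}

/* Scaling out: add replicas of the NF instance. */
static void vnfm_scale_out(vnf_instance *vnf, int extra_replicas)
{
    vnf->replicas += extra_replicas;
}

/* Scaling up: add resources to each replica. */
static void vnfm_scale_up(vnf_instance *vnf, int extra_vcpus)
{
    vnf->vcpus_per_replica += extra_vcpus;
}

int main(void)
{
    vnf_instance fw = vnfm_instantiate("firewall-vnfd");  /* hypothetical VNFD name */
    vnfm_scale_out(&fw, 2);   /* react to increased load: three replicas */
    vnfm_scale_up(&fw, 2);    /* give each replica four vCPUs            */
    printf("%s: replicas=%d, vcpus per replica=%d\n",
           fw.vnfd_id, fw.replicas, fw.vcpus_per_replica);
    return 0;
}
```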

Virtual Infrastructure Management (VIM): The VIM is responsible for managing the resources in the NFVI, including the physical resources (compute, storage and network), virtual resources (e.g., VMs) and software resources (e.g., hypervisors), which are usually within one operator's infrastructure domain [98].