2.1.1 Cloud computing

In 2001, cloud computing was gaining attention. VMware was the first company to fully commercialise server virtualisation, enabling multiple operating systems to run on a single physical server [11]. This virtualisation technology solved the problem of over-provisioned servers: in 2002, for example, Amazon claimed that they were only using 10% of their compute resources [12]. Their cloud computing infrastructure model [13] addressed this problem and also enabled them to sell and provision on-demand virtual servers from a webpage.

Since then, the concept of cloud computing has evolved to include more than provisioning virtual servers with operating systems. However, the term cloud computing is still associated with on-demand provisioning of server-side services [14].

Nowadays, cloud computing is a paradigm and a vision of computing as a utility, which makes server-side software available as on-demand services. Cloud computing is therefore mostly perceived as a platform for operators and developers with innovative ideas. Instantiating a service in the cloud no longer requires a large capital outlay, personnel expenses to operate it or long provisioning times. Innovative companies can get their server-side services up and running quickly without worrying about scaling or costs.

From a service perspective, cloud computing has traditionally been divided into three main categories of services [15]:

IaaS (Infrastructure-as-a-Service) is a concept which historically provides access to hardware, storage, servers and data centre space or network components. However, a common way of defining this service is that it gives the consumer access to the hypervisor on physical servers. This hypervisor access can either be shared with other tenants, or the consumer can have exclusive rights to the hardware.

PaaS (Platform-as-a-Service) is a service where the provider operates the operating system and most frequently also the server-side application. In PaaS services, the consumer is provided access to a software platform where software code or a software configuration can be deployed. This is most frequently associated with enabling development frameworks such as .NET, Ruby or PHP, where developers can easily deploy their code.

SaaS (Software-as-a-Service) simply provides a software application for the consumer. The consumer has access neither to the operating system nor to the server running the application; they simply consume the application itself, such as Google Apps, Salesforce or Dropbox.

From the service producer's perspective, these different types of services require automated provisioning of the underlying compute, storage and network resources. Full automation and orchestration of these services is referred to as a Software-Defined Data Centre (SDDC). Orchestrating these underlying services has historically been very complex. Hence, the concept of Hyper-Converged Infrastructure (HCI) emerged in 2012 [16], based on the idea of putting storage, network and compute into one physical device (e.g. Nutanix [17] or OpenStack [18]). This enables data centre operators to run all resources in software and to scale the data centre resources quickly by simply adding or removing HCI servers.

In this research, the focus is put in particular on the networking part of this provisioning. In an IaaS platform, the consumer is often given the opportunity to set up and configure the network between the server-side services themselves. For PaaS infrastructures, on the other hand, the service producer controls the network. These two distinct services draw a clear line as to where the responsibility for the network interconnections lies and, consequently, where network security control is placed.

Since 2014, cloud security has become a fast-growing service and now provides a level of security protection comparable to traditional IT security systems. This includes the protection of critical information against threats such as data leakage and accidental deletion of data and services. However, security in general is still a primary concern for operators and enterprises that move their services to the cloud [19].

The latest security challenge for enterprise operators is multi-cloud operations. The concept of multi-cloud implies that an operator or an enterprise uses multiple cloud services in a single heterogeneous architecture. Multi-clouds differ from hybrid clouds: hybrid clouds typically integrate a similar deployment model (e.g. IaaS) across a public infrastructure (e.g. Amazon AWS) and a private infrastructure (e.g. VMware NSX), whereas a multi-cloud infrastructure implies the use of multiple public clouds (e.g. Amazon AWS, Microsoft Azure) and multiple private clouds with a combination of different deployment models (IaaS, PaaS, SaaS). One of the main objectives of using a multi-cloud environment is to have a root system (inter-cloud) that controls all the other clouds in order to enable one-stop shopping, allowing the consumer to control costs and operations from a single point.

Multi-clouds bring new security challenges to the table, such as data locality and data access. However, first and foremost, the main challenge is network security. This includes firewalling each service (Section 2.1.1), providing network encryption across multiple domains and operating different types of access networks for different kinds of services. This introduces a need for configuring network services dynamically, both within a data centre cloud and across multiple data centre clouds.

Micro-segmentation and segregation

Network segmentation has historically been a ground pillar of network design. First of all, it is the foundation of IP subnetting, routing and network efficiency. However, the concept is also important with respect to network security. In a security context, segmentation, also called "firewall zoning", is a method of dividing a network into different security zones with different access levels. Historically, good security practice has been to have many security zones with small subnets on the firewall. However, this has always been a compromise between available resources and the level of security.
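
To make the zoning trade-off concrete, the short sketch below uses Python's standard ipaddress module to carve one /24 network into sixteen /28 security zones. It is a minimal illustration only; the address range and zone naming are hypothetical.

    import ipaddress

    # Hypothetical address range to be divided into firewall zones.
    network = ipaddress.ip_network("10.0.0.0/24")

    # Splitting the /24 into /28 subnets yields 16 small security zones,
    # each holding at most 14 hosts. More zones give finer-grained policy,
    # but also more firewall rules and interfaces to operate.
    zones = list(network.subnets(new_prefix=28))

    for index, zone in enumerate(zones):
        print(f"zone-{index:02d}: {zone} ({zone.num_addresses - 2} usable hosts)")

Each additional bit of prefix length doubles the number of zones while halving their size, which is exactly the resource-versus-security compromise described above.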

Micro-segmentation consists of segmenting the security zones down into very small parts, preferably one host per segment. This is achieved by virtually distributing the firewall rules to every access port on physical or virtual switches. SDN is a technology that enables such an infrastructure (Section 2.1.2). However, the concept originated in cloud computing, where the distributed virtual firewall is associated with attaching a virtual firewall to each interface of the virtual machines, which abstracts the firewall concept from the underlying SDN technology.

In cloud computing, this concept has resulted in a new security paradigm and a new way of perceiving network security. Abstraction layers, virtualisation technologies and centralised control enable the automation of security policies in a new way. Historically, firewall operators created firewall rules based on "zoning interfaces" and packet-header attributes only. Now, managing a micro-segmented infrastructure enables operators to deploy firewall rules based on abstract policies, such as the name of the virtual machine. Hence, the security policies are easier to group and manage.
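
The contrast between the two rule styles can be sketched as follows. The rule format, VM names and inventory lookup are hypothetical stand-ins for whatever API a given cloud platform exposes; the point is only that a name-based policy is resolved to concrete header rules by the platform rather than written by hand.

    import fnmatch

    # A classic firewall rule matches packet-header attributes only.
    header_rule = {"src": "10.0.1.0/28", "dst": "10.0.2.5",
                   "proto": "tcp", "port": 5432, "action": "allow"}

    # A micro-segmentation policy references abstract workload attributes,
    # here hypothetical VM names, which the platform resolves to whatever
    # addresses those virtual machines currently hold.
    name_policy = {"src_vm": "web-frontend-*", "dst_vm": "orders-db",
                   "proto": "tcp", "port": 5432, "action": "allow"}

    def resolve(policy, inventory):
        """Expand a name-based policy into concrete header rules, using a
        VM-name -> IP inventory as a stand-in for the platform's API."""
        return [
            {"src": ip, "dst": inventory[policy["dst_vm"]],
             "proto": policy["proto"], "port": policy["port"],
             "action": policy["action"]}
            for vm, ip in inventory.items()
            if fnmatch.fnmatch(vm, policy["src_vm"])
        ]

    inventory = {"web-frontend-01": "10.0.1.4",
                 "web-frontend-02": "10.0.1.5",
                 "orders-db": "10.0.2.5"}
    print(resolve(name_policy, inventory))

When a virtual machine migrates and its address changes, only the inventory changes; the policy itself remains valid, which is what makes such rules easier to group and manage.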

Another advantage of micro-segmentation is that there is no longer a need for separate tenant-segregation technologies (e.g. MPLS VPNs or 802.1Q VLANs), because the segregation is implicitly enforced by the overlay networks and the distributed firewall paradigm. From a networking perspective, this has resulted in network designs that deprecate small IP segments and move towards large layer-two domains in the data centres.

Correspondingly, the term micro-segmentation has evolved into something which includes more than segmenting a network into smaller parts. It also includes developing and enforcing rulesets for controlling the communication between specific services or hosts. By extension, it is also a matter of definition whether the isolation of the network traffic in micro-segmented networks should include encryption.

Since the nineties, encrypted channels based on Internet Protocol Security (IPsec) [20] have been the primary technology for setting up secure channels between network equipment. Firewall operators have for many years used IPsec between the firewall and other services in order to achieve packet confidentiality and integrity. The underlying SDN technology which enables distributed firewalling lacks this encryption feature, since it would require encryption and keying capabilities in the virtual switches. This has created a research gap in both cloud computing and NFV, where the packet confidentiality inside the data centres and between tenants requires attention.
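
For reference, the two properties at stake, confidentiality and integrity, correspond to authenticated encryption of the packet payload. The toy sketch below shows this with AES-GCM from the third-party Python cryptography package; it illustrates the cryptographic primitive only and is not an IPsec implementation, which would add ESP headers, sequence numbers, anti-replay windows and IKE-based key exchange.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # AES-GCM provides both confidentiality and integrity, the two
    # properties IPsec ESP gives packets in transit. Toy example only.
    key = AESGCM.generate_key(bit_length=256)  # in IPsec, keys come from IKE
    channel = AESGCM(key)

    payload = b"GET /orders HTTP/1.1"
    nonce = os.urandom(12)  # must never repeat for the same key

    ciphertext = channel.encrypt(nonce, payload, None)
    # Decryption raises an exception if the ciphertext was tampered with.
    assert channel.decrypt(nonce, ciphertext, None) == payload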

Currently, packet confidentiality between data centres is protected by outer encryption channels in the form of site-to-site IPsec tunnels. However, when sharing services across data centres, it is not only the data centre that must be protected from the outside world: internally, both the users and the services require packet isolation and confidentiality. This calls for an encrypted channel per service. Operating micro-segmentation across multiple service providers and multiple IaaS platforms is challenging if two IaaS platforms run different network technologies. There are primarily two approaches to solving this problem: (1) a top-level orchestrator which manages the underlying infrastructures, or (2) a network overlay that federates the network across the IaaS domains. However, combining different types of cloud services with different underlying network technologies meets challenges such as migration of services, network control, network topology changes and network isolation. One network paradigm that aims to address these concerns is Software-Defined Networking.