


2.3. Virtualization and cloud computing

A key concept in the progression towards 5G networks is the pair of cloud computing and virtualization paradigms. Virtualization already offers many advantages, which are being exploited for the benefit of the 5G evolution initiative. In other words, the hardware deployed to serve the existing 4G LTE and LTE-Advanced infrastructures is being emulated in software and virtualized, i.e. adapted to operate on a generic computing machine (a PC or a server). From there, concerted research is being carried out to achieve automated deployment and portability of emulated mobile network platforms. In an attempt to move from manual configuration to automated solutions, the networking industry has formulated the concepts of network virtualization (NV), network function virtualization (NFV) and software-defined networking (SDN). These concepts are independent of each other and can be implemented individually, without impairing their function. Namely, virtualization refers to the “process of abstracting computing resources such that multiple applications can share a single physical hardware” (VAEZI, Mojtaba and Zhang, Ying, 2017).

As noted, virtualization here refers mostly to server virtualization, where a particular physical server is abstracted and decomposed into virtual entities. These virtual constituents are created and managed by a hypervisor, which is in fact the virtualization software (such as KVM, VirtualBox or VMware). The virtual constituents are a virtual CPU, virtual RAM, a virtual NIC, and so on. Besides these entities, storage can also be virtualized. This eases the sharing of resources between users. A network can be virtualized as well, which encompasses creating virtual links, subnetworks, gateways, layer-2 bridges, etc. Since server virtualization has existed for an extensive period of time, numerous virtualization products are available. Virtualization has brought some major benefits to the world of computation management. With improved availability, servers are more user-friendly and can efficiently serve a larger number of consumers. Users can create virtual machines, capture the operations they perform in the form of images, and run the same image in another environment. This means that virtualization also improves mobility, which is a very important factor. Another benefit is the improved efficiency of hardware exploitation.
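The decomposition described above can be made concrete with the kind of domain definition a hypervisor such as KVM (driven through libvirt) consumes: a guest is declared together with its virtual CPUs, virtual RAM and virtual NIC, and the hypervisor maps these onto the physical host. The sketch below builds a minimal, libvirt-style definition with Python's standard library; the guest name, sizes and bridge name are illustrative assumptions, not values from the text.

```python
import xml.etree.ElementTree as ET

def build_domain_xml(name: str, vcpus: int, ram_mib: int, bridge: str = "br0") -> str:
    """Build a minimal libvirt-style domain definition declaring the
    virtual constituents of a guest: vCPUs, virtual RAM and a virtual NIC."""
    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "name").text = name
    ET.SubElement(dom, "vcpu").text = str(vcpus)                   # virtual CPU count
    ET.SubElement(dom, "memory", unit="MiB").text = str(ram_mib)   # virtual RAM
    devices = ET.SubElement(dom, "devices")
    nic = ET.SubElement(devices, "interface", type="bridge")       # virtual NIC
    ET.SubElement(nic, "source", bridge=bridge)
    return ET.tostring(dom, encoding="unicode")

# Illustrative guest: 2 vCPUs, 2 GiB of virtual RAM, NIC on host bridge br0.
xml = build_domain_xml("guest-01", vcpus=2, ram_mib=2048)
```

A real deployment would pass such a definition to the hypervisor (e.g. via libvirt's define/create calls) rather than merely printing it; the point here is only how the physical resources are abstracted into declared virtual ones.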

A single virtual machine provides segmentation and is able to run an operating system distinct from the one on which the virtualization software is running. This allows users to execute different software on different platforms, while at the same time distributing the resources of the physical machine more efficiently.

Additionally, storage aggregation augments the overall manageability of storage and delivers improved distribution of storage resources. At the same time, the backup capability of the virtual environment is a big advantage. In case of failure, servers can be configured to automatically migrate the data to another machine without compromising the work they perform at the given moment, which also prevents data loss (VAEZI, Mojtaba and Zhang, Ying, 2017).
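The automatic migration described above can be sketched as a small scheduler that, when a host fails, reassigns its virtual machines (held as images) to the surviving hosts. The host names and the least-loaded placement policy below are illustrative assumptions, not a specific product's behaviour.

```python
def migrate_on_failure(placement: dict, failed_host: str) -> dict:
    """Reassign the VMs of a failed host to the least-loaded surviving
    hosts, mimicking automatic migration without data loss.

    placement maps host name -> list of VM image names.
    Returns a new placement with the failed host removed."""
    survivors = {h: list(vms) for h, vms in placement.items() if h != failed_host}
    if not survivors:
        raise RuntimeError("no surviving host to migrate to")
    for vm in placement.get(failed_host, []):
        # Place each orphaned VM on the host currently running the fewest VMs.
        target = min(survivors, key=lambda h: len(survivors[h]))
        survivors[target].append(vm)
    return survivors

# Illustrative three-host cluster; host-a fails and its VMs are redistributed.
cluster = {"host-a": ["vm1", "vm2"], "host-b": ["vm3"], "host-c": []}
after = migrate_on_failure(cluster, "host-a")
```

Every VM survives the failure: the failed host disappears from the placement, but the full set of images is preserved across the remaining hosts.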

2.3.1. OpenStack cloud platform

Cloud computing has attracted considerable attention over the past few years. It offers the possibility to move an infrastructure to a platform where owning hardware is no longer obligatory; one instead pays for uptime. With an interface that enables increasing and decreasing the number of virtual machines in a cloud, one builds a cluster that can adapt the number of servers to actual user demand, thereby both decreasing cost and avoiding saturated servers. A dedicated virtual machine (VM) model does not work well for compute-intensive applications. While Docker containers are a feasible alternative, it is very desirable to avoid the "noisy neighbor" problems that are common on shared infrastructure with SaaS offerings, as well as performance problems for stateful applications such as databases.

OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. Backed by some of the biggest companies in software development and hosting, as well as thousands of individual community members, many regard OpenStack as the future of cloud computing. OpenStack is managed by the OpenStack Foundation, a non-profit organization that oversees both development and community-building around the project. OpenStack allows users to deploy virtual machines and other instances that handle different tasks for managing a cloud environment on the fly. Horizontal scaling is eased, which means that tasks that benefit from running concurrently can easily serve more or fewer users simultaneously by just spinning up more instances. For example, a mobile application that needs to communicate with a remote server might be able to divide the work of communicating with each user across many different instances, all communicating with one another but scaling quickly and easily as the application gains more users.

Most importantly, OpenStack is open source software, which means that anyone who chooses to can access the source code, make any changes or modifications they need, and freely share the changes back out to the community at large. It also means that OpenStack has the benefit of thousands of developers all over the world working in tandem to develop the strongest, most robust, and most secure product that they can (OPENSTACK, 2017).
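The horizontal scaling described above boils down to adjusting the instance count to demand. A minimal sketch of such a scaling policy, in which the per-instance capacity and the lower/upper bounds are assumed values for illustration:

```python
import math

def desired_instances(active_users: int, users_per_instance: int = 500,
                      min_instances: int = 1, max_instances: int = 100) -> int:
    """Return how many identical instances to run so demand is met without
    saturating any single server; all parameters are illustrative."""
    needed = math.ceil(active_users / users_per_instance)
    # Clamp to the configured bounds of the cluster.
    return max(min_instances, min(needed, max_instances))

scale_out = desired_instances(10_000)   # demand grows: 20 instances
scale_in = desired_instances(120)       # demand falls: back to the 1-instance floor
```

A cloud's autoscaler would evaluate such a policy periodically and create or delete instances to match its output, which is exactly the "more or fewer users by just spinning up more instances" behaviour described above.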

The cloud provides computing for end users in a remote environment, where the actual software runs as a service on reliable and scalable servers rather than on each end user's computer. Cloud computing can refer to a lot of different things, but typically the industry discusses running different items "as a service": software, platforms, and infrastructure. OpenStack falls into the latter category and is considered Infrastructure as a Service (IaaS). Providing infrastructure means that OpenStack makes it easy for users to quickly add new instances, upon which other cloud components can run. Typically, the infrastructure then runs a "platform" upon which a developer can create the software applications that are delivered to the end users. OpenStack is composed of many different moving parts. Because of its open nature, anyone can add components to help it meet their demands. One of the advantages that OpenStack brings is that it helps prevent vendor lock-in to the underlying software and hardware. This is made possible by managing the resources through OpenStack instead of using a vendor's components directly. This means that one vendor's component can potentially be replaced with another vendor's easily. The drawback of this approach is that OpenStack only supports the features required in common across all supported modules and may lack some features specific to a vendor's constituents. On the other hand, it should not go unnoticed that, due to the lack of an accepted standard for cloud platforms, using OpenStack implies a form of lock-in to OpenStack itself, with no guarantee of portability to a different cloud framework (OPENSTACK, 2017).
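The lock-in avoidance described above rests on a driver abstraction: consumers program against a common interface, and vendor-specific back ends plug in behind it. The sketch below illustrates the pattern only; the vendor names and methods are hypothetical and are not OpenStack's actual driver API.

```python
from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    """Common interface the platform exposes. Only features shared by all
    back ends appear here, which is exactly the trade-off noted above."""
    @abstractmethod
    def boot(self, image: str) -> str: ...

class VendorADriver(ComputeDriver):
    def boot(self, image: str) -> str:
        return f"vendor-a-instance({image})"

class VendorBDriver(ComputeDriver):
    def boot(self, image: str) -> str:
        return f"vendor-b-instance({image})"

def launch(driver: ComputeDriver, image: str) -> str:
    # Callers never touch vendor code directly, so swapping VendorADriver
    # for VendorBDriver requires no changes on the caller's side.
    return driver.boot(image)

instance_id = launch(VendorADriver(), "ubuntu-22.04")
```

Replacing one vendor's component with another's is then a one-line change at the call site, while any vendor-specific extra feature not in `ComputeDriver` remains unreachable, which mirrors the drawback described above.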

However, regardless of the type of cloud infrastructure employed, there are several implications that need to be addressed in the implementation of future 5G networks. Since the main goals of 5G are to improve capacity, reliability and energy efficiency, while reducing latency and massively increasing connection density, a crucial part of 5G is the enablement of real-time application support.

Applications such as self-driving cars, robotics, medical appliances and online gaming require network latency as low as possible. In present cloud technologies, the focus is on providing service reliability and robustness rather than on minimizing latency. The focus of next-generation mobile communication is to provide seamless communication for the machines and devices building the Internet of Things (IoT), alongside personal communication. New applications such as the tactile Internet, high-resolution video streaming, tele-medicine, tele-surgery, smart transportation, and real-time control dictate new specifications for throughput, reliability, end-to-end (E2E) latency, and network robustness. Additionally, intermittent or always-on connectivity is required for machine-type communication (MTC) serving diverse applications including sensing and monitoring, autonomous cars, smart homes, mobile robots and manufacturing industries. Several emerging technologies, including wearable devices, virtual/augmented reality, and fully immersive (3D) experiences, are shaping the behavior of human end users, and they have special requirements for user satisfaction. Therefore, these use cases of the next-generation network push the specifications of 5G in multiple aspects such as data rate, latency, reliability, device/network energy efficiency, traffic volume density, mobility, and connection density.

Current fourth generation (4G) networks are not capable of fulfilling all the technical requirements for these services (PARVEZ, I. et al., 2017).

One key to managing latency and service reliability is the placement of the core network and the way it is accessed by the eNB. Although there are major advancements in the radio-access

(Non-Access Stratum). This concept is known as virtualized cloud Radio Access Network (RAN). A C-RAN over passive optical network (PON) architecture called virtualized-CC-RAN (V-CC-RAN) has been introduced, which can dynamically associate any radio unit (RU) to any digital unit (DU) so that several RUs can be coordinated by the same DU, together with the concept of a virtualized BS (V-BS) that can jointly transmit common signals from multiple RUs to a user. This concept of splitting the core network (CN) into multiple entities allows finer-grained control and flexibility in the placement and scaling of computation resources (PARVEZ, I. et al., 2017). Given that the splits are deployed in a distributed cloud, the computational units for the eNB should be executed in the vicinity of the cloudified mobile core network. This allows direct communication between the eNB and the core network, an arrangement specifically referred to as edge computing (FARRIS, I. et al., 2017, pp.1-13).
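The latency benefit of executing the eNB's computational units near the core can be illustrated with a back-of-the-envelope propagation-delay calculation. The distances and the fibre propagation speed (about 2×10⁸ m/s) below are assumed for illustration only, and processing and queuing delays are ignored.

```python
def round_trip_propagation_ms(distance_km: float, v_m_per_s: float = 2.0e8) -> float:
    """Round-trip propagation delay over fibre, in milliseconds.
    v ~ 2e8 m/s approximates the speed of light in fibre; all other
    delay sources (processing, queuing, serialization) are ignored."""
    return 2 * (distance_km * 1000) / v_m_per_s * 1000

# Assumed distances: a distant centralized cloud vs. an edge site near the eNB.
central = round_trip_propagation_ms(1000)  # ~10 ms spent on propagation alone
edge = round_trip_propagation_ms(10)       # ~0.1 ms
```

Even before any processing happens, a centralized deployment at an assumed 1000 km consumes a large share of a millisecond-scale latency budget, whereas an edge site a few kilometres away leaves that budget almost untouched, which is the motivation for edge computing stated above.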

2.4. Multi-platform containers and their role in service deployment and