
9.5 Protocol and Interfaces

9.5.2 Control Plane Interfaces

The services on the control plane are defined in Figure 9.10. This section explains the interfaces between these control plane services.

Control Plane Application Interfaces

The control plane application is the main application in the architecture, but, from a service interface perspective, it has only two main interfaces towards the orchestration layer. The interfaces that the control plane application implements are perceived as interfaces held by the other services and are explained in the following sections. The main interfaces for the control plane application are:

- An NSD interface for incoming requests from the orchestration plane. This includes the NSDs for SFC, the VNFs and the Transport Links.

- A service capability interface that provides information about the Transport Links, informing the orchestration layer whether the Transport Links exist and how they are established. This service is reflected from the orchestration plane and proxies the BGP route table to the orchestration plane as an NSD.
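The two interfaces above can be sketched as a minimal control plane application facade. The class name, method names, and NSD fields below are illustrative assumptions, not part of the specified architecture:

```python
# Minimal sketch of the control plane application's two orchestration-facing
# interfaces. Class, method, and field names are illustrative assumptions.

class ControlPlaneApp:
    def __init__(self):
        # Stand-in for the state built from the BGP route table.
        self.transport_links = {}

    def submit_nsd(self, nsd):
        """NSD interface: accept an SFC, VNF, or Transport Link descriptor."""
        if nsd["type"] == "transport-link":
            self.transport_links[nsd["name"]] = nsd
        return {"status": "accepted", "name": nsd["name"]}

    def get_capabilities(self):
        """Service capability interface: proxy Transport Links back as NSDs."""
        return list(self.transport_links.values())


app = ControlPlaneApp()
app.submit_nsd({"type": "transport-link", "name": "tl-1",
                "endpoints": ["AS1", "AS2"]})
print(app.get_capabilities())
```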


Tier 1 Interfaces

The Tier 1 BGP service sends BGP messages to other network controllers, while the control agent listens for service requests for Transport Link maintenance (create, delete, get, modify). Additionally, the control plane application and the KMS server read the announced BGP messages, which means that the Tier 1 agents have three interfaces:

- A BGP speaker service running on the network controller. The BGP messages introduce two new address families, which carry the announcements of the network controllers and of the Transport Links (Figure 9.17). The address families are defined as Network Controller routes (NCR) and Transport Link routes (TR). These BGP messages are distributed globally.

- A configuration interface to inject new Tier 1 routes. The Tier 1 control agent receives a “create Transport Link” message from the controller application, and it injects a Transport Link route into BGP.

- A Get-Capability interface, which exports the BGP table as a YAML document that contains all Transport Links.

Because of the tiered architecture of BGP announcements, the Tier 2 and the Tier 3 routes are automatically withdrawn if the Tier 1 Transport Link goes down. Hence, no further distribution of error handling messages is needed from the Tier 1 control agent.
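The Tier 1 control agent's configuration and Get-Capability interfaces might be sketched as follows; the route fields and the YAML layout are illustrative assumptions:

```python
# Illustrative sketch of the Tier 1 control agent: it injects Transport Link
# routes (TR) into a stand-in BGP table and renders the table as YAML for the
# Get-Capability interface. Field names are assumptions for illustration.

class Tier1Agent:
    def __init__(self):
        self.bgp_table = []  # stand-in for the Tier 1 BGP RIB

    def create_transport_link(self, name, local_as, remote_as):
        """Handle a 'create Transport Link' request: inject a TR route."""
        self.bgp_table.append(
            {"family": "TR", "name": name,
             "local_as": local_as, "remote_as": remote_as})

    def get_capability(self):
        """Render all Transport Links in the BGP table as a YAML document."""
        lines = ["transport-links:"]
        for route in self.bgp_table:
            lines.append(f"  - name: {route['name']}")
            lines.append(f"    local-as: {route['local_as']}")
            lines.append(f"    remote-as: {route['remote_as']}")
        return "\n".join(lines)


agent = Tier1Agent()
agent.create_transport_link("tl-1", 1, 2)
print(agent.get_capability())
```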

Figure 9.17: BGP announcements Tier 1.

Tiers 2 and 3 Interfaces

The Tier 2 and the Tier 3 control plane interfaces consist of BGP messages previously explained in Section 9.4.2. Figure 9.18 exemplifies how the original SFPR-RD messages [10] are changed into two new versions: SFPR-RD (Tier 2) and SFPR-E-RD (Tier 3) messages. The SFPR-RD message contains all the hops in the SFC and describes the inner Encryption Links as SFPR-E-RD inner hops.

Each hop consists of a Service Index (SI) that is decremented for every hop. The Route Distinguisher (RD) globally identifies the Compute Node (AS number + IP address) and also contains a Service Function Identifier (SFI) that defines the VNF instance ID (e.g., AS1:82.147.36.200,3). This RD is also the global VNF identifier used for authentication (see Section 9.4.2, Distributed Key Management Services).
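As a small illustration, the RD/VNF-identifier format above (AS number + IP address, with the SFI after a comma) can be parsed as follows; the field names in the returned dictionary are assumptions:

```python
# Hedged sketch of parsing the Route Distinguisher / global VNF identifier
# described above, e.g. "AS1:82.147.36.200,3": AS number and Compute Node IP,
# followed by the SFI after a comma. Returned field names are illustrative.

def parse_vnf_identifier(rd):
    """Split an RD string into its AS number, Compute Node IP, and SFI."""
    rd_part, sfi = rd.rsplit(",", 1)
    asn, ip = rd_part.split(":", 1)
    return {"as": asn, "compute_node": ip, "sfi": int(sfi)}


print(parse_vnf_identifier("AS1:82.147.36.200,3"))
# → {'as': 'AS1', 'compute_node': '82.147.36.200', 'sfi': 3}
```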

The setup of the Tier 2 and the Tier 3 peering between network controllers and Compute Nodes is considered domain specific and assumed to be manually provisioned. The Tier 2 and Tier 3 control agents on the Compute Node contain a domain-specific application interface that enables the attachment and detachment of a VNF to and from the network, in order to announce the presence of a VNF on the Compute Node.

The control agents on the network controller correspondingly hold an interface that listens for incoming rendered SFCs. In summary, the interfaces to the Tier 2 and 3 services are:

- BGP speakers on Compute Nodes that announce connected VNFs (SFIR and SFIR-E).

- A BGP speaker on the network controller that announces the SFCs (SFPR-RD and SFPR-E-RD).

- Compute Node agent configuration interfaces for maintaining SFIRs and SFIR-Es.

- A Network controller agent configuration interface to maintain SFPR-RDs and SFPR-E-RDs.

- A Network controller agent interface that can transform YAML into BGP Tiers 2 and 3 routes and vice versa.
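The Compute Node agent's attach/detach interface can be sketched as follows, under the assumption that announcing and withdrawing routes can be modelled as inserting into and removing from a local table; all class, method, and field names are illustrative:

```python
# Illustrative sketch of the Compute Node control agent: attaching a VNF
# announces an SFIR (or SFIR-E) route, detaching withdraws it. Route fields
# are assumptions, not normative message formats.

class ComputeNodeAgent:
    def __init__(self, rd_prefix):
        self.rd_prefix = rd_prefix   # e.g. "AS1:82.147.36.200"
        self.announced = {}          # SFI -> announced route (stand-in RIB)

    def attach_vnf(self, sfi, service_type, encrypting=False):
        """Announce the presence of a VNF on this Compute Node."""
        route = {
            "family": "SFIR-E" if encrypting else "SFIR",
            "rd": f"{self.rd_prefix},{sfi}",
            "service": service_type,
        }
        self.announced[sfi] = route  # announce over BGP (stand-in)
        return route

    def detach_vnf(self, sfi):
        """Withdraw the route for a VNF leaving the network."""
        return self.announced.pop(sfi, None)


agent = ComputeNodeAgent("AS1:82.147.36.200")
agent.attach_vnf(3, "firewall")
```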

The VPN Gateway

The VPN gateway includes network protocol interfaces as follows:

- An IPVPN BGP peering interface towards one or more Service Provider neighbours.

- A VPN tunnel or a direct interface to all other Service Providers.

- A BGP peering interface towards the Tier 1 route reflector that announces the VPN links.

- A Tiers 2 and 3 BGP peering over the Transport Link.

- A configuration interface such as RESTconf or CLI to set up VPN links.

- A KMS server interface to accept VPN connections authorized by the KMS server.
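As an illustration of the configuration interface, a hypothetical RESTconf-style payload for setting up a VPN link might look like this; the data model shown is an assumption, not a published YANG module:

```python
import json

# Hypothetical RESTconf-style payload for the VPN gateway's configuration
# interface. The keys and values are illustrative assumptions.

vpn_link = {
    "vpn-link": {
        "name": "sp-a-to-sp-b",
        "peer": "sp-b.example.net",
        "mode": "tunnel",         # a VPN tunnel or a direct interface
        "kms-authorized": True,   # connection must be authorized by the KMS
    }
}

print(json.dumps(vpn_link, indent=2))
```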


Figure 9.18: BGP announcement Tier 3.

Key Management Service Interfaces

The KMS server implements a protocol that can provide an IPsec Security Association (SA) between two VNFs running encrypting services. The protocol defines a trusted KMS server with two random endpoints (instantiated encrypting VNFs) as the client and the server, where the KMS instructs the VNFs to establish an SA. For the initialization phase, the KMS server and the VNFs utilize a Public Key Infrastructure (PKI) to establish a connection with each other, which the Kerberized Internet Negotiation of Keys (KINK) protocol refers to as PKINIT [23]. Hence, certificates based on public and private keys are issued to peers instead of passwords. The public keys are distributed over BGP and secured by secure origin BGP (soBGP) [24]. The KMS service provides ticket granting to ensure the integrity of messages to the server. To ensure two-way authorization, an additional Pre-Shared Key (PSK) authentication is added to the protocol (Figure 9.19). The PSK is pre-provisioned by the orchestration layer. The KMS server protocol follows the same procedure for Tier 1–3 authentications, where the “user identity” is the only difference (see Section 9.4.2, Distributed Key Management Services).

An additional feature of the authentication and key negotiation protocol is the capability to inform an endpoint about the IP address of the remote endpoint. After authentication of the endpoints, the KMS server offers the endpoints the remote endpoint address together with a new dynamic shared key. Next, the endpoints establish a direct link between them for the SA negotiations. Furthermore, implementation guidelines for the KMS protocol are given in Section 9.6.
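The exchange above can be sketched in a few lines, assuming the PSK check and the hand-out of the remote address and dynamic key can be modelled as a single authorization step; all class, method, and field names are illustrative:

```python
import secrets

# Simplified sketch of the KMS exchange: after authentication, both endpoints
# must present their pre-provisioned PSK, and the KMS then hands each endpoint
# the remote address plus a fresh shared key for the direct SA negotiation.
# Names and message shapes are illustrative assumptions.

class KMSServer:
    def __init__(self, psks):
        self.psks = psks  # "user identity" -> pre-provisioned PSK

    def authorize_pair(self, a_id, a_psk, a_addr, b_id, b_psk, b_addr):
        """Two-way authorization: both endpoints must present the right PSK."""
        if self.psks.get(a_id) != a_psk or self.psks.get(b_id) != b_psk:
            raise PermissionError("PSK authentication failed")
        key = secrets.token_hex(16)  # new dynamic shared key
        # Each endpoint learns the remote address and the shared key, and can
        # then establish a direct link for the SA negotiation.
        return ({"remote": b_addr, "key": key},
                {"remote": a_addr, "key": key})


kms = KMSServer({"vnf-a": "psk-a", "vnf-b": "psk-b"})
to_a, to_b = kms.authorize_pair("vnf-a", "psk-a", "10.0.0.1",
                                "vnf-b", "psk-b", "10.0.0.2")
assert to_a["key"] == to_b["key"]  # both sides hold the same dynamic key
```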

Figure 9.19: The KMS protocol (simplified).

In this architecture, the KMS server is simplified to a single instance. In a real-life deployment, the number of KMS instances should reflect the number of control plane tiers. In summary, the KMS server holds two service interfaces:

- An authentication protocol interface used by the encryption services and the VPN gateway.

- A management interface to maintain the “user” identities (Section 9.4.2, Distributed Key Management Services) and their corresponding PSKs.