
Master thesis:

A Linux implementation of MulTFRC

Adrian Jørgensen

Master’s Thesis, Spring 2014 (submitted 19th May 2014)


Abstract

The use of multimedia streaming on the internet is increasing, and with it comes the need for new protocols. The Datagram Congestion Control Protocol (DCCP) can support such streaming better than existing protocols like the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).

To keep multimedia streaming from causing congestion within the network, congestion control algorithms are required. The implementation of DCCP in the Linux kernel supports two such algorithms. The one this thesis concerns is called TCP-Friendly Rate Control (TFRC). This thesis implements a new congestion control algorithm called MulTFRC.

MulTFRC is made to provide the equivalent of a number of cumulative TFRC flows. This relieves applications of the complexity of managing multiple TFRC flows in order to prioritize their services.

This thesis followed the steps of the internet draft "MulTFRC: TFRC with weighted fairness" [8] to implement MulTFRC. Evaluation of the implementation showed that the algorithm works, but that flaws exist in the Linux kernel implementation of DCCP.


Contents

I Introduction 1

1 Background and related work 5

1.1 OSI-Model . . . 5

1.2 Linux . . . 7

1.2.1 User space . . . 7

1.2.2 Kernel space . . . 7

1.3 Protocols . . . 7

1.3.1 Transmission Control Protocol . . . 7

1.3.2 User Datagram Protocol . . . 8

1.3.3 Datagram Congestion Control Protocol . . . 9

1.3.4 TCP-Friendly Rate Control (TFRC) . . . 9

1.3.5 MulTFRC . . . 10

II The project 13

2 Planning the project 15

2.1 Compilation . . . 15

2.1.1 menuconfig . . . 16

2.1.2 Makefile . . . 16

3 Implementation 19

3.1 Creating a new CCID . . . 19

3.1.1 CCID interface . . . 20

3.1.2 CCID 5 . . . 21

3.1.3 Kconfig . . . 22

3.1.4 Makefile changes . . . 23

3.1.5 Feature negotiation . . . 24

3.1.6 Library replication . . . 26

3.2 MulTFRC implementation . . . 28

3.2.1 Changes to section 3 of RFC 5348 . . . 28

3.2.2 Changes to section 4 of RFC 5348 . . . 46

3.2.3 Changes to section 5 of RFC 5348 . . . 46

3.2.4 Changes to section 6 of RFC 5348 . . . 48

3.2.5 Setting N . . . 49

3.2.6 Additional changes . . . 51


4 Tools for testing 55

4.1 New machine script . . . 55

4.2 Iperf . . . 55

4.3 Tcpdump . . . 56

4.3.1 Tcpdump parsing . . . 57

4.4 Network emulator . . . 57

4.5 Test scripts . . . 57

5 Testing 59

5.1 Implementation testing . . . 59

5.1.1 MulTFRC algorithm tests . . . 59

5.1.2 Virtual testing . . . 60

5.1.3 Real environment testing . . . 60

5.2 Evaluation . . . 62

5.2.1 MulTFRC with N = 1 and 5% loss . . . 63

5.2.2 One TFRC flow and 5% loss . . . 63

5.2.3 One TCP flow and 5% loss . . . 64

5.2.4 MulTFRC and TFRC with changing loss . . . 65

5.2.5 Comparing MulTFRC N = 1 and N = 10 . . . 65

5.2.6 MulTFRC N = 1 vs one TCP flow and loss up to 10% . . . 66

5.2.7 MulTFRC N = 1 vs one TCP flow and loss below 1% . . . 66

5.2.8 MulTFRC N = 5 and five TFRC flows compared to one TCP flow . . . 67

5.2.9 MulTFRC N = 5 and five TFRC flows compared to five TCP flows . . . 68

5.2.10 MulTFRC with low N values together with one TCP flow . . . 69

5.2.11 Low throughput problem with MulTFRC and TFRC . . . 70

III Conclusion 73

6 Conclusion 75


List of Figures

1.1 TCP/IP and OSI models . . . 6

5.1 Testbed set up . . . 62

5.2 Throughput of one MulTFRC flow . . . 63

5.3 Throughput of one TFRC flow . . . 64

5.4 Throughput of one TCP flow . . . 64

5.5 Smoothness comparison of MulTFRC and TFRC with a change of loss from 5% to 1% at 150 seconds. . . 65

5.6 Comparison of MulTFRC with N = 1 and N = 10. . . 66

5.7 MulTFRC N = 1 versus one TCP flow . . . 67

5.8 MulTFRC with N = 1 and one TFRC flow compared with one TCP flow separately. . . 67

5.9 MulTFRC with N = 5 and five TFRC flows compared with one TCP flow separately. . . 68

5.10 MulTFRC with N = 5 and five TFRC flows compared with five TCP flows separately. . . 68

5.11 MulTFRC with N = 5 and five TFRC flows compared with five TCP flows separately. With higher values of loss. . . 69

5.12 MulTFRC with cumulative flows set to one and below compared with one TCP flow. . . 70


List of Tables

3.1 Algorithm tests . . . 36

5.1 Testbed hardware . . . 61


Part I

Introduction


Motivation

The availability of internet bandwidth has increased substantially in recent years. Along with it, online multimedia streaming services have been on the rise.

Data transfer for multimedia streaming services is not the same as sending and delivering data files. A data file must arrive at its receiver in the same size and order as it was sent. A receiver will usually not open or use the file until it has been received in full and reassembled in the correct order. Therefore delay or loss of packets does not interfere with the impression of the file; the only noticeable difference is that the user receives it a bit more slowly, as packets are delayed or the sender has to resend lost packets.

The result of delay on a connection carrying a multimedia stream is considerably more noticeable. If no mechanism is used to handle this, the multimedia service (e.g. a video stream) has to stall and wait for the delayed packets in order to continue. One commonly used solution is to buffer data. Using only a buffer does however introduce other potential problems concerning dropped packets and congestion. It is preferable that the service does not stall but simply continues after a packet has been lost. The loss of a packet might just give the service a slight flaw, or it might even go unnoticed; for a video stream it could mean a blurry picture for less than a second. Imagine a person making a phone call. When the call experiences loss or delay in the network, the person would rather hear a lower-quality voice from the other end than wait for the lost packets to be retransmitted, or the delayed ones to arrive, before hearing what was said.

As for congestion, it is important that the network connection does not receive more data than it can handle. A congested network drops packets more often. It is therefore important to have a congestion control that makes sure each connection does not contribute to congestion in the network. For a multimedia service this means that the quality cannot be higher than what the network can handle if the service is to remain stable.

File transfers usually increase the sending rate quickly to be able to send data as fast as possible. When a packet is lost, the sending speed is usually cut in half by the congestion controller before it starts building back up again. Users of multimedia streams prefer a stable quality of service to a connection which rapidly changes quality or freezes depending on the flow of packets. A congestion control used for multimedia services therefore needs to create a stable flow of data rather than push the limits of the connection. It must provide an algorithm that increases and decreases the flow of data more slowly than a file transfer congestion control algorithm would whenever a packet is dropped. The connection will seem more consistent and stable this way. It might take more time for the protocol used for media services to reach maximum capacity, but the sending speed will not decrease as much each time a packet is lost. When the connection is active for a considerable amount of time, the throughput of both kinds of protocols could be more or less equal.

In order to solve the problems concerning multimedia streaming, these protocols and congestion controls need to be implemented at a level where applications can use them, instead of each application implementing them itself. The Datagram Congestion Control Protocol (DCCP) [7] is a protocol dedicated to sending data in the manner multimedia services need. In order to use a network connection, a protocol should be fair to the other data flows that use it. The most used protocol today is the Transmission Control Protocol (TCP) [10], which is mostly used to send data files. DCCP already has a congestion control called TCP-Friendly Rate Control (TFRC) [6], which provides multimedia streams with the capability to send and receive in a suitable manner. It does not, however, let the application control the importance and priority of the stream. If a system runs two multimedia services, and one of them is twice as important, the application has to implement functionality to scale the sending rate itself.

Thesis description

This thesis focuses on an implementation of the congestion control called MulTFRC [8] within DCCP in the Linux kernel. MulTFRC provides the same functionality as TFRC [6], but, as the name briefly reveals, it also provides the ability to send a defined number of cumulative TFRC streams. This removes the extra complexity needed on the application level to achieve this and puts it into the transport layer.

The implementation in this thesis is made following the internet draft "MulTFRC: TFRC with weighted fairness" [8]. This draft specifies the changes that need to be made in the kernel in order to support MulTFRC. The goal is to make a successful implementation and evaluate it. Any problems that occur during the implementation or evaluation will be discussed.


Chapter 1

Background and related work

When a network is congested, it means there is more traffic on the network than it can handle. A congestion control is responsible for making sure that the flows using a network do not cause congestion. When, for instance, packet loss happens in a network, it could mean that the network is congested. The most common way a congestion control adapts to this is by slowing down the rate at which data is sent.

Most traffic on the internet uses some sort of congestion control in order to adapt the flow of data to the restrictions and capacity of the network it is using. The Transmission Control Protocol (TCP) has quite a few congestion controls implemented in the Linux kernel; Cubic and Reno are two fairly well-known ones. The Datagram Congestion Control Protocol (DCCP) is considered experimental within the Linux kernel and thus does not have as many implemented congestion controls as TCP. In fact, the Linux kernel offers only two. One of these is TCP-Friendly Rate Control (TFRC).

The next few sections give a brief explanation of the background needed to understand the content of this thesis and why MulTFRC is useful.

1.1 OSI-Model

The OSI model consists of 7 layers, shown in figure 1.1, and was created as an international standard for connections between systems. The bottom layer of the OSI model is the physical layer. It is responsible for transmitting raw bits from one system to another. This means it has to decide which voltage levels should represent 1 and 0 bits, and how to interpret received voltages as bits. The second layer from the bottom is the data link layer. Its duty is to create data frames of the input data and deliver them to the layer above, and it also has to make sure it does not drown the receiver's buffer with data. The network layer handles how packets are routed from source to destination.

The transport layer is of most interest for this thesis, as the implementation resides within it. The function of this layer is to accept data from the layer above and split it up if needed. Furthermore, it gains responsibilities depending on which protocol is used, i.e. the user can decide whether this layer should deliver bytes and packets in the exact same order and at the same time guarantee that the receiver got all of the data, or choose a protocol that has some or none of these properties. Relevant protocols will be explained in section 1.3.

Figure 1.1: TCP/IP and OSI models

The session layer enables users to create sessions between each other and then keeps track of whose turn it is to transmit, tokens and synchronization. The presentation layer presents the bits and bytes from the lower layers in the way the layer above requests them. This is done so that machines with different representations of data can communicate. The last layer of the OSI model is the application layer. Different protocols are used here than in the transport layer, and these protocols are used directly by users, e.g. HTTP (Hypertext Transfer Protocol) [3], which is used to download web pages. When a web page is downloaded, the transport layer uses TCP to make sure every packet is received in the correct order.

Today it is not unusual to skip the session and presentation layers, and as can be seen in figure 1.1, they are not represented in the TCP/IP model. The TCP/IP model has also combined the data link layer and the physical layer into a host-to-network layer, and it has renamed the network layer to the internet layer. The focus of the TCP/IP model lies within the three top layers, e.g. IP (Internet Protocol) on the internet layer, TCP on the transport layer and HTTP on the application layer.

A typical router will use the three bottom layers of the OSI model, or the two bottom ones of the TCP/IP model, while a typical end system will use all the layers of the TCP/IP model.


1.2 Linux

In 1991 a Finnish student named Linus Torvalds released the first version of the source code now known as the Linux kernel. His motivation for doing so was based on frustration concerning licensing and his interest in operating systems. Today the Linux kernel is used as a base for many different operating systems, called Linux distributions. Debian is a fairly popular distribution and is also the one used to run the kernel implementation of this thesis.

1.2.1 User space

User space is defined as all code that does not run within the kernel. This means that all programs like terminals, web browsers and editors, to mention some, run in user space. Libraries of various kinds exist in user space, e.g. the C library with open, exec and so on. Advanced graphics drivers and system daemons for sound, and the like, also run in user space.

1.2.2 Kernel space

The responsibility of kernel space is to provide the system calls of the Linux kernel, e.g. for file systems and the network subsystem. It also handles process scheduling and memory management. This thesis is implemented in the network subsystem of the Linux kernel.

1.3 Protocols

Two protocols are mainly used for transporting data in computer networks: the first, the Transmission Control Protocol (TCP) [10], is connection oriented, and the second, the User Datagram Protocol (UDP) [11], is connectionless. The implementation in this master thesis will use neither of these, but instead the rather new Datagram Congestion Control Protocol (DCCP) [7]. The next sections give a brief introduction to all three.

1.3.1 Transmission Control Protocol

Transmission Control Protocol (TCP) [10] was one of the first available protocols for sending data files between end systems. Today it is the most used protocol on the internet. It provides all the basic abilities an application needs to send data files.

First a connection is established between the sender and the receiver. The connection set up is done with what is called a 3-way handshake, so named because 3 successfully sent packets are needed to establish the connection. A connection establishment uses two bits in the TCP header, namely the synchronize bit (SYN) and the acknowledge bit (ACK). As an example, say we have a host A and a host B. Host A first sends a packet with the SYN bit set to 1 to host B. Host B accepts the connection and sends back a synchronize-acknowledgement (SYN-ACK) packet, meaning that both the SYN and ACK bits are set to 1. When host A receives the SYN-ACK packet it knows host B has accepted the connection and sends back a packet with the ACK bit set to 1 to tell host B that the connection is established. When host B receives the ACK packet, the connection is considered established at both ends.

When the connection has been established, the sender starts to send data to the receiver. The receiver then has to acknowledge the data it receives. Depending on the sending method in use, acknowledgements can be sent back more or less often. It is usual to acknowledge packets within a certain window. E.g. a sender can send 20 data packets and the receiver then only needs to send an acknowledgement for packet number 20. If it does not receive packet number 20, it sends an acknowledgement for the received packet with the highest sequence number. Because TCP acknowledges received packets, it is a reliable protocol.

The termination of a connection is done with a 4-way handshake, which means 4 successfully sent control packets.

TCP also orders the packets before delivering them to the application. If a packet is lost, the transport layer has to make the sender retransmit that packet and wait for it. It is guaranteed that the receiving application will get the data in the exact order it was sent. To combat congestion, the sending rate between the endpoints is altered rapidly in response to loss or delay. When allowed by the network, TCP will also quickly increase the sending rate on paths with high bandwidth. In short, TCP provides the basic functionality needed for the transfer of data files, and this is what makes it popular.

MulTCP

MulTCP is a protocol that offers the equivalent of N cumulative TCP flows and provides this as one interface to the application layer, thereby removing the complexity of creating more than one flow within the application. The idea itself is good, but there have been no successful attempts to push it as a replacement for the current use of TCP.

1.3.2 User Datagram Protocol

The User Datagram Protocol (UDP) [11] is, together with TCP, the most used transport protocol on the internet. UDP offers a connectionless transport of data for the application layer, thus giving more control to the application than TCP does. A connectionless service needs no set up or tear down of the connection. The sender application sends a packet and the receiver application decides what to do with it. If the packet is some sort of request, the receiver application will most likely respond; no response is sent from the transport layer itself.

Every application has to implement its own algorithms and mechanisms to control the flow of data. This gives the potential for less overhead than TCP, but is also more complex for the application itself. To date, UDP is mostly used for applications that happen in real time, where lost packets are acceptable, e.g. video calls and various types of real time games. It is also useful in some client-server situations, for instance where no connection set up is required and the client sends a request that expects a short reply. If the client does not receive a reply, it can simply send a new request after a timeout. In the case of the Domain Name System (DNS) [9], only two packets are sent (request and reply). The network overhead of a connection set up and tear down for this kind of communication would increase the number of packets sent several times over.

A UDP packet consists of source and destination ports, a length and a checksum. This means it can forward the packet to the application attached to the given port and validate the integrity of the packet. The checksum is however optional, and in some cases, like digitalized speech where quality does not matter as much, using the checksum field is not necessary.
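For reference, this four-field layout is exactly how the UDP header is declared in the Linux kernel sources (include/linux/udp.h); the sketch below reproduces it from the 2.6-era kernel:

struct udphdr {
	__be16	source;	/* source port */
	__be16	dest;	/* destination port */
	__be16	len;	/* length of header plus payload, in bytes */
	__be16	check;	/* checksum; optional for IPv4, 0 when unused */
};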

1.3.3 Datagram Congestion Control Protocol

The Datagram Congestion Control Protocol (DCCP) [7] was created as a better alternative to UDP for services that do not need everything TCP offers. It has reliable set up and tear down of connections like TCP, but its data flows are unreliable while still being congestion controlled. This means it could be a better alternative for multimedia streaming services than both UDP and TCP. It was standardized in 2006, yet it is rarely used. This might be due to the fact that it is not natively supported in any of the big operating systems. The Linux kernel contains an implementation of DCCP. It is however marked as experimental, which means "Prompt for development and/or incomplete code/drivers".

The congestion control algorithms of DCCP are called CCIDs (Congestion Control IDentifiers) in the Linux kernel. Only two official CCIDs are currently implemented: the TCP-like congestion control as CCID 2 and the TCP-Friendly Rate Control (TFRC) as CCID 3.
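For context, an application selects DCCP through the ordinary sockets API; which CCID is then used is negotiated by the kernel. A minimal user space sketch (error handling reduced to a single perror call) could look like this:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	/* SOCK_DCCP (6) and IPPROTO_DCCP (33) are the standard Linux values */
	int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);

	if (fd < 0)
		perror("socket");	/* fails e.g. if the dccp module is not loaded */
	return 0;
}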

1.3.4 TCP-Friendly Rate Control (TFRC)

Flows have to share the available bandwidth on most networks. Thus it is common to have flows adapt to changes and share the network equally. TCP-Friendly Rate Control (TFRC) [6] is a congestion control algorithm in DCCP that is specifically made to fill the needs of real time communication and the like. It is designed to share the bandwidth of a network fairly with TCP, hence the name. TFRC is also designed to have much less variation in sending rate than TCP, which is one of the factors that makes it more suitable for real time communication.

While offering a more stable sending rate, TFRC also has a downside compared to TCP. If changes occur in the network that affect the sending rate, TFRC is much slower to respond than TCP. This implies that it is not well suited for sending much data in a short amount of time, and using it for anything other than applications that require a smooth sending rate would be a waste when there are TCP-like alternatives.

TFRC is made to calculate some of the information on the receiver side of a connection. This potentially leaves the server with more CPU and memory to serve more concurrent connections.

TFRC algorithm

To calculate the sending rate, TFRC uses the round trip time, the packet size and the loss event rate. A loss event is defined as a window of data (one round trip time) in which one of the packets is lost or Explicit Congestion Notification (ECN) [12] is marked on one of the packets. The last 8 loss events are used in the calculation to obtain the smoothness needed for TFRC.
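As a sketch of how those 8 loss events enter the calculation: RFC 5348 [5] computes the mean loss interval as a weighted average of the most recent loss intervals $I_i$, with weights that de-emphasize the older half, and the loss event rate is the inverse of that mean:

$$\hat{I} = \frac{\sum_{i=1}^{8} w_i I_i}{\sum_{i=1}^{8} w_i}, \qquad p = \frac{1}{\hat{I}}, \qquad (w_1,\dots,w_8) = (1,\, 1,\, 1,\, 1,\, 0.8,\, 0.6,\, 0.4,\, 0.2)$$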

The loss event rate is calculated on the receiving end of the connection. The sender measures the round trip time using the feedback packets that carry this calculation back to it. The packet size is obviously known to the sender. As all of the necessary information is then available to the sender, it can calculate the sending rate with the congestion control algorithm. To be fair to TCP flows, the TFRC equation must use one of the equations for TCP throughput as a function of round trip time and loss event rate. The throughput equation for Reno TCP is used in the Linux kernel to achieve this.

1.3.5 MulTFRC

As one might interpret from the name, MulTFRC [2] extends the abilities of TFRC by providing the equivalent of multiple TFRC flows. The idea is to offer a protocol that can provide N cumulative TFRC flows over a single connection socket, thus providing the bandwidth of N connections through one interface to the application. If an application wants the bandwidth of two TFRC flows, it can simply set up the connection socket with MulTFRC and N = 2, rather than setting up two TFRC connections within the application and building complex merging and splitting mechanisms to combine them. A lot of the complexity concerning connection set up is then removed from the application level and put into the transport layer. This makes it easier to create applications that require the bandwidth of more or less than one TFRC flow in order to keep their flows prioritized among other flows. While MulTFRC removes some application complexity, it also removes a mentionable amount of packet overhead and should therefore be a better solution network-wise. E.g. two TFRC flows would have the overhead of two flows, while MulTFRC with N = 2 would have the overhead of one flow but provide the sending rate of two.

Setting N to less than one makes a MulTFRC flow less important than other TCP-like flows; a value higher than one makes it more prioritized than TCP-like flows; and N equal to one makes it equally important.


The internet draft "MulTFRC: TFRC with weighted fairness" [8], written by Dragana Damjanovic and Michael Welzl, describes where MulTFRC is different than TFRC and the necessary changes and steps needed to convert the TFRC implementation of the Linux kernel into MulTFRC.

MulTFRC algorithm

At first glance one might think the result of the TFRC equation could simply be multiplied by N to achieve the desired effect of MulTFRC. This would however not suffice: the loss event probability would be needed for N flows, but there is in fact only one flow. MulTFRC instead calculates the number of losses within a loss event, represented by the variable called j. Together with the rest of the variables from the TFRC equation, it can be used to calculate the sending rate of N cumulative flows. MulTFRC uses floating-point arithmetic for calculation precision.

The algorithm of MulTFRC has been proven to work in the PhD thesis "Parallel TCP Data Transfers: A Practical Model and its Application" [1] by Dragana Damjanovic.


Part II

The project


Chapter 2

Planning the project

A natural start is getting to know where in the kernel the code is supposed to be applied. According to RFC 4342 [4], the Linux kernel implementation of TFRC is called CCID 3. Looking through the kernel with tools like grep and vim, references to TFRC are mostly found in net/dccp/ccids.

The algorithm for TFRC is located in the library folder for CCIDs, more precisely in the file net/dccp/ccids/lib/tfrc_equation.c. It seemed likely that the equation for MulTFRC should be put in the same place, and this is where the work towards a MulTFRC implementation started. To have an easy way to compare TFRC and MulTFRC during evaluation, the first step was to copy all code for TFRC (CCID 3) and create a MulTFRC version that worked identically. It is easier to switch congestion control algorithm on a running system than to reboot into an original kernel between tests. As it is rumoured that someone is already working on a CCID 4, it was decided that the copy would be named CCID 5.

After the copy has been made, the next step is changing it to meet the requirements of MulTFRC. The document "MulTFRC: TFRC with weighted fairness" [8] provides detailed instructions on what has to be changed for MulTFRC to work. These changes are based upon the RFCs ([4] and [5]) describing the implementation of TFRC. The RFCs are also useful for locating the fragments of code that need to be changed, as "MulTFRC: TFRC with weighted fairness" [8] refers specifically to sections in them.

2.1 Compilation

The Linux kernel in its entirety needs to be compiled in order to run the implemented code. The first time this is done it takes quite some time to finish. After the kernel has been through a full compilation once, new changes only require recompiling the necessary and affected parts. To reduce the time and effort it takes to reboot a machine in order to test a newly compiled kernel, a virtual machine is of much value. A virtual machine also provides some extra features which are convenient when working on a Linux kernel implementation. Besides writing the kernel log and other defined prints to files on the host, it also makes it easy to check that the implementation works in a virtual network; e.g. two virtual machines using the custom compiled kernel are much easier to set up than two physical machines.

2.1.1 menuconfig

Several tools can be used to configure the Linux kernel before it is compiled. Perhaps the most commonly used is menuconfig. When all required tools are installed on a system, menuconfig can be started from the root folder of the kernel by running the command below.

$ make menuconfig

This presents a categorical menu where the user can navigate around and tweak the kernel to whatever configuration is required. There are three main choices for each option: y means include the feature in the kernel, m means compile the feature as a module loadable by the kernel during runtime, and n means do not include it in the kernel. All the presented choices are configured in the Kconfig files of the kernel, and the building of the kernel is done with the help of makefiles.
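The choices end up as CONFIG_* symbols in the kernel's .config file. As an illustration (the particular values shown here are an assumption, but all three symbols exist in kernel 2.6.32), configuring DCCP as a module with CCID 3 enabled might produce:

CONFIG_IP_DCCP=m
CONFIG_IP_DCCP_CCID3=y
CONFIG_IP_DCCP_TFRC_LIB=y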

2.1.2 Makefile

Make is a program that reads makefiles and carries out the instructions they contain. Usually the instructions are aimed at building executable programs. The Linux kernel is built by the instructions of makefiles located in various directories throughout the kernel. menuconfig can be used to configure which parts of the kernel these makefiles build. In fact, menuconfig is itself a target of the Make utility.

Building in Debian

Debian has tools that make it easy to compile and install the kernel within a common Linux operating system. For that reason it was chosen as the platform for the test machines. Version 6.0.5 of Debian was the most recently released version at the time of the implementation. For consistency, this version, along with the long-term stable release 2.6.32 of the kernel, was used during the implementation and evaluation of this thesis. The kernel itself can be fetched from various sites on the internet. The easiest way to get it on Debian is by running the following command.

$ apt-get install linux-source-2.6.32

Other packages are needed in order to compile the kernel in Debian: build-essential, kernel-package, bzip2 and libncurses5-dev.


Thereafter a compressed kernel is located at /usr/src/linux-source-2.6.32.tar.bz2.

The next step is to extract it and then configure the Linux kernel source for compilation. This can be done with menuconfig by running the following command in the root folder of the extracted Linux kernel.

$ make menuconfig

From the presented menu the user can enable and disable features of the kernel as desired. The features for DCCP obviously need to be enabled to compile the DCCP protocol and the implementation of MulTFRC. The DCCP option can be found under Networking support and then Networking options. There the option The DCCP Protocol (EXPERIMENTAL) can be set. If it is set with * (marked) it will be built into the kernel during compilation, and if it is set with M it will be compiled and linked to the kernel as a loadable module. If nothing is chosen, the kernel will be built without support for DCCP. Next in order is a kernel compilation. A recognizable name can be given to the compiled image, although grub will automatically pick the most recently installed image as the default boot option.

The command to compile (with -dccp-multfrc appended to the version name):

$ make-kpkg --append-to-version=-dccp-multfrc kernel_image --initrd binary

And lastly, the command for installing the compiled image of the Linux kernel.

$ dpkg -i linux-image-2.6.32-dccp-multfrc_i386.deb

The installed kernel can now be chosen during start up from the menu presented by grub.


Chapter 3

Implementation

In order to start the implementation of MulTFRC, a new CCID (CCID 5) must be added. Making sure that CCID 5 works exactly like CCID 3 is important before any alteration towards MulTFRC is made. Running tests and comparing their results will determine whether the new CCID 5 has been correctly added. Therefore the first round of kernel implementation only consists of creating CCID 5 without any of the changes or additions needed by MulTFRC. Finding all chunks of code where CCID 3 is used is crucial for creating CCID 5. As a first version, CCID 5 will use TFRC where CCID 3 does.

3.1 Creating a new CCID

To start the creation of a new CCID, the first addition to the kernel was made to an enum in include/linux/dccp.h. The idea behind the use of enums in this file is to provide a symbolic identifier for each CCID. These identifiers are often more readable and memorable than the values they represent. E.g. DCCPC_CCID5 = 5 means that whenever DCCPC_CCID5 is assigned to an integer, the value set is in fact 5. This enum field was added on line number 5 in listing 3.1 and is the only alteration of include/linux/dccp.h. In this case it is not hard to remember, as the value is the id number of CCID 5, but it is still good practice to use these kinds of enums even for easily memorable representations. In contrast, a less readable example could be DCCPF_SEND_LEV_RATE = 192. An example usage of DCCPC_CCID5 can be seen in listing 3.8 on line number 56.

Listing 3.1: include/linux/dccp.h

1 /* DCCP CCIDS */
2 enum {
3     DCCPC_CCID2 = 2,
4     DCCPC_CCID3 = 3,
5     DCCPC_CCID5 = 5,
6 };


3.1.1 CCID interface

To set up the interface for a specific CCID, a struct of ccid_operations needs to be declared. If the kernel is compiled with CONFIG_IP_DCCP_CCID5, which means it is compiled with CCID 5 enabled, then net/dccp/ccid.h needs to know how to call the functions specific to CCID 5. An extern struct with CCID 5 specific information will therefore be available in net/dccp/ccid.h after compilation of the kernel (see listing 3.2), and thus it is also available to net/dccp/ccid.c, which activates the CCID given that it has been defined and is amongst the available extern structs.

Listing 3.2: net/dccp/ccid.h

#ifdef CONFIG_IP_DCCP_CCID5
extern struct ccid_operations ccid5_ops;
#endif

Kprobe

The file net/dccp/probe.c is used to enable observation of DCCP flows with kprobes. To keep things as similar to CCID 3 as possible, a handler for CCID 5 was also added. Listing 3.3 shows the function that is called whenever a packet is sent with DCCP. A test to check whether the CCID used by the sent packet is CCID 5 has to be added to make this work (line number 11 in listing 3.3). When CCID 5 is used, a struct with socket information for the sender half (line number 6 in listing 3.3) is set with the current socket information. A test then checks whether this struct is set or not. When it is set, kprobe can non-disruptively read every variable that is printed from this handler. Before MulTFRC is implemented this print is the same as the one for CCID 3, but more information will eventually be needed and thus more information will be printed as well.

Listing 3.3: net/dccp/probe.c

1  static int jdccp_sendmsg(struct kiocb *iocb, struct sock *sk,
2                           struct msghdr *msg, size_t size)
3  {
4      const struct inet_sock *inet = inet_sk(sk);
5      struct ccid3_hc_tx_sock *ccid3_hctx = NULL;
6      struct ccid5_hc_tx_sock *ccid5_hctx = NULL;
7
8      if (ccid_get_current_tx_ccid(dccp_sk(sk)) == DCCPC_CCID3)
9          ccid3_hctx = ccid3_hc_tx_sk(sk);
10
11     if (ccid_get_current_tx_ccid(dccp_sk(sk)) == DCCPC_CCID5)
12         ccid5_hctx = ccid5_hc_tx_sk(sk);
13
14     if (port == 0 || ntohs(inet->dport) == port ||
15         ntohs(inet->sport) == port) {
16         if (ccid3_hctx)
17             printl("%pI4:%u %pI4:%u %d %d %d %d %u "
18                    "%llu %llu %d\n",
19                    &inet->saddr, ntohs(inet->sport),
20                    &inet->daddr, ntohs(inet->dport), size,
21                    ccid3_hctx->ccid3hctx_s, ccid3_hctx->ccid3hctx_rtt,
22                    ccid3_hctx->ccid3hctx_p, ccid3_hctx->ccid3hctx_x_calc,
23                    ccid3_hctx->ccid3hctx_x_recv >> 6,
24                    ccid3_hctx->ccid3hctx_x >> 6, ccid3_hctx->ccid3hctx_t_ipi);
25         else if (ccid5_hctx)
26             printl("%pI4:%u %pI4:%u %d %d %d %d %u "
27                    "%llu %llu %d\n",
28                    &inet->saddr, ntohs(inet->sport),
29                    &inet->daddr, ntohs(inet->dport), size,
30                    ccid5_hctx->ccid5hctx_s, ccid5_hctx->ccid5hctx_rtt,
31                    ccid5_hctx->ccid5hctx_p, ccid5_hctx->ccid5hctx_x_calc,
32                    ccid5_hctx->ccid5hctx_x_recv >> 6,
33                    ccid5_hctx->ccid5hctx_x >> 6, ccid5_hctx->ccid5hctx_t_ipi);
34         else
35             printl("%pI4:%u %pI4:%u %d\n",
36                    &inet->saddr, ntohs(inet->sport),
37                    &inet->daddr, ntohs(inet->dport), size);
38     }
39
40     jprobe_return();
41     return 0;
42 }

3.1.2 CCID 5

The next step towards creating CCID 5 is to create both net/dccp/ccids/ccid5.c and net/dccp/ccids/ccid5.h. As they will not yet be using MulTFRC, the most important part of these two files is the header file (net/dccp/ccids/ccid5.h). This is where the socket options for both the sender half and the receiver half of the connection are defined in structs; see listing 3.4. In this version they both have references to TFRC structs.

The sender side includes struct tfrc_tx_info, which contains important information concerning the sending rate (ccid5hctx_x, ccid5hctx_x_recv and ccid5hctx_x_calc), the round trip time (ccid5hctx_rtt) and the current loss event rate (ccid5hctx_p). The packet size (ccid5hctx_s) is also contained in the socket struct, although it is set outside of the TFRC struct. The parameters used to calculate the sending rate (ccid5hctx_x_calc) are the packet size (ccid5hctx_s), the round trip time (ccid5hctx_rtt) and the current loss event rate (ccid5hctx_p). Together with these three parameters, N and j will also be needed to calculate the sending rate with MulTFRC.

Listing 3.4: net/dccp/ccids/ccid5.h

struct ccid5_hc_tx_sock {
    struct tfrc_tx_info             ccid5hctx_tfrc;
#define ccid5hctx_x      ccid5hctx_tfrc.tfrctx_x
#define ccid5hctx_x_recv ccid5hctx_tfrc.tfrctx_x_recv
#define ccid5hctx_x_calc ccid5hctx_tfrc.tfrctx_x_calc
#define ccid5hctx_rtt    ccid5hctx_tfrc.tfrctx_rtt
#define ccid5hctx_p      ccid5hctx_tfrc.tfrctx_p
#define ccid5hctx_t_rto  ccid5hctx_tfrc.tfrctx_rto
#define ccid5hctx_t_ipi  ccid5hctx_tfrc.tfrctx_ipi
    u16                             ccid5hctx_s;
    enum ccid5_hc_tx_states         ccid5hctx_state:8;
    u8                              ccid5hctx_last_win_count;
    ktime_t                         ccid5hctx_t_last_win_count;
    struct timer_list               ccid5hctx_no_feedback_timer;
    ktime_t                         ccid5hctx_t_ld;
    ktime_t                         ccid5hctx_t_nom;
    u32                             ccid5hctx_delta;
    struct tfrc_tx_hist_entry       *ccid5hctx_hist;
    struct ccid5_options_received   ccid5hctx_options_received;
};

struct ccid5_hc_rx_sock {
    u8                              ccid5hcrx_last_counter:4;
    enum ccid5_hc_rx_states         ccid5hcrx_state:8;
    u32                             ccid5hcrx_bytes_recv;
    u32                             ccid5hcrx_x_recv;
    u32                             ccid5hcrx_rtt;
    ktime_t                         ccid5hcrx_tstamp_last_feedback;
    struct tfrc_rx_hist             ccid5hcrx_hist;
    struct tfrc_loss_hist           ccid5hcrx_li_hist;
    u16                             ccid5hcrx_s;
#define ccid5hcrx_pinv   ccid5hcrx_li_hist.i_mean
};

3.1.3 Kconfig

The file net/dccp/ccids/Kconfig needs the option for CCID 5 added in order for make menuconfig to present it as a choice and for the Makefile to compile it. The choices made with make menuconfig are reflected in net/dccp/Makefile (listing 3.6). Basically, the addition in listing 3.5 provides CCID 5 as an option if DCCP is either enabled within the kernel (IP_DCCP = y) or as a module (IP_DCCP = m). It also provides the option for enabling debugging messages and the option to set a different nofeedback timer. The TFRC library needs to be added if the kernel is compiled with either or both of CCID 3 and CCID 5. Lines 29 and 32 in listing 3.5 test whether CCID 3 or CCID 5 is enabled, and if so enable the TFRC library and its debug messages. It is possible to give them all a default value, but as this has not been done for CCID 3, it will not be done for CCID 5 either.

Listing 3.5: net/dccp/ccids/Kconfig

1  config IP_DCCP_CCID5
2      bool "CCID5 (MulTFRC) (EXPERIMENTAL)"
3      def_bool y if (IP_DCCP = y || IP_DCCP = m)
4      ---help---
5        CCID5 denotes MulTFRC, an equation-based rate-controlled
6        congestion control mechanism.
7
8        If in doubt, say N.
9
10 config IP_DCCP_CCID5_DEBUG
11     bool "CCID5 debugging messages"
12     depends on IP_DCCP_CCID5
13     ---help---
14       Enable CCID5 specific debugging messages.
15
16       The debugging output can additionally be toggled by setting the
17       ccid5_debug parameter to 0 or 1.
18
19       If in doubt, say N.
20
21 config IP_DCCP_CCID5_RTO
22     int "Use higher bound for nofeedback timer"
23     default 100
24     depends on IP_DCCP_CCID5 && EXPERIMENTAL
25     ---help---
26       Use higher lower bound for nofeedback timer expiration.
27
28 config IP_DCCP_TFRC_LIB
29     def_bool y if (IP_DCCP_CCID3 || IP_DCCP_CCID5)
30
31 config IP_DCCP_TFRC_DEBUG
32     def_bool y if (IP_DCCP_CCID3_DEBUG || IP_DCCP_CCID5_DEBUG)

3.1.4 Makefile changes

During compilation of the kernel, net/dccp/Makefile is called to build all necessary files plus the optional files needed by the options chosen with menuconfig. All options with dccp-y will be built. With some exceptions, most of the options in net/dccp/Makefile are determined by menuconfig. CCID 2 is enabled by default whenever DCCP is compiled and has dccp-y hard coded. CCID 5 will be an optional choice in menuconfig, and the option in net/dccp/Makefile is defined by dccp-$(CONFIG_IP_DCCP_CCID5), which expands to either y (built into the kernel) or m (built as a module). Any files added for CCID 5 also have to be added in net/dccp/Makefile. Before MulTFRC is added, only line number 8 in listing 3.6 has to be added.

Listing 3.6: net/dccp/Makefile

1 dccp-y += ccids/ccid2.o ackvec.o
2 dccp-$(CONFIG_IP_DCCP_CCID3) += ccids/ccid3.o
3 dccp-$(CONFIG_IP_DCCP_TFRC_LIB) += ccids/lib/tfrc.o \
4                                    ccids/lib/tfrc_equation.o \
5                                    ccids/lib/packet_history.o \
6                                    ccids/lib/loss_interval.o
7
8 dccp-$(CONFIG_IP_DCCP_CCID5) += ccids/ccid5.o


3.1.5 Feature negotiation

The file net/dccp/feat.c is responsible for feature negotiation during the set up of a DCCP connection, as specified in RFC 4340 [7], section 6. A few lines of code have to be added in this file to provide feature negotiation for CCID 5.

Listing 3.7: net/dccp/feat.c

int sysctl_dccp_rx_ccid __read_mostly = 5;
int sysctl_dccp_tx_ccid __read_mostly = 5;

To make things easier during testing, CCID 5 was set as the default CCID for DCCP. This is achieved by setting the two integers in listing 3.7 to the value 5, which ensures that CCID 5 is the default congestion control mechanism for both sending (sysctl_dccp_tx_ccid) and receiving (sysctl_dccp_rx_ccid) packets when the DCCP protocol is used. These variables are the default values of kernel parameters; similar ones can be found in various places within the kernel, and they can also be changed during runtime. The previous default value was 3 (CCID 3). Whenever CCID 3 needs to be tested, it can simply be set with two commands, one for sending and one for receiving. Arguably the default could have been left untouched, but it is more than likely that CCID 5 will be the most used CCID during the implementation and testing for this thesis, so changing it avoids setting the sysctl variables after each boot in order to run tests with CCID 5. I.e. sysctl_dccp_rx_ccid and sysctl_dccp_tx_ccid have to be set to 3 in order to run tests with TFRC (CCID 3) and compare them with MulTFRC (CCID 5).

To set these variables from the terminal, the kernel must either have DCCP built in, or the DCCP kernel module must have been used or loaded with modprobe. After this has been done, sysctl must be called with the write parameter -w to set a new value for the kernel parameter. An example of how the kernel parameters can be changed to use a different congestion control algorithm in DCCP than the current one is shown below.

$ modprobe dccp (needed if DCCP is compiled as a module)

$ sysctl -w net.dccp.default.rx_ccid=3

$ sysctl -w net.dccp.default.tx_ccid=3

The next change in net/dccp/feat.c concerns the dependencies that describe properties for each side of a connection, e.g. DCCPF_SEND_ACK_VECTOR. These properties should be the same for CCID 5 as for CCID 3. It is clearly stated in the comments that each CCID should have its own corresponding dependency table, and therefore a table equal to that of CCID 3 was made instead of returning the same table for CCID 5. See the code in listing 3.8.

Listing 3.8: net/dccp/feat.c

1  static const struct ccid_dependency ccid5_dependencies[2][5] = {
2      {   /* Dependencies same as CCID3 */
3          {
4              .dependent_feat = DCCPF_SEND_ACK_VECTOR,
5              .is_local       = true,
6              .is_mandatory   = false,
7              .val            = 0
8          },
9          {
10             .dependent_feat = DCCPF_SEND_LEV_RATE,
11             .is_local       = true,
12             .is_mandatory   = true,
13             .val            = 1
14         },
15         {
16             .dependent_feat = DCCPF_SEND_NDP_COUNT,
17             .is_local       = false,
18             .is_mandatory   = true,
19             .val            = 1
20         },
21         { 0, 0, 0, 0 },
22     },
23     {
24         {
25             .dependent_feat = DCCPF_SEND_ACK_VECTOR,
26             .is_local       = false,
27             .is_mandatory   = false,
28             .val            = 0
29         },
30         {
31             .dependent_feat = DCCPF_SEND_LEV_RATE,
32             .is_local       = true,
33             .is_mandatory   = true,
34             .val            = 1
35         },
36         {
37             .dependent_feat = DCCPF_ACK_RATIO,
38             .is_local       = true,
39             .is_mandatory   = false,
40             .val            = 0
41         },
42         {
43             .dependent_feat = DCCPF_SEND_NDP_COUNT,
44             .is_local       = true,
45             .is_mandatory   = false,
46             .val            = 1
47         },
48         { 0, 0, 0, 0 }
49     }
50 };
51 switch (ccid) {
52 case DCCPC_CCID2:
53     return ccid2_dependencies[is_local];
54 case DCCPC_CCID3:
55     return ccid3_dependencies[is_local];
56 case DCCPC_CCID5:
57     return ccid5_dependencies[is_local];
58 default:
59     return NULL;
60 }

The dependencies for the receiver side are in the first block (lines 3 to 22 in listing 3.8) and the dependencies for the sender side follow in the next block (lines 23 to 50 in listing 3.8). The dependency DCCPF_SEND_ACK_VECTOR for the receiver has .is_local set to true and .val set to 0, meaning that sending of acknowledgement vectors is disabled locally. The sender side, however, has .is_local set to false, meaning that the sender side does not disable reception of acknowledgement vectors, but they will be ignored by the underlying congestion control algorithm. On the sender side, DCCPF_ACK_RATIO is also set with a value of 0. Both of these dependencies are naturally disabled, as acknowledgement vectors are not used in CCID 3 or CCID 5.

The dependency for sending and receiving the loss event rate is defined with DCCPF_SEND_LEV_RATE. For both sender and receiver it is enabled and mandatory; both send it and expect to receive it. It is needed on both sides to determine the sending rate.

The last dependency field is the Non-Data Packet (NDP) count, represented by DCCPF_SEND_NDP_COUNT. It is used to determine whether data packets were lost in a packet loss situation. This is done by checking the NDP count against the sequence numbers of the current packet and the last packet received before the loss happened. If, for example, a gap of three sequence numbers is observed and the NDP count has also increased by three, only non-data packets were lost. This dependency is only sent by the sender side and only expected by the receiver side. Further information can be found in section 7.7 of RFC 4340 [7] and in section 6.1 of RFC 4342 [4].

CCID 5 replicated

CCID 5 is now added to the kernel and works in exactly the same way as CCID 3. The next step towards a complete MulTFRC implementation is to create library files for MulTFRC and replace all previous references to the TFRC library with them. Any new files also need to be added in the Makefiles to make sure they are included during compilation.

3.1.6 Library replication

The first library file to be replicated is net/dccp/ccids/lib/tfrc.h, and the obvious name for the copy is net/dccp/ccids/lib/multfrc.h. This file consists mostly of linked initiation functions for both the sender and receiver parts of a connection. It also contains a linked function for the TFRC equation itself, which is implemented in net/dccp/ccids/lib/multfrc_equation.c. These linked functions can be seen in listing 3.9. The TFRC function call with input parameters is found on line number 1; this linked function will have to take two more parameters in the MulTFRC implementation. For now, all function links containing *tfrc* are simply renamed to *multfrc*.


Listing 3.9 shows parts of net/dccp/ccids/lib/tfrc.h before the file and its linked functions were renamed.

Listing 3.9: net/dccp/ccids/lib/tfrc.h

1 extern u32 tfrc_calc_x(u16 s, u32 R, u32 p);
2 extern u32 tfrc_calc_x_reverse_lookup(u32 fvalue);
3
4 extern int tfrc_tx_packet_history_init(void);
5 extern void tfrc_tx_packet_history_exit(void);
6 extern int tfrc_rx_packet_history_init(void);
7 extern void tfrc_rx_packet_history_exit(void);

Listing 3.10 shows the corresponding parts of net/dccp/ccids/lib/multfrc.h after renaming.

Listing 3.10: net/dccp/ccids/lib/multfrc.h

1 extern u32 multfrc_calc_x(u16 s, u32 R, u32 p);
2 extern u32 multfrc_calc_x_reverse_lookup(u32 fvalue);
3
4 extern int multfrc_tx_packet_history_init(void);
5 extern void multfrc_tx_packet_history_exit(void);
6 extern int multfrc_rx_packet_history_init(void);
7 extern void multfrc_rx_packet_history_exit(void);

The same renaming of functions, data structures and variables was done in several more files to create a renamed, working copy of the TFRC library:

net/dccp/ccids/lib/multfrc.h
net/dccp/ccids/lib/multfrc.c
net/dccp/ccids/lib/multfrc_equation.h
net/dccp/ccids/lib/multfrc_equation.c
net/dccp/ccids/lib/mulpacket_history.h
net/dccp/ccids/lib/mulpacket_history.c
net/dccp/ccids/lib/mulloss_interval.h
net/dccp/ccids/lib/mulloss_interval.c

Adding CCID 5 and MulTFRC to compilation routine

These new library files need to be added to the compilation routine. A new configuration test for MulTFRC is added to Kconfig. The test simply enables IP_DCCP_MULTFRC_LIB and IP_DCCP_MULTFRC_DEBUG if CCID 5 (IP_DCCP_CCID5) or CCID 5 debugging (IP_DCCP_CCID5_DEBUG) has been enabled with menuconfig. See listing 3.11.


Listing 3.11: net/dccp/ccids/Kconfig

config IP_DCCP_MULTFRC_LIB
    def_bool y if IP_DCCP_CCID5

config IP_DCCP_MULTFRC_DEBUG
    def_bool y if IP_DCCP_CCID5_DEBUG

When the Kconfig entries have been added, net/dccp/Makefile must also reflect this in order to compile the files along with the rest of the kernel. On lines 1 and 2 in listing 3.12 the checks for the configured Kconfig options are added. If DCCP and CCID 5 are enabled for the compilation, all of the files in listing 3.12 will be compiled with the rest of the kernel.

Listing 3.12: net/dccp/Makefile

1 dccp-$(CONFIG_IP_DCCP_CCID5) += ccids/ccid5.o
2 dccp-$(CONFIG_IP_DCCP_MULTFRC_LIB) += ccids/lib/multfrc.o \
3                                       ccids/lib/multfrc_equation.o \
4                                       ccids/lib/mulpacket_history.o \
5                                       ccids/lib/mulloss_interval.o

Data structure conversion

A data structure file for MulTFRC also has to be added. include/linux/tfrc.h is copied and named include/linux/multfrc.h. This header file likewise only needs its data structure and variable names changed to suit the MulTFRC naming used in the library files.

After all of the library and data structure files have been converted into a MulTFRC copy of TFRC, without any of the MulTFRC functionality, the CCID 5 files also need to use these renamed functions. With these in place, the kernel now has a new CCID (CCID 5) and is ready to receive the changes needed to implement MulTFRC.

3.2 MulTFRC implementation

The transformation from TFRC to MulTFRC, following the specifications of "MulTFRC: TFRC with weighted fairness" [2], can now be applied to the already implemented replica of CCID 3. The draft provides sections of specifications that should be applied to the current implementation of TFRC in the Linux kernel in order to convert it into MulTFRC. The Linux kernel implementation of TFRC follows "TCP Friendly Rate Control (TFRC): Protocol Specification" (RFC 5348) [5], and the necessary changes are pointed out by their corresponding sections in that document. Changes that are not specified also have to be implemented as a consequence of this transformation.

3.2.1 Changes to section 3 of RFC 5348

This section of changes contains the most important part of the transformation to MulTFRC. More precisely, it covers the changes to the algorithm needed to calculate the sending rate of MulTFRC. In addition to the input parameters already existing for TFRC, this change also requires N (the number of cumulative flows) and j (the number of packets lost in a loss event). The algorithm specified for TFRC can be seen in listing 3.13.

Listing 3.13: TFRC algorithm

                                   s
X_calc = ---------------------------------------------------------------
         R * sqrt(2*b*p/3) + (3 * t_RTO * sqrt(3*b*p/8) * (p + 32*p^3))

The already existing Linux kernel implementation of TFRC has made assumptions about the input parameters and changed the algorithm accordingly. Through a few iterations of breaking this algorithm down, it has been transformed into a lookup table and a few mathematical operations.

With the MulTFRC algorithm this lookup table is not needed for the calculation of x_calc. It is however used to determine the value of p, so it cannot be removed. The mechanism for finding p can be seen in listing 3.14; the lookup table is shortened there, as it consists of 500 rows. tfrc_calc_x_reverse_lookup uses the value of fvalue to find the closest corresponding value of p in the lookup table. fvalue is a scaled division involving s (packet size), rtt (round trip time) and x_recv (received sending rate).
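To make the inverted relationship explicit (a sketch following RFC 5348 [5], with the kernel's assumptions $b = 1$ and $t_{RTO} = 4R$ folded in): the table stores samples of the denominator function $f(p)$, and the lookup returns the $p$ whose $f(p)$ is closest to the measured value, where

$$f(p) = \sqrt{2bp/3} + 12\sqrt{3bp/8}\;p\,(1 + 32p^2), \qquad X_{calc} = \frac{s}{R \cdot f(p)}, \qquad fvalue = \frac{s}{R \cdot X_{recv}}$$

(both $f(p)$ and fvalue are scaled by $10^6$ in the kernel code).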

Listing 3.14: Reverse lookup to find p

/* TFRC_CALC_X_ARRSIZE equals the value 500, thus 500 rows in the lookup table */
static const u32 tfrc_calc_x_lookup[TFRC_CALC_X_ARRSIZE][2] = {
    { 37172,  8172 },
    { 53499, 11567 },
    { 66664, 14180 },
    { 78298, 16388 },
    { 89021, 18339 },
    { 99147, 20108 },
    ...
};

/*
 * tfrc_calc_x_reverse_lookup - try to find p given f(p)
 * @fvalue: function value to match, scaled by 1000000
 * Returns closest match for p, also scaled by 1000000
 */
u32 tfrc_calc_x_reverse_lookup(u32 fvalue)
{
    int index;

    if (fvalue == 0)    /* f(p) = 0 whenever p = 0 */
        return 0;

    /* Error cases. */
    if (fvalue < tfrc_calc_x_lookup[0][1]) {
        DCCP_WARN("fvalue %u smaller than resolution\n", fvalue);
        return TFRC_SMALLEST_P;
    }
    if (fvalue > tfrc_calc_x_lookup[TFRC_CALC_X_ARRSIZE - 1][0]) {
        DCCP_WARN("fvalue %u exceeds bounds!\n", fvalue);
        return 1000000;
    }

    if (fvalue <= tfrc_calc_x_lookup[TFRC_CALC_X_ARRSIZE - 1][1]) {
        index = tfrc_binsearch(fvalue, 1);
        return (index + 1) * TFRC_CALC_X_SPLIT / TFRC_CALC_X_ARRSIZE;
    }

    /* else ... it must be in the coarse-grained column */
    index = tfrc_binsearch(fvalue, 0);
    return (index + 1) * 1000000 / TFRC_CALC_X_ARRSIZE;
}

/* return largest index i such that fval <= lookup[i][small] */
static inline u32 tfrc_binsearch(u32 fval, u8 small)
{
    u32 try, low = 0, high = TFRC_CALC_X_ARRSIZE - 1;

    while (low < high) {
        try = (low + high) / 2;
        if (fval <= tfrc_calc_x_lookup[try][small])
            high = try;
        else
            low = try + 1;
    }
    return high;
}

Algorithm

The algorithm that replaces the one of TFRC is recited in listing 3.15. At first sight it seems like an easy substitute, with only the addition of N and j. Together with N and j, the values s (packet size), R (round trip time) and p (loss event rate) can be used to obtain the rest of the needed variables. It is assumed for MulTFRC, as it is for TFRC, that b = 1 and t_RTO = R * 4. The kernel provides functions for max, min and ceil, and it provides means to calculate all of the operations except exponentiation and square root.

These two operations need to be added, as the kernel has no native support for either, and this poses a potential problem for the implementation. In the cases where the operation is just a power of 2 (x^2), it is simple enough to swap it with x * x. But there is still one power operation where j is the exponent. As j can take many different values, a general function for exponentiation has to be made along with a function for square root; a sketch of what such helpers could look like is shown after listing 3.15.

Listing 3.15: MulTFRC initial algorithm 1 If (N < 12) {

2 af = N ∗ (1(11/N)^j);

3 }

30

(43)

4 Else { 5 af = j;

6 }

7 af = max(min(af,ceil(N)),1);

8 a = p∗b∗af∗(24∗N^2+p∗b∗af∗(N2∗af)^2);

9 x = (af∗p∗b∗(2∗afN)+sqrt(a))/(6∗N^2∗p);

10 z = t_RTO∗(1+32∗p^2)/(1p);

11 q = min(2∗j∗b∗z/(R∗(1+3∗N/j)∗x^2), N∗z/(x∗R), N);

12 X_calc = ((1q/N)/(p∗x∗R)+q/(z∗(1p)))∗s;
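As noted above, the kernel offers no sqrt or general pow for this purpose, so the implementation must supply its own. A minimal sketch of what such integer helpers could look like (the names multfrc_isqrt and multfrc_ipow are hypothetical, not taken from the thesis code, and operands are assumed to be suitably scaled fixed-point values):

/* Integer square root by Newton's method: returns floor(sqrt(x)). */
static u64 multfrc_isqrt(u64 x)
{
	u64 r = x, y = (x + 1) / 2;

	while (y < r) {
		r = y;
		y = (r + x / r) / 2;	/* Newton iteration */
	}
	return r;
}

/* Integer exponentiation by squaring: returns base^exp. */
static u64 multfrc_ipow(u64 base, u32 exp)
{
	u64 result = 1;

	while (exp) {
		if (exp & 1)
			result *= base;
		base *= base;
		exp >>= 1;
	}
	return result;
}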

The Linux kernel avoids floating-point arithmetic whenever possible; integer arithmetic is favoured. This poses a third problem for the MulTFRC algorithm. Precision is important in order to calculate the sending rate accurately, and unfortunately this precision comes from floating-point arithmetic. There are several ways to approach this problem.

Floating-point arithmetic

A Floating Point Unit (FPU) is required to do floating-point arithmetic in the kernel. If the system has an FPU available and the kernel uses it, its state might become corrupted for a task running in user space. When the kernel switches context between tasks running in user space, the FPU state is saved along with the rest of the context if the task has used it. This is however not done automatically for system calls within the kernel. kernel_fpu_begin and kernel_fpu_end should be used to prevent the FPU state from getting corrupted: kernel_fpu_begin must be called before any floating-point operation and kernel_fpu_end after. Both calls are declared in the asm/i387.h header file in the kernel.

Using operations that require the FPU within the kernel is however not preferred. The main reason is that the kernel should not contain any floating-point arithmetic, because not all Central Processing Units (CPUs) have an FPU. It is therefore a common opinion that functions requiring floating-point arithmetic do not belong in the kernel. Nevertheless, if it could solve the problem it was worth testing. A simple function was made to see how these kernel calls work. It does a simple multiplication of 1.5 * 1.5 = 2.25, but if the operands are interpreted as integers it is computed as 1 * 1 = 1. The kernel does not allow printk to output floats, so a conversion to integer has to be made; the correct result is therefore output as 2 instead of 2.25, while an output of 1 would mean it was calculated with integer values. See listing 3.16.

Listing 3.16: Floating-point arithmetic test function 1 #include<asm/i387.h>

2

3 voidtestFloatingPoint() { 4 kernel_fpu_begin();

5

6 inti;
