
Deadline Aware Less than Best Effort (DA-LBE)

LBE does not consider any notion of timeliness when yielding to traffic of differing urgency. LBE services could therefore be starved by BE flows, causing them to yield indefinitely. This can be remedied by adding a notion of time to LBE, termed Deadline Aware Less than Best Effort (DA-LBE). Adding a notion of time allows an LBE service to adjust how early it reacts to network congestion, taking into account the amount of time remaining until the soft deadline. A DA-LBE service can behave like a standard TCP BE service if necessary in order to complete within the soft deadline.

A DA-LBE traffic flow should (list used from [DA-LBE]):

• Be no more aggressive than BE traffic

• React appropriately to network congestion

• Take advantage of available network capacity when there is no congestion

• Attempt to finish transmitting its data by the deadline

A DA-LBE flow will initially have maximum LBEness and will decrease its LBEness (increase its aggressiveness) as the deadline for the transmission approaches.

2.3.1 Becoming Deadline-Aware

The deadline-aware part of DA-LBE comes from the ability to dynamically adjust aggressiveness based on the notion of time. This gives DA-LBE the ability to trade LBEness for aggressiveness in order to meet a given deadline. Deflating the congestion price may be necessary for particular flows to meet the deadline; for example, increasing the transmission rate could allow a Vegas flow to compete more fairly against network streams running CUBIC. This trade-off can be controlled through the DA-LBE framework, based on the configuration of (list used from [DA-LBE]):

• the size of the data to transfer

• the soft completion time for the transfer

These parameters are configured through a user-space API. By adjusting the perceived congestion price based on the soft completion time, the transmission rate can be dynamically controlled.
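The deadline-driven adjustment described above can be sketched as follows. The function name, the linear urgency model, and the 0.1 floor are illustrative assumptions, not the framework's actual computation, which happens inside the kernel:

```python
# Sketch: derive a congestion-price scaling factor from transfer progress
# versus deadline progress. All names and the linear model are illustrative
# assumptions; the real DA-LBE framework computes this differently.

def price_scaling(bytes_sent, total_bytes, now, start, soft_deadline):
    """Return a multiplier for the perceived congestion price.

    1.0  -> behave as the CC normally would (maximum LBEness here)
    <1.0 -> deflate the price, i.e. become more aggressive.
    """
    time_frac = (now - start) / (soft_deadline - start)  # deadline progress
    data_frac = bytes_sent / total_bytes                 # transfer progress
    if data_frac >= time_frac:
        return 1.0            # on schedule: stay maximally LBE
    # Behind schedule: deflate the price proportionally to the lag,
    # but never below some floor (here 0.1, an arbitrary choice).
    lag = time_frac - data_frac
    return max(0.1, 1.0 - lag)

# A flow halfway to its deadline but only 20% done deflates the price:
print(price_scaling(20, 100, now=50, start=0, soft_deadline=100))  # 0.7
```

A flow that stays ahead of schedule keeps the factor at 1.0 and remains maximally LBE; only a flow that falls behind starts deflating its perceived price.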

2.3.2 Imposing LBE behaviour on arbitrary CCs

Different CCs use different values to indicate congestion, such as loss and delay. These CC-specific prices have different impacts on the transmission rate (e.g., Vegas reacts faster to congestion indications than loss-based CCs), so different CCs may provide different trade-offs. To adjust them uniformly, the different congestion prices must be mapped to a common price measure. This common measure can then be adjusted, regardless of the CC in use, giving the same relative change in congestion price. The adjustment is achieved by inflating or deflating the congestion price. Having a common measure of congestion, and the ability to modify its value, makes it possible to dynamically adjust the level of LBEness imposed on the CC in use.

The inflation of the congestion price allows the flow to achieve a lower relative share of capacity, whereas deflation of the congestion price allows the flow to achieve a higher relative share of capacity. DA-LBE will generally need to inflate the congestion price in order to reduce the relative share of capacity.
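The relation between perceived price and capacity share can be illustrated with a simple proportional model, where each flow's share is inversely related to the price it perceives. This model is an abstraction for intuition only, not DA-LBE's actual rate computation:

```python
# Sketch: in a simple proportional model, a flow's relative share of capacity
# is inversely related to the congestion price it perceives. Inflating the
# price a flow sees lowers its share; deflating raises it.

def relative_shares(perceived_prices):
    """Each flow's share ~ 1/price, normalised so the shares sum to 1."""
    weights = [1.0 / p for p in perceived_prices]
    total = sum(weights)
    return [w / total for w in weights]

# Two flows seeing the same price split the capacity equally:
print(relative_shares([1.0, 1.0]))   # [0.5, 0.5]
# Inflating the first flow's perceived price 3x makes it less-than-best-effort:
print(relative_shares([3.0, 1.0]))   # [0.25, 0.75]
```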

Inflating the congestion price for a loss-based CC could be achieved by dropping packets; however, dropping a packet triggers a retransmission. A better approach is to reduce the transmission rate upon receiving an ECN signal, which reduces the congestion window without dropping packets. ECN signals, however, require the network and the receiver to notify the sender that it should reduce its transmission rate. DA-LBE should therefore be able to artificially generate an ECN event, a mechanism termed a phantom ECN event. Phantom ECN signals are the easiest way to inflate the congestion price without triggering retransmissions, and they produce the same congestion window reduction as a real ECN signal would. Unlike a real ECN event, however, the phantom ECN does not send the CWR response to the receiver.

A disadvantage of the phantom ECN mechanism is that it can prevent the flow from taking advantage of short periods of decreased congestion in the network, since the mechanism does not by itself detect when congestion diminishes. In order for phantom ECN generation to halt when congestion diminishes, so that short periods of increased available capacity can be exploited, an additional mechanism is introduced: an average congestion indication time interval that accompanies the phantom ECN. This interval measures the average time between received congestion indications. When the interval drops below a user-defined limit, generation of phantom ECNs is halted, and the available capacity can be utilized.
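The halting rule can be sketched as a small gate that tracks the average inter-indication time and suppresses phantom ECNs while it sits below the user-defined limit. The class name and the EWMA weighting (7/8, as TCP's SRTT smoothing uses) are illustrative choices, not the framework's implementation:

```python
# Sketch of the halting rule: keep an average of the time between received
# congestion indications, and stop generating phantom ECNs while that
# average drops below a user-defined limit.

class PhantomEcnGate:
    def __init__(self, halt_limit):
        self.halt_limit = halt_limit   # user-defined limit (seconds)
        self.avg_interval = None       # average time between indications
        self.last_indication = None

    def on_congestion_indication(self, now):
        """Update the running average on every congestion indication."""
        if self.last_indication is not None:
            interval = now - self.last_indication
            if self.avg_interval is None:
                self.avg_interval = interval
            else:                      # EWMA with alpha = 1/8 (illustrative)
                self.avg_interval = 7 / 8 * self.avg_interval + interval / 8
        self.last_indication = now

    def may_generate_phantom_ecn(self):
        """Halt phantom ECNs while indications arrive closer together
        than the limit (the rule described in the text)."""
        if self.avg_interval is None:
            return True
        return self.avg_interval >= self.halt_limit

gate = PhantomEcnGate(halt_limit=0.5)
for t in (0.0, 2.0, 4.0):              # sparse indications: keep inflating
    gate.on_congestion_indication(t)
print(gate.may_generate_phantom_ecn())  # True
for i in range(15):                     # dense indications: halt
    gate.on_congestion_indication(4.1 + 0.1 * i)
print(gate.may_generate_phantom_ecn())  # False
```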

Phantom ECNs are supported by all CCs, regardless of price measure. Since delay-based CCs react faster to network congestion, DA-LBE also adds support for artificially adjusting the queueing delay, termed phantom delay. This is achieved by adjusting the measured RTT. Increasing the RTT values gives the CC algorithm in use the impression that congestion in the network has increased, leading to a reduction of the transmission rate. Similarly, reducing the RTT values leads to an increased transmission rate.

Some delay-based algorithms have a slightly different measure of congestion price (measurement of delay). TCP Vegas is one CC algorithm with a more fine-grained measurement of delay: it uses the difference between the measured RTT and the base RTT. In order to support TCP Vegas, both of these values must be modifiable. When the congestion price is inflated or deflated, the adjustment is accompanied by a control variable (set from user space) stating whether the CC is delay-based and whether it operates with a base RTT.
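The effect on a Vegas-style estimator can be sketched as follows. Scaling only the queueing-delay component (measured RTT minus base RTT) while leaving the base RTT intact is an illustrative assumption about how the phantom delay could be applied:

```python
# Sketch: Vegas estimates its backlog from the gap between the expected
# throughput (cwnd / base_rtt) and the actual throughput (cwnd / rtt).
# To impose LBE behaviour without modifying Vegas itself, DA-LBE scales
# the RTT input instead. The scheme below (a multiplicative factor on
# the queueing-delay component only) is an illustrative assumption.

def vegas_diff(cwnd, base_rtt, measured_rtt):
    """Vegas' diff in packets: (expected - actual) throughput * base_rtt."""
    expected = cwnd / base_rtt
    actual = cwnd / measured_rtt
    return (expected - actual) * base_rtt

def inflate_rtt(base_rtt, measured_rtt, factor):
    """Inflate only the queueing-delay part (measured - base) by `factor`."""
    return base_rtt + (measured_rtt - base_rtt) * factor

cwnd, base_rtt, measured_rtt = 20, 0.100, 0.110     # RTTs in seconds
print(round(vegas_diff(cwnd, base_rtt, measured_rtt), 3))  # 1.818
# With a 2x inflation of the queueing delay, Vegas perceives roughly
# twice the backlog and backs off earlier:
rtt = inflate_rtt(base_rtt, measured_rtt, 2.0)
print(round(vegas_diff(cwnd, base_rtt, rtt), 3))           # 3.333
```

Because both the base RTT and the measured RTT feed the estimator, the control variable mentioned above is needed so the framework knows which of the two inputs to adjust.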

DA-LBE's proposed framework was originally designed to change the alpha and beta parameters of Vegas. Doing so requires modifying the source code of Vegas, because the variables holding alpha and beta are not exposed outside the Vegas implementation. In order to modify these parameters, the Vegas implementation would need to be rewritten to allow access from outside the Vegas CC kernel module. Though this could in theory be done, it should be avoided if the DA-LBE framework code is to be accepted into the Linux kernel. Moreover, every other delay-based CC algorithm would also need to be altered for DA-LBE to impose congestion price adjustments on it, which is not ideal. Instead, the approach chosen in the DA-LBE framework is to alter the input that delay-based protocols evaluate to estimate the queueing delay (network congestion), namely the measured RTT. Modifying the perceived RTT gives more fine-grained control than phantom ECN.

Delay-based CC algorithms detect network congestion earlier than loss-based ones. This is because delay-based algorithms can effectively maintain a constant window size, avoiding the oscillation (sawtooth pattern) inherent in loss-based algorithms. In a network where only delay-based protocols are in use, loss can be avoided (loss is inefficient). In a network where bandwidth is shared between delay-based and loss-based algorithms, the loss-based algorithms usually get a greater share, as delay-based algorithms are usually less aggressive.

As delay-based CCs adjust their transmission rate based on the perceived queueing delay, the transmission rate can be controlled by inflating or deflating the congestion price in the network. The congestion price is based on the RTT, which is what is adjusted to achieve the desired result. The amount of inflation or deflation is set by a variable defining the adjustment as a percentage: a value of 100 percent leaves the congestion price unchanged, a value lower than 100 percent deflates it, and a value higher than 100 percent inflates it.
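The semantics of the percentage knob can be sketched in a few lines; the function name and the choice of queueing delay as the example price are illustrative:

```python
# Sketch of the percentage knob described above: 100 leaves the perceived
# congestion price unchanged, below 100 deflates it, above 100 inflates it.

def adjust_price(price, percent):
    """Scale a congestion price by a user-set percentage."""
    return price * percent / 100.0

queueing_delay = 0.020                     # 20 ms of perceived queueing delay
print(adjust_price(queueing_delay, 100))   # 0.02 -> unchanged
print(adjust_price(queueing_delay, 50))    # 0.01 -> deflated (more aggressive)
print(adjust_price(queueing_delay, 200))   # 0.04 -> inflated (more LBE)
```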

2.3.3 The NEAT system

NEAT is A New, Evolutive API and Transport-Layer Architecture for the Internet, funded by the European Union's Horizon 2020 research and innovation programme. NEAT aims to change how applications are built with regard to the network stack. Today, the predominant transport protocols in use are TCP and UDP. Instead of forcing the programmer to decide which protocol the application should use, NEAT lets the application specify its needs with regard to the network (e.g. guaranteed delivery, low overhead). Based on the requirements configured by the application programmer, the NEAT system makes an educated choice of underlying protocol, allowing the application to remain unaware of which protocol is used, as all communication goes through the NEAT API.

Figure 2.7: NEAT Architecture (used with permission from [NEAT])

The DA-LBE framework is one of many contributions to the NEAT system.

2.3.3.1 Role of DA-LBE in NEAT

Part of the NEAT system is the ability to adjust the transmission rate of a network flow running the TCP protocol. This ability is important in order to offer Less than Best Effort services to applications. The DA-LBE functionality will allow the NEAT system to adjust the transmission rate of an arbitrary CC algorithm in order to impose LBE behaviour. The goal is for the NEAT API to provide LBE mechanisms that can be configured system-wide or on a per-socket basis.

Through the NEAT API, the application programmer will be able to specify a soft deadline, allowing the level of LBEness to be adjusted dynamically based on the estimated completion time and the soft deadline.
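As a sketch of what such a per-socket configuration could look like, a DA-LBE-aware application might hand the NEAT system a JSON property object. The property names and structure below are invented for illustration and do not reflect NEAT's actual property schema:

```python
# Hypothetical per-socket DA-LBE configuration expressed as a NEAT-style
# JSON property object. All property names here are invented for
# illustration; consult the NEAT documentation for the real schema.
import json

dalbe_properties = {
    "transport": {"value": "TCP"},
    "dalbe_soft_deadline": {"value": 300},         # seconds until soft deadline
    "dalbe_transfer_size": {"value": 50_000_000},  # bytes left to transfer
}

print(json.dumps(dalbe_properties, indent=2))
```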