
Memcached clients support various features, and some implement support for the entire memcached API. Diggi agents are light-weight; porting a full client library such as libmemcached would bloat the library OS. Instead, we implement a minimal interface to the memcached API. To support agent-to-agent queries, we implemented the following subset of memcached operations and features:

Set: Unconditionally store a value associated with a given key, as opposed to Add, which fails if the item already exists.

Get: Retrieve a value associated with a key.

Delete: Delete the item with the specified key.

Multiple Servers: Allow the client to locate keys in multiple SecureCached instances.

Consistent hashing: Consistently distribute the key space among SecureCached instances.

Our memcached client implementation exclusively uses the memcached binary protocol.³

The client follows Diggi's asynchronous callback model, with the exception that asynchronous operations can be combined with blocking response calls. That is, to support the Yahoo! Cloud Serving Benchmark (YCSB), the memcached client has to support synchronous operations. An example of a set operation followed by a blocking call to retrieve the response is depicted in Listing 4.5. The put operation completes the sending of the request, and getResponse yields to the Diggi scheduler until the response for that request is received.

3. https://github.com/memcached/memcached/wiki/BinaryProtocolRevamped

Code Listing 4.5: Example of an asynchronous put operation to memcached followed by a synchronous retrieval.

enum memcached_status status;

// Non-blocking call
status = client->put("key", 3, "value", 5);

struct memcached_response *resp;

// Blocking call, waits until the response is received.
status = client->getResponse(&resp);

if (status != MEMCACHED_STATUS_NO_ERROR) {
    // query failed
}

Since message delivery in Diggi deletes the message after the registered delivery callback returns, the synchronous getResponse copies the message internally.

The memcached client also supports querying multiple memcached servers.

Half of the memcached logic is placed on the client side: the client is responsible for discovering servers and choosing the distribution of the key space.

When a client is set up to support multiple memcached servers, every request has to be issued to the memcached instance responsible for a particular key.

The client supports two methods of partitioning the key space, both involving hashing the key. The first method uses the key hash modulo the number of memcached servers. The second uses consistent hashing, which distributes the key space among instances such that adding or removing a server only redistributes a fraction of the keys.

Because Diggi has no support for IP addresses or ports, memcached instances must be identified differently in a Diggi deployment: Diggi maintains a list of its agents and where they are located, from which the client can discover the memcached instances.

4.9 Summary

This chapter has presented SecureCached, a version of memcached modified to run on Diggi. This was achieved by modifying memcached and implementing OS services in Diggi. Notably, SecureCached may run in a single-threaded context by emulating the runtime behaviour of memcached. The extensions to the Diggi library OS were implementations of POSIX sockets and pipes, a subset of libevent, and an event framework that unifies file descriptor types.

5 Evaluation

In this chapter we evaluate SecureCached. Before assessing SecureCached in a distributed setup, we evaluate the communication throughput of Diggi to determine how it compares to native Linux sockets. We evaluate SecureCached in three different set-ups: between agents in the same Diggi process, between Diggi processes on a single machine, and between Diggi processes located on separate machines.

5.1 Experimental Setup

Four different machines were used to generate load, and one to run SecureCached. Machines 1 and 2 have identical hardware specifications: an Intel Core i5-6500 3.20 GHz quad-core processor with 4 logical cores. Each core has separate 32 KB 4-way L1 data and instruction caches and a 256 KB 4-way L2 cache, and all cores share a 6 MB 12-way L3 cache. Each processor is connected to 16 GB of DDR3 RAM clocked at 1600 MHz. Both machines run Ubuntu 16.04 LTS with Linux kernel version 4.13.0-47.

Machines 3 and 4 are Dell PowerEdge R330 servers. Each is equipped with an Intel Xeon E3-1270 v5 processor with a base frequency of 3.6 GHz. The processor has 4 physical cores and 8 hyperthreads. Each core has separate 32 KB 8-way L1 data and instruction caches and a 256 KB 4-way L2 cache, and all cores share an 8 MB L3 cache. The machines run Ubuntu 16.04 LTS with Linux kernel version 4.4.0-119.

Table 5.1: Measured latencies from all machines used for load generation to the machine that runs SecureCached

Machine 5 is used to run SecureCached and is an Intel S1200SP server blade. It is equipped with an Intel Xeon E3-1270 v6 processor running at 3.8 GHz. Each core has separate 32 KB 8-way L1 data and instruction caches and a 256 KB 4-way L2 cache, and all cores share an 8 MB L3 cache. It has 64 GB of DDR4 RAM running at 2133 MHz. The machine runs Ubuntu 16.04 LTS with Linux kernel version 4.13.0-37.

All machines are connected by a 1 Gbps Ethernet link; however, they are connected to different network topologies. We therefore measure the round-trip time (RTT) latency from machines 1–4 to machine 5. The measured RTT latency is listed in Table 5.1. The RTT is measured with the ping command-line utility. The packet size for the measurements is set to 1 KB, since all evaluation of SecureCached is done with value sizes of 1 KB.

The enclaves are created using Intel's open source kernel module.¹ Unless otherwise specified, all enclaves are compiled and run in hardware mode with the SGX_PRERELEASE flag. All code is compiled with GCC version 5.4.0. Note that all components used in the experiments, namely SecureCached and the client agents, run on SGX-enabled hardware, and thus within enclaves.

5.1.1 YCSB

YCSB is an open source benchmarking tool by Yahoo [45]. Its purpose is to provide a reference benchmark for popular data serving systems, ranging from databases to in-memory key-value stores, by defining a set of core workloads that covers a wide range of use cases. We evaluate SecureCached using the popular YCSB load generator implemented as a Diggi agent. The agent first pre-loads the memcached instances with key-value pairs, and then issues update or get requests. The workloads are according to the pre-defined YCSB

1. https://github.com/intel/linux-sgx
