
5.4 Iteration two - multiple cache layers and cache server network flow

From the results of the first iteration, we can see that the fetch time grows rapidly with each cache layer. Beforehand, we expected the fetch time measurements to give the best results when all cache layers were combined, but a quick peek at the graphs shows that the cache server's fetch time is far too high to give good performance at all. As we already know how much time to expect each cache level to use on cache hits, it would be interesting to find out how many requests go to each of the cache layers.

As it turns out, the cache server uses much more time than expected when delivering images to the viewers. A closer look at the graphs reveals that most of the time goes into communication between the viewers and the cache server. Rather than combining this cache layer with the others, we will conduct further experiments on it in order to determine what makes this layer so inefficient.

5.4.1 Combining cache levels

In iteration one, we conducted experiments on each of the cache layers separately, and as a result, each cache layer received 100% of the cache requests. When the cache layers are combined, we expect the frame cache to shield the local cache so that very few requests reach it. If the centralized cache were enabled too, it would be shielded by both the frame cache and the local cache, receiving even fewer requests. The ideal request flow would have the first cache level, or the one most efficient with respect to time used, deliver most of the requests. We will conduct experiments measuring the number of requests reaching each cache layer in order to determine how well the frame cache protects the local cache.
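To make the intended request flow concrete, the sketch below shows one way the combined lookup path and its per-layer request counters could look. The layer names and the get() method returning None on a miss are illustrative assumptions, not the implementation evaluated here.

```python
from collections import Counter

stats = Counter()

def fetch_fragment(key, layers):
    """Try each cache layer in order, counting requests and hits per layer.

    `layers` is an ordered list of (name, cache) pairs, e.g.
    [("frame", frame_cache), ("local", local_cache), ("central", central_cache)],
    where each cache exposes a dict-like get() that returns None on a miss.
    """
    for name, cache in layers:
        stats[name + "_requests"] += 1
        fragment = cache.get(key)
        if fragment is not None:
            stats[name + "_hits"] += 1
            return fragment
        # A miss lets the request fall through to the next layer, which is
        # exactly what the frame cache should prevent for most requests.
    stats["total_misses"] += 1
    return None  # every layer missed; the fragment must be fetched anew
```

With counters like these, the number of requests reaching the local cache directly measures how well the frame cache shields it.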

In these experiments we will use another input stream, better suited to exercising the local cache. The movement will still trace a rectangular shape, but over a smaller area, simulating many short movements across the same region.
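As an illustration only, such an input stream could be generated along the following lines; the function name, parameters, and per-frame move representation are assumptions rather than the thesis code.

```python
def rectangular_path(width, height, move_speed, laps):
    """Yield per-frame (dx, dy) moves tracing a small rectangle repeatedly.

    `move_speed` is in pixels per frame (p/f); a small `width` and `height`
    keep the movement over the same area, favouring the local cache.
    """
    edges = [(move_speed, 0), (0, move_speed), (-move_speed, 0), (0, -move_speed)]
    for _ in range(laps):
        for dx, dy in edges:
            steps = (width if dx else height) // move_speed
            for _ in range(steps):
                yield dx, dy
```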

As the frame cache will shield the local cache from many of the cache requests, the only requests received by the local cache will be those that miss in the frame cache.


In the beginning, the local cache will contain little or no data, and the first cache requests after the initial image load will register as hits in the frame cache rather than in the local cache. We therefore expect the local cache hit ratio to decrease during this experiment compared to the hit ratio measured for this cache layer separately.

By running the viewer three times with the frame and local caches enabled, increasing the move speed each time, we collected the data into the three graphs shown below.

A greater move speed increases the size of the data set, which affects both cache layers.

Requests sent to the frame cache will more frequently result in a cache miss, causing more requests to end up in the local cache. We can see this in Figure 5.8: comparing the three graphs, the hit ratio in the frame cache decreases as the move speed increases. The figure also shows the local cache's hit ratio changing more often, as a consequence of the misses in the frame cache.


Figure 5.8: Cumulative cache hit ratio in percent with both the frame cache and the local cache enabled. The first graph shows the hit ratio when the image moves at the lowest speed of 5 p/f, the second at 10 p/f and the third at 20 p/f.
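For reference, the cumulative hit percentage plotted in these graphs can be derived from a per-request hit/miss log, as in the sketch below; the log format (True = hit) is an assumption.

```python
def cumulative_hit_percentage(events):
    """Turn a hit/miss log (True = hit) into the cumulative hit curve."""
    hits = 0
    curve = []
    for i, hit in enumerate(events, start=1):
        hits += hit
        curve.append(100.0 * hits / i)
    return curve

# A miss followed by three hits: [0.0, 50.0, 66.66..., 75.0]
print(cumulative_hit_percentage([False, True, True, True]))
```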


5.4.2 Cache server network traffic

The cache server is now ruled out when combining cache layers for an optimal fetch time, but it would be interesting to see what makes the remote fetch time increase when using this cache layer. We will conduct an experiment using the cache server only, where we measure the remote fetch time as we increase the number of participating viewers and requests sent. The number of simultaneously open connections and the concurrent request handling might affect the time used to send images from this particular cache level.

As the viewer and the cache server run on different physical machines with unsynchronized clocks, we cannot determine how much time is spent in each direction. Still, we assume that most of the time used on sending data to and from the cache server goes to transferring the image data in the response, as the response is much larger than the request.

During this experiment, the remote fetch time was measured using three different send techniques: one where the requests for the image fragments in a frame were sent sequentially, another where they were sent concurrently, and a third where all image fragments in the frame were bunched into one request. The graph below shows how these send techniques affect the remote fetch time as the number of participating viewers increases.

Figure 5.9: Average remote fetch time when sending requests sequentially, concurrently, and bunched, while increasing the number of participating viewers.
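A minimal sketch of the three send techniques is given below. The request_fragment() and request_frame() calls are hypothetical stand-ins for the viewer's cache server requests, and the timing wrapper measures only the round trip on the viewer's own clock, since the two machines' clocks are not synchronized.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the viewer's cache server calls; a real
# viewer would send these over the network and receive image bytes back.
def request_fragment(frag_id):
    return b"fragment-bytes"

def request_frame(frag_ids):
    return [b"fragment-bytes" for _ in frag_ids]

def fetch_sequential(frag_ids):
    # One request-response round trip per fragment, back to back.
    return [request_fragment(fid) for fid in frag_ids]

def fetch_concurrent(frag_ids):
    # All fragment requests in flight at once, one worker per fragment.
    with ThreadPoolExecutor(max_workers=len(frag_ids)) as pool:
        return list(pool.map(request_fragment, frag_ids))

def fetch_bunched(frag_ids):
    # A single request covering every fragment in the frame.
    return request_frame(frag_ids)

def remote_fetch_time(fetch, frag_ids):
    """Round-trip time for one frame, measured on the viewer's clock only."""
    start = time.perf_counter()
    fetch(frag_ids)
    return time.perf_counter() - start
```

The sequential variant pays one full round trip per fragment, while the concurrent and bunched variants overlap or eliminate those round trips, which is why the number of open connections and the server's concurrent request handling may dominate the measured time.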
