
4.10.1 Driver hookup

The client runs on the host computer. It needs the gtixp [5] driver to be loaded before it starts.

See section 4.3 for how to start the system. The client program opens the file /proc/driver/ixp0/signal. When the gtixp driver receives an interrupt, it checks whether any program has this file open; if so, it sends that program a SIGUSR1 signal. This is a convenient way of delivering hardware interrupts to user-level programs. The client then simply waits for a signal indicating that there is new data to be read in the SDRAM ring buffer. To read the shared SDRAM on the IXP card, we open the file /proc/driver/ixp0/sdram that the gtixp driver provides, as shown in figure 4.27.

// The gtixp driver's mapping of the IXP card's SDRAM.
mem_fd = open("/proc/driver/ixp0/sdram", O_RDWR | O_SYNC);
if (mem_fd < 0) {
    perror("open(\"/proc/driver/ixp0/sdram\")");
    exit(-1);
}

Figure 4.27: How the host application opens the SDRAM file
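To illustrate the other half of the hookup, below is a minimal sketch of the signal side. The handler and main loop are our own illustration of the mechanism described above, assuming only the /proc/driver/ixp0/signal file and the SIGUSR1 convention; the real client does more work per signal.

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_data = 0;

// The gtixp driver sends SIGUSR1 when the IXP card raises an interrupt.
static void on_sigusr1(int signo)
{
    (void)signo;
    got_data = 1;
}

int main(void)
{
    struct sigaction sa;

    // Holding this file open tells the driver which process to signal.
    int sig_fd = open("/proc/driver/ixp0/signal", O_RDONLY);
    if (sig_fd < 0) {
        perror("open(\"/proc/driver/ixp0/signal\")");
        exit(-1);
    }

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    for (;;) {
        pause();               // sleep until the driver signals us
        if (got_data) {
            got_data = 0;
            // ... read the new entries from the SDRAM ring buffer ...
        }
    }
}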

4.10.2 MySQL

The client also connects to a MySQL database to store its entries. Our idea was that instead of inventing a clever storage scheme for the ended streams, we could simply use an ordinary SQL database. With SQL we can also run all kinds of queries on the data, and we could move the database to another computer (see section 4.2.3). If we had more than one computer with IXP cards logging traffic and a really fast SQL server, they could all share that server. We are not database experts; other databases may be better optimized for storing many entries quickly, and there are probably ways to tune MySQL, and the table layout, to make it faster for our application. This would need careful thought if the logger were to be used in the real world. Since the goal of this project is only to see whether an IXP card can be used as a real-time logger, we have not spent much time on the database part.

We are using MySQL version 4.1.10a, the version that the SuSE Linux configuration program set up for us. We chose MySQL because it is free, was easy to install on our SuSE host computer with its package manager, and we have used it before. The table we are using has the columns shown in table 4.4.

We need many fields in the key. The same pair of IP addresses can talk to each other on the same port numbers with the same protocol, but not at the same time: time passes as port numbers wrap around, so the start time is also needed to distinguish streams. The SQL server runs on the host computer, so there are no network restrictions. The host has a regular IDE hard drive, which is probably rather slow, but we do not know whether this limits the performance of our application.

Stream database table:

Field name        Key  Description
iplow             y    Lowest IP address of stream
iphigh            y    Highest IP address of stream
iplow_srcport     y    Src. port number of lowest IP address for TCP/UDP, ID for ICMP
iplow_destport    y    Dest. port number of lowest IP address for TCP/UDP, 0 for ICMP
protocol          y    Protocol for stream
iplow_int              Physical interface of lowest IP
iphigh_int             Physical interface of highest IP
stime             y    Time that stream started
etime                  Time that stream ended
bytes_iplow            Bytes from iplow to iphigh for TCP/UDP, packet types for ICMP
bytes_iphigh           Bytes from iphigh to iplow, 0 for ICMP
packets_iplow          Packets from iplow to iphigh
packets_iphigh         Packets from iphigh to iplow
iplow_started          Is one if the lowest IP started the stream

Table 4.4: The fields and keys in the SQL database
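As an illustration, the table could be created roughly as below through the MySQL C API. Only the column names and the composite key come from table 4.4; the column types, database name and connection parameters are placeholders of our own (the MyISAM engine choice is discussed below).

#include <stdio.h>
#include <stdlib.h>
#include <mysql/mysql.h>

// Column types are guesses; field names and keys are from table 4.4.
static const char *create_stmt =
    "CREATE TABLE IF NOT EXISTS streams ("
    "  iplow          INT UNSIGNED      NOT NULL,"
    "  iphigh         INT UNSIGNED      NOT NULL,"
    "  iplow_srcport  SMALLINT UNSIGNED NOT NULL,"
    "  iplow_destport SMALLINT UNSIGNED NOT NULL,"
    "  protocol       TINYINT UNSIGNED  NOT NULL,"
    "  iplow_int      TINYINT UNSIGNED,"
    "  iphigh_int     TINYINT UNSIGNED,"
    "  stime          INT UNSIGNED      NOT NULL,"
    "  etime          INT UNSIGNED,"
    "  bytes_iplow    BIGINT UNSIGNED,"
    "  bytes_iphigh   BIGINT UNSIGNED,"
    "  packets_iplow  INT UNSIGNED,"
    "  packets_iphigh INT UNSIGNED,"
    "  iplow_started  TINYINT,"
    "  PRIMARY KEY (iplow, iphigh, iplow_srcport, iplow_destport,"
    "               protocol, stime)"
    ") ENGINE=MyISAM";

int main(void)
{
    MYSQL *conn = mysql_init(NULL);

    // Host, user, password and database name are placeholders.
    if (!mysql_real_connect(conn, "localhost", "logger", "secret",
                            "streamdb", 0, NULL, 0)) {
        fprintf(stderr, "connect: %s\n", mysql_error(conn));
        exit(-1);
    }
    if (mysql_query(conn, create_stmt)) {
        fprintf(stderr, "create table: %s\n", mysql_error(conn));
        exit(-1);
    }
    mysql_close(conn);
    return 0;
}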

The default storage engine was MyISAM, which claims to be: “Very fast, disk based storage engine without support for transactions. Offers fulltext search, packed keys, and is the default storage engine.” That sounds good to us, so we kept it that way. Should it prove too slow, we can switch to the MEMORY (HEAP) storage engine, which is said to be faster but can lose data if the server goes down.

4.10.3 Program flow

When we get a signal, we first read the HOST_LAST_WRITTEN and HOST_LAST_READ variables (see section 4.9). Figure 4.28 shows how we can read four variables from the IXP SDRAM in one PCI transfer. From these we know the first entry to read and how many entries there are. Knowing where to read from and how much, we read in all the entries and convert

int readin[4];

// We read last_written, last_read, xscale_load and ME_prcs_cnt in one PCI read.
lseek(mem_fd, HOST_LAST_WRITTEN, SEEK_SET);
read(mem_fd, readin, 16);

Figure 4.28: How the host application reads the shared SDRAM variables

them to little endian as we go. Remember that we also have a little-endian version of the stream_table struct to help the little-endian CPU get things right.
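A sketch of this read-out loop is shown below. The entry layout, the ring-buffer base address and the number of slots are placeholders of our own; only the use of HOST_LAST_WRITTEN/HOST_LAST_READ and the big-endian-to-little-endian conversion follow the description above.

#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>   // ntohl()/ntohs() swap big endian to host order

// Hypothetical entry layout; the real stream_table struct is the one
// described in section 4.9, and its fields arrive big endian.
struct stream_entry {
    uint32_t iplow, iphigh;
    uint16_t iplow_srcport, iplow_destport;
    uint32_t stime, etime;
    /* ... the remaining fields from table 4.4 ... */
};

#define RING_BASE  0x1000   // placeholder offset of the ring buffer
#define RING_SLOTS 1024     // placeholder number of entries in the ring

static void read_new_entries(int mem_fd, int last_written, int last_read)
{
    struct stream_entry e;

    while (last_read != last_written) {
        last_read = (last_read + 1) % RING_SLOTS;
        lseek(mem_fd, RING_BASE + last_read * sizeof(e), SEEK_SET);
        read(mem_fd, &e, sizeof(e));

        // Convert each field to little endian as we go.
        e.iplow          = ntohl(e.iplow);
        e.iphigh         = ntohl(e.iphigh);
        e.iplow_srcport  = ntohs(e.iplow_srcport);
        e.iplow_destport = ntohs(e.iplow_destport);
        e.stime          = ntohl(e.stime);
        e.etime          = ntohl(e.etime);

        // ... hand the converted entry to the database code ...
    }
}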

When we are done reading, we need to update the HOST_LAST_READ variable, so that we do not read the same entries again and the XScale knows that the client is finished with them and they can be reused. Since the IXP card forgets its date and time, this program also helps it out: we read the date and time on the host computer and write it into the HOST_DATETIME variable in SDRAM (see section 4.9).
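These two write-backs can be sketched as follows. The offsets are assumed to be the constants from section 4.9 (placeholder values here), and writing the values back in the card's byte order is our assumption.

#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>   // htonl() converts back to the card's big endian

#define HOST_LAST_READ 0x04   // placeholder; real offset per section 4.9
#define HOST_DATETIME  0x10   // placeholder; real offset per section 4.9

static void update_shared_vars(int mem_fd, uint32_t last_read)
{
    uint32_t lr  = htonl(last_read);
    uint32_t now = htonl((uint32_t)time(NULL));  // host date and time

    // Tell the XScale which entries we are finished with.
    lseek(mem_fd, HOST_LAST_READ, SEEK_SET);
    write(mem_fd, &lr, sizeof(lr));

    // The IXP card forgets its clock, so keep it updated from the host.
    lseek(mem_fd, HOST_DATETIME, SEEK_SET);
    write(mem_fd, &now, sizeof(now));
}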

For monitoring purposes, the program shows the load on the XScale and the host, and the number of disk I/Os in progress. We also get the number of packets being processed by the microengines, which we can use to monitor SRAM congestion: if the SRAM memory channels are overloaded, the microengines will not be able to finish their packets, and the counter will reach the number of available threads. We do not expect this to be a problem.

We get the XScale load from the XSCALE_LOAD variable in SDRAM. This is just a tool to see if there might be any congestion.
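Continuing from the 16-byte read in figure 4.28, the monitoring values might be picked out as follows; the index order matches the comment in the figure, and the byte swap assumes the same big-endian layout as the entries.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

// readin[] was filled by the single PCI read in figure 4.28.
static void show_monitoring(uint32_t readin[4])
{
    uint32_t xscale_load = ntohl(readin[2]);  // XSCALE_LOAD
    uint32_t me_prcs_cnt = ntohl(readin[3]);  // packets in process on the MEs

    printf("XScale load: %u, packets in process: %u\n",
           xscale_load, me_prcs_cnt);
}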

4.11 Summary

We see that the system has many different parts. Each packet goes through the microengines, which create an entry for each stream in the SRAM stream tables and update it as new packets arrive. The XScale goes through the stream tables periodically to see if there are finished streams, or streams whose data has not been updated for too long. It copies each finished entry to the SDRAM ring buffer and interrupts the host computer when there are enough finished streams or entries due for an update. The host computer kernel signals the client application when it receives the interrupt from the IXP card. The client copies the entries from the IXP card’s SDRAM ring buffer over the PCI bus with help from the kernel device driver. Finally, the client makes an SQL call to the MySQL database to record the information about the stream.

The configuration can be altered somewhat. We can use the forwarding version if we have to forward the packets; with this RX block, one or two microengines can handle the incoming packets. The other option is to use the mirror version, if we have a switch with a mirror port and do not want to alter the network traffic.

If performance should become a problem, we can use multiple SQL servers. For now, the MySQL server runs on the host computer. If the host computer becomes a bottleneck, we could get a host with multiple CPUs. The host computer, a Dell Optiplex GX260, has a 32-bit, 33 MHz PCI bus; it might help to get a host computer with a 64-bit, 66 MHz PCI bus or better.

There are a lot of things to get right to make it all work, and it also has to work fast enough. In the next chapter, we do some testing to see how it performs.

Chapter 5 Evaluation

5.1 Overview

This chapter tests our logging system. We first test the bandwidth over the PCI bus for the gtixp [5] device driver, and then how many entries we can write from the XScale over the PCI bus into the host computer’s database per second. There are also tests to see how many contexts and microengines are needed, and section 5.5 describes how the XScale program can be tuned for different scenarios. We thus start with benchmarks of the PCI transfer, database and microengine performance before evaluating the system’s ability to monitor gigabit traffic in real time with a live test.