Performance management for network system usability is one of the major issues in the Internet. To improve network system quality of service, system administrators need to evaluate how their systems are working and operate them to satisfy users' requests and optimize performance. To manage network system performance, administrators must be aware of system usability factors such as access delay, processing time, and data transfer throughput.
Several ways to measure network performance have been developed. However, performance management remains difficult because effective tools for evaluating network system usability are lacking.
The goal of our work is to develop a new performance evaluation tool for network system usability. In our approach, we measure system usability and performance through mechanisms installed in the clients. Our system clarifies the behavior of the client applications and allows us to measure system performance in terms of client usability.
In this paper, we discuss the fundamental principles of client observation and its performance indices, and the design and implementation of a performance evaluation system based on such observation. We also confirm the effectiveness of our system through experiments.
Currently, various applications provide useful services on the Internet. As these applications are actively used, usability has become a major issue in network management. To improve network system quality of service, system administrators need to evaluate how their systems are working and operate them to satisfy users' requests and optimize performance. To do so, administrators must be aware of system usability factors such as access delay, processing time, and data transfer throughput.
Several ways to evaluate the performance of network systems have been developed so far.
Statistical analysis of the activity logs recorded by servers is a popular way to evaluate server performance. However, this analysis provides no information about the network links or the clients.
Benchmarking is another performance measurement method, and it can provide various indices of server performance. However, a benchmark requires a special environment, and its results are valid only for that environment.
Network monitoring allows us to evaluate network usage at the datalink level. However, performance indices at the datalink layer are not always related to application performance, because application-level performance reflects not only the characteristics of the datalink but also many other factors.
The goal of our work is to develop a new tool for evaluating usability performance. In our approach, we measure system usability and performance through mechanisms installed in the clients, and we use the measured performance as the indices of the system's usability.
To evaluate network system performance from the point of view of usability, system administrators must know how their services are working and must improve them to satisfy user requests. Network system performance with regard to usability is determined by the performance the client delivers to the user; that is, the system administrator should be aware of client system performance factors such as:
Let us consider a common framework for evaluating the performance of the end-point application. The evaluation tool should have the following functions:
Several tools have been developed to evaluate network performance.
Accordingly, we need a new performance evaluation tool for network systems that can be applied to the operation of actual systems under various configurations.
The goal of our performance evaluation method is to provide a suitable, common framework for measuring the usability performance of network systems. We do this by measuring actual network system performance from the user's point of view.
Figure 1: Fundamental concept of performance measurement
The proposed method observes the behavior of the client applications, as shown in figure 1. To access the network and communicate with a server, a client application always uses system calls provided by the Operating System (OS). A layer attached to the applications for observation, called the "observation layer", reveals all system-call-level behavior of the client applications.
Because the observation layer observes the client applications directly, it measures the results closest to what the users actually experience.
Moreover, the observation layer exposes the same interfaces as standard Application Programming Interfaces (APIs) such as BSD Sockets, the Transport Layer Interface (TLI), and Winsock. In other words, applications can access the network via the observation layer through the standard system calls without modification.
In this section, we describe the target communication model for measurement through the observation layer.
In popular Internet application protocols such as the Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), and Post Office Protocol 3 (POP3), the clients follow a procedure that we call the "Request-Response Communication Model."
Figure 2: Communication flow in HTTP
For example, figure 2 shows the communication flow in HTTP. The client sets up the connection, sends the server a request such as GET, PUT, or POST, and then receives the results that the server has processed.
The observation layer can measure various performance indices by observing the system calls. Figure 3 shows the performance indices based on the Request-Response Communication Model. The measured quantities are the time periods T1-T7 and the amount of data transferred.
Figure 3: Performance indices
Our system is able to measure the following performance indices:
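As a minimal sketch of how such indices could be derived from the recorded timestamps T1-T7 and the byte counts, consider the following C fragment. The struct fields, state names, and the exact definitions of the derived indices are assumptions for illustration; the paper's model defines the actual states.

```c
#include <assert.h>

/* Per-connection timestamps recorded by the observation layer
   (T1-T7, in milliseconds). The state names in the comments and
   the derived indices below are illustrative assumptions. */
typedef struct {
    double t1;            /* connect() called          */
    double t2;            /* connection established    */
    double t3;            /* request send started      */
    double t4;            /* request send finished     */
    double t5;            /* first response byte read  */
    double t6;            /* last response byte read   */
    double t7;            /* close() returned          */
    long   bytes_received; /* data transferred on this connection */
} conn_record;

/* Connection setup time */
double connection_time(const conn_record *r) { return r->t2 - r->t1; }

/* Waiting time: request sent until the first response byte arrives */
double response_wait_time(const conn_record *r) { return r->t5 - r->t4; }

/* Throughput of the response transfer, in bytes per millisecond */
double transfer_rate(const conn_record *r) {
    return r->bytes_received / (r->t6 - r->t5);
}

/* Transaction time Tt: whole transaction including setup and close */
double transaction_time(const conn_record *r) { return r->t7 - r->t1; }
```

Each index is a simple difference of two recorded timestamps, so computing them adds negligible work on top of the measurement itself.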
Figure 4: System model
Figure 4 shows the system model of the performance evaluation system through client observation. The system consists of four components: an observation layer, a data slicer, a database, and a manager.
The observation layer monitors the system calls between the applications and the OS.
The observation layer provides proxy system call functions, which stand in for the actual system call functions in the OS. When the application issues a system call, the corresponding proxy procedure in the observation layer is invoked. The observation layer then records the time of each state (T1-T7), counts the amount of data transferred in each TCP connection, and simultaneously invokes the actual system call in the OS.
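The proxy mechanism can be sketched in C as follows. In the real observation layer the actual system call is resolved from the OS by the linker; in this in-process sketch a function pointer stands in for it so the wrapping logic can be shown in isolation, and the hypothetical `proxy_write` only counts transferred bytes.

```c
#include <assert.h>

/* Type of the "real" call being wrapped (simplified signature). */
typedef long (*write_fn)(int fd, const void *buf, long len);

static long total_bytes_written = 0;  /* data transferred so far  */
static int  write_calls = 0;          /* number of proxied calls  */

/* Proxy: do the bookkeeping, then forward to the real call.
   The application sees exactly the result of the real call. */
long proxy_write(write_fn real_write, int fd, const void *buf, long len) {
    long n = real_write(fd, buf, len);  /* invoke actual system call */
    if (n > 0) {
        total_bytes_written += n;       /* count transferred data    */
    }
    write_calls++;
    return n;
}

/* Stand-in "real" write that pretends the OS accepted all bytes;
   only for this self-contained sketch. */
long fake_write(int fd, const void *buf, long len) {
    (void)fd; (void)buf;
    return len;
}
```

Because the proxy only timestamps and counts before forwarding, the application's view of the call's semantics is unchanged.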
The data slicer processes the raw data measured by the observation layer according to conditions such as IP address and port number, and performs basic analysis.
The data slicer writes the processed data to the database. This database is used for further analysis by the manager.
The database is a matrix containing the server address, port number, time elapsed in each state, data size, and transfer rate, ordered by time stamp. Its structure is quite simple because each TCP connection contributes a single row of data. This keeps the measurement overhead in the clients low.
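As an illustration, one row could be emitted per finished TCP connection as below. The field order follows the description above, but the exact text layout and the helper `format_row` are assumptions, not the actual database format.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Format one database row for a finished TCP connection:
   time stamp, server address, port, elapsed time (ms),
   data size (bytes), transfer rate (KB/s). Layout is illustrative. */
int format_row(char *out, size_t out_len,
               const char *server, int port,
               double elapsed_ms, long bytes, double rate_kbps,
               long stamp) {
    return snprintf(out, out_len, "%ld %s %d %.1f %ld %.1f",
                    stamp, server, port, elapsed_ms, bytes, rate_kbps);
}
```

One fixed-width-free line per connection keeps writes cheap and makes later slicing by address or port straightforward.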
Furthermore, the database can be extended to a multidimensional matrix to accommodate future analyses.
The manager collects the measurement data from the remote target clients. It analyzes the data and provides a graphical interpretation of the results. Currently, the manager is implemented as an interface to GNU Plot for viewing the analysis results.
We are implementing the performance evaluation system for Unix platforms. The observation layer, data slicer, and database are implemented in ANSI C on Sun Solaris 2.6. The manager is implemented as shell scripts and C programs.
The observation layer provides a proxy BSD Socket interface that the target application invokes.
The proxy interface is implemented as a shared library that uses dynamic linking. This allows the actual Socket interface in the OS to be replaced by the proxy Socket interface in the observation layer when the application is executed. Therefore, no modification of the application programs is needed for measurement. Even if the target system lacks dynamic linking facilities, we still do not have to modify the applications; we simply recompile/relink them against the observation layer module as a static library.
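On systems whose run-time linker supports library preloading (for example, the LD_PRELOAD mechanism on Solaris and Linux), the replacement could be performed as sketched below. The file names and compiler flags are hypothetical; this is only a sketch of the general technique, not the paper's actual build procedure.

```shell
# Build the observation layer as a shared library (names assumed).
cc -G -KPIC -o libobserve.so observe.c        # Solaris cc
# or: gcc -shared -fPIC -o libobserve.so observe.c

# Preload the library so its proxy Socket functions are resolved
# before those in the system libraries; the client runs unmodified.
LD_PRELOAD=./libobserve.so ./client_program
```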
The proxy Socket interface is invoked when the application issues a Socket system call to access the network. The observation layer monitors connection open, close, write, and read operations, records the elapsed time, and counts the data transferred. It then calls the real Socket system call in the OS.
In our current implementation, measurement data are collected between the manager and the remote target clients using our own protocol. In future work, we plan to develop a protocol that conforms to standard protocols such as SNMP.
Our system records times and counts data whenever the application transfers data to or from the network. This procedure affects the performance of the system itself. To evaluate this overhead, we compared the throughput with the observation layer and without it.
Table 1: Experimental environment
  Server          Sun Ultra 60 (two UltraSPARC-II 360MHz processors), Solaris 2.6
  Client          Sun Ultra 10 (UltraSPARC-IIi 333MHz processor), Solaris 2.6
  Server program  Apache 1.3.4
  Data size       51200 bytes
Figure 5: Configuration of experiment
The target system consists of an Apache HTTP server on a Sun Ultra 60 workstation and an ApacheBench client program on a Sun Ultra 10 workstation. These machines are connected by 100Base-T Ethernet, as shown in table 1 and figure 5.
Figure 6: Throughput with/without observation layer
Figure 6 shows the average throughput (KB/sec) per connection when the data were transferred concurrently. The results show that the overhead is 5 percent or less on average. This overhead can be considered negligible in a 100Mbps network.
Almost all workgroup-scale networks are configured as 100Base-T or 10Base-T networks. Therefore, our system is applicable to workgroup environments.
Statistical analysis of the server access log has been the usual way to evaluate performance thus far. We compared the analysis results of the server access log with the measurements of our system.
Figure 7: Transaction time by observation layer and server log
The configuration of the target system is the same as in Section 5.1. Figure 7 shows the elapsed process time (milliseconds) from the server log and the Transaction Time (Tt) (ms) measured by our system.
The values from the server log are smaller than those from our system, and the maximum difference reaches 40 percent.
The Tt in our system is the total time elapsed in the transaction, including the connection setup, data transfer, and connection close procedures in the Socket layer. By contrast, the server log records only the server-side application activity. Therefore, the server log cannot capture the performance of connection setup and closing. Moreover, the server log records performance at the application level: once the data are placed into the OS buffer, before the actual transmission, the data transfer appears to the server program to have completed early.
In this paper, we described why a new method for evaluating network system performance is needed. Our proposal is to observe the behavior of the client application and measure its performance exactly. Our system can effectively evaluate network system performance because the client application stands directly on the end-user side.
We also applied our system to an actual WWW system and confirmed its effectiveness through experiments.