Publication: Performance Evaluation (paper)
Differentiating the performance of systems more reliably
Abstract
Measuring the performance of a Web system with a workload generator can be modeled as a closed interactive system. In such a system, throughput and mean response time are related by the response time law. However, we find that a measured throughput and the corresponding measured mean response time can differ significantly in accuracy. As a result, one metric may be more reliable than the other for identifying the better of two given configurations of a Web system, a problem that arises frequently in practice. Using simulation, we derive rules of thumb that characterize when throughput is more reliable than mean response time, and we also explain these rules analytically. Specifically, we refine the response time law using the central limit theorem and formally define the asymptotic reliability of an estimator of a metric. Using these analytical frameworks, we provide insight into when and why one metric is more reliable than the other. © 2010 Elsevier B.V.
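The response time law the abstract refers to is the operational law for closed interactive systems, N = X (R + Z): with N users, throughput X, mean response time R, and mean think time Z, it follows that R = N/X − Z. The sketch below is not the paper's simulator; it is a minimal, self-contained discrete-event simulation (one FIFO server, exponential think and service times, all parameter values illustrative) that measures X and R and lets one check the law numerically.

```python
import heapq
import random

def closed_system(n_users, think_z, service_s, n_completions=50_000, seed=0):
    """Simulate a closed interactive system: n_users cycle between an
    exponential think phase (mean think_z) and a request served by a
    single FIFO server with exponential service times (mean service_s).
    Returns (measured throughput X, measured mean response time R)."""
    rng = random.Random(seed)
    seq = 0                      # tie-breaker so heap never compares strings
    events = []                  # (time, seq, kind); kind: "submit" or "finish"
    for _ in range(n_users):     # every user starts in a think phase at t = 0
        heapq.heappush(events, (rng.expovariate(1.0 / think_z), seq, "submit"))
        seq += 1
    fifo = []                    # submission times; fifo[0] is in service
    server_free = True
    now = 0.0
    done = 0
    sum_resp = 0.0
    while done < n_completions:
        now, _, kind = heapq.heappop(events)
        if kind == "submit":     # a user finished thinking and sent a request
            fifo.append(now)
            if server_free:
                server_free = False
                heapq.heappush(events,
                               (now + rng.expovariate(1.0 / service_s), seq, "finish"))
                seq += 1
        else:                    # the request at the head of the queue completes
            submitted = fifo.pop(0)
            sum_resp += now - submitted
            done += 1
            # the completed user starts a new think phase
            heapq.heappush(events,
                           (now + rng.expovariate(1.0 / think_z), seq, "submit"))
            seq += 1
            if fifo:             # start serving the next queued request
                heapq.heappush(events,
                               (now + rng.expovariate(1.0 / service_s), seq, "finish"))
                seq += 1
            else:
                server_free = True
    x = done / now               # measured throughput (completions per unit time)
    r = sum_resp / done          # measured mean response time
    return x, r
```

Running `closed_system(10, 2.0, 0.1)` and comparing `r` against `10 / x - 2.0` shows the two agree up to end-of-run effects, since the response time law is an operational identity rather than a distributional assumption; the paper's point is that the *estimation error* of X and R around their steady-state values can still differ markedly.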