Paper
IBM Systems Journal

Metrics to evaluate vendor-developed software based on test case execution results

Abstract

Various business considerations have led a growing number of organizations to rely on external vendors to develop software for their needs. Much of the day-to-day data from vendors are not available to the vendee, so the vendee organization typically conducts its own system or acceptance test to validate the software. The software effort for the 2000 Summer Olympics in Sydney was one such project, in which IBM evaluated vendor-delivered code to ensure that all elements of a highly complex system could be integrated successfully. The readiness of the vendor-delivered code was evaluated based primarily on the actual test execution results. New metrics were derived to measure the degree of risk associated with a variety of test case failures, such as functionality not enabled, bad fixes, and defects not fixed during successive iterations. The relationship of these metrics to the actual causes was validated through explicit communications with the vendor and through the subsequent actions taken to improve the quality and completeness of the delivered code. This paper describes how these metrics can be derived from the execution data and used in a software project execution environment. Although we applied these metrics in a vendor-related project, the underlying concepts are applicable to many software projects.
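
To make the idea concrete, the following is a minimal sketch of how risk indicators of the kind the abstract names might be derived from raw test case execution records. The record fields, metric names, and formulas are illustrative assumptions, not the definitions given in the paper: a "bad fix" is approximated here as a test that passed in one iteration and fails again in the next, and a persistent defect as a test that fails in two successive iterations.

    # Illustrative sketch only; field names, failure reasons, and formulas
    # are assumptions, not the paper's actual metric definitions.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Execution:
        test_id: str
        iteration: int       # test iteration (code drop / build) number
        passed: bool
        reason: str = ""     # e.g. "not_enabled", "defect"; empty when passed

    def risk_metrics(executions):
        """Derive simple risk indicators from test execution records."""
        by_test = defaultdict(dict)
        for e in executions:
            by_test[e.test_id][e.iteration] = e

        total_failures = sum(1 for e in executions if not e.passed)
        not_enabled = sum(1 for e in executions
                          if not e.passed and e.reason == "not_enabled")

        bad_fixes = 0    # proxy: passed in iteration i, failed in i + 1
        persistent = 0   # proxy: failed in iteration i and again in i + 1
        for runs in by_test.values():
            for i in sorted(runs):
                if i + 1 not in runs:
                    continue
                prev, curr = runs[i], runs[i + 1]
                if prev.passed and not curr.passed:
                    bad_fixes += 1
                elif not prev.passed and not curr.passed:
                    persistent += 1

        def ratio(n, d):
            return n / d if d else 0.0

        return {
            "not_enabled_rate": ratio(not_enabled, total_failures),
            "bad_fix_count": bad_fixes,
            "persistent_defect_count": persistent,
        }

For example, feeding in two iterations of results for two hypothetical test cases:

    runs = [
        Execution("TC-1", 1, False, "not_enabled"),
        Execution("TC-1", 2, True),
        Execution("TC-2", 1, False, "defect"),
        Execution("TC-2", 2, False, "defect"),  # defect not fixed across iterations
    ]
    print(risk_metrics(runs))
    # {'not_enabled_rate': 0.333..., 'bad_fix_count': 0, 'persistent_defect_count': 1}

Tracking each test case across successive iterations, rather than looking at pass rates per iteration in isolation, is what lets such metrics distinguish new failures, regressions, and defects the vendor has not yet fixed.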