IBM's Software Testing Track 2005 November 14, 2005 Organized by IBM Research Lab in Haifa, Israel
Software Testing Track Abstracts
Production-Testing of Embedded Systems with Aspects
Jani Pesonen, Nokia Corporation Technology Platforms, and Mika Katara and Tommi Mikkonen, Institute of Software Systems, Tampere University of Technology
A test harness plays an important role in the development of any embedded system. Although the harness can be excluded from final products, its architecture should support maintenance and reuse, especially in the context of testing product families. Aspect-orientation is a new technique for software architecture that should enable scattered and tangled code to be addressed in a modular fashion, thus facilitating maintenance and reuse.
However, the design of the interworking between an object-oriented baseline architecture and aspects attached on top of it is an issue that has not been solved conclusively. For industrial-scale use, guidelines on what to implement with objects and what to implement with aspects should be derived. In this paper, we introduce a way to apply aspect-orientation to the production-testing software of embedded systems. This part of the test harness is used to smoke-test the proper functioning of a manufactured device. The selection of a suitable implementation technique is based on the variance of the devices to be tested, with aspects used as a means of increasing flexibility. Towards the end of the paper, we also present the results of our experiments in the Symbian OS context, which reveal some obstacles in the current tool support that should be addressed before further case studies can be conducted.
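To make the idea of weaving a cross-cutting concern into a test harness concrete, here is a minimal sketch in Python. A decorator plays the role of an aspect: it attaches device-variant-specific behavior to smoke-test functions without modifying their baseline code. The variant names and test function are illustrative assumptions, not taken from the paper.

```python
# Sketch: a decorator as a rough analogue of an aspect. It "weaves"
# per-variant tracing around a smoke test without touching its body.
def variant_aspect(variant):
    def weave(test_fn):
        def wrapper(*args, **kwargs):
            print(f"[{variant}] entering {test_fn.__name__}")
            result = test_fn(*args, **kwargs)
            print(f"[{variant}] {test_fn.__name__} -> {result}")
            return result
        return wrapper
    return weave

# Hypothetical smoke test for one device variant; the baseline test
# code stays free of any variant-specific concern.
@variant_aspect("device-A")
def smoke_test_display():
    return "PASS"
```

In a real harness, selecting which aspects to weave per device variant is exactly the design decision the paper discusses; in a language like C++ on Symbian OS, this would be done with an aspect weaver rather than a decorator.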
Assisting the Code Review Process Using Simple Pattern Recognition
Eitan Farchi, IBM Haifa Research Lab, and Bradley R. Harrington, IBM Systems and Technology Group, Austin, Texas, USA
We present a portable bug pattern customization tool built on top of Perl's regular expression support and describe its usage. The tool serves a different purpose than static analysis, as it is easily applicable and customizable to individual programming environments.
The tool provides a syntactic-sugar meta-language that makes it easy to implement automatic detection of bug patterns. We describe how we used the tool to assist the code review process.
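The core idea — matching suspicious code idioms with regular expressions during review — can be sketched as follows. This is not the paper's Perl-based tool or its meta-language; it is a minimal Python analogue with two hypothetical bug patterns.

```python
import re

# Hypothetical bug patterns (not the paper's actual rules): each maps
# a name to a regex that flags a suspicious source-code idiom.
BUG_PATTERNS = {
    # "=" inside an if-condition where "==" was likely intended
    "assignment_in_condition": re.compile(r"if\s*\([^=!<>]*[^=!<>]=[^=][^)]*\)"),
    # a catch block with an empty body that silently swallows errors
    "empty_catch_block": re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}"),
}

def scan(source, patterns=BUG_PATTERNS):
    """Return (line_number, pattern_name) pairs for every match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, regex in patterns.items():
            if regex.search(line):
                findings.append((lineno, name))
    return findings
```

A reviewer would run such a scan over the files under review and treat each finding as a question to raise, not as a definitive defect — which is why easy per-project customization of the pattern list matters.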
Invited Talk: The Interface of Testing and Compiler Research
Dan Quinlan, Lawrence Livermore National Laboratory
ROSE, a Department of Energy (DOE) project, is a robust, source-to-source, open-source C and C++ compiler infrastructure targeted at supporting tools and optimizations for large-scale scientific applications (a million-plus lines of code). In developing ROSE, special attention has been given to the target audience, which is not expected to have a compiler background.
In this talk, we explain how ROSE can be used to support the development of testing tools and the general analysis of large-scale applications. Numerous tools are relatively easy to build using ROSE. We explain the features of ROSE that we have identified as especially useful for testing and outline, with examples, some of the different testing tools that can be built.
We hope that this is the start of a stronger collaboration between the compiler and testing communities, one that will result in better testing tools and in improving ROSE with new features useful in the testing domain.
Invited Talk: Software Model Checking: Where It Is and Where It's Heading
Scott D. Stoller, State University of New York at Stony Brook
The growing importance of model checking in hardware verification and the difficulty of producing correct software are driving a growing interest in model checking of software. This talk begins with a very brief introduction to model checking, surveys some success stories in software model checking and the main technical ideas that underlie them, and then describes several current research directions in the software model checking community, including partial-order reduction, heuristic-guided search, combining symbolic execution and model checking, concrete model checking, heap abstractions, and environment modeling.
Effective Black-box Testing with Genetic Algorithms
Mark Last and Shay Eyal, Department of Information Systems Engineering, Ben-Gurion University, and Abraham Kandel, Department of Computer Science and Engineering, University of South Florida
Black-box (functional) test cases are identified from the functional requirements of the tested system, which is viewed as a mathematical function mapping its inputs onto its outputs. While the number of possible black-box tests for any non-trivial program is extremely large, testers can run only a limited number of test cases within their resource limitations. An effective set of test cases is one that has a high probability of detecting faults present in a computer program. In this paper, we introduce a new, computationally intelligent approach to the generation of effective test cases based on a novel Fuzzy-Based Age Extension of Genetic Algorithms (FAexGA). The basic idea is to eliminate "bad" test cases that are unlikely to expose any error, while increasing the number of "good" test cases that have a high probability of producing an erroneous output. The promising performance of the FAexGA-based approach is demonstrated by testing a complex Boolean expression.
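The underlying idea — evolving test inputs toward those that expose a fault — can be sketched with a plain genetic algorithm. The sketch below is an assumption-laden simplification: it omits the fuzzy age extension that distinguishes FAexGA, uses an invented three-variable Boolean expression with one injected fault, and scores a test as "good" when the correct and faulty versions disagree on it.

```python
import random

random.seed(0)  # deterministic run for this sketch

def correct(a, b, c):
    return (a and b) or c

def faulty(a, b, c):
    # Injected fault for illustration: 'or' mutated into 'and'
    return (a and b) and c

def fitness(test):
    """1 if the test exposes the injected fault, else 0."""
    return int(correct(*test) != faulty(*test))

def evolve(pop_size=20, generations=30):
    # Each individual is a tuple of three Boolean inputs.
    pop = [tuple(random.choice([True, False]) for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(generations):
        if any(fitness(t) for t in pop):
            break  # a fault-exposing test case has been found
        def pick():  # binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randint(1, 2)          # one-point crossover
            child = list(p1[:cut] + p2[cut:])
            if random.random() < 0.1:           # bit-flip mutation
                i = random.randrange(3)
                child[i] = not child[i]
            nxt.append(tuple(child))
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

In real black-box testing there is no faulty reference implementation to compare against, so practical fitness functions rely on surrogates (e.g., output diversity or specification checks); FAexGA additionally manages the lifetime of individuals to keep the population from stagnating on "bad" tests.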
Optimal Algorithmic Debugging and Reduced Coverage in Search in Structured Domains
Yosi Ben-Asher, Igor Breger, and Ilia Gordon, Computer Science Department, University of Haifa, and Eitan Farchi, IBM Haifa Research Lab
Traditional code-based coverage criteria for industrial programs are rarely met in practice due to the large size of the coverage list. In addition, debugging industrial programs is hard due to the large search space. We introduce a new tool, REDBUG, based on technology for optimal search in structured domains. REDBUG supports a reduced coverage criterion that renders the coverage of industrial programs practical. In addition, by using an optimal search algorithm, REDBUG reduces the number of steps required to locate a bug.
REDBUG also combines testing and debugging into one process.
SLAM and Static Driver Verifier: Technology Transfer of Formal Methods inside Microsoft
Thomas Ball, Senior Researcher, Testing, Verification, and Measurement, Microsoft Research (MSR)
The SLAM project originated in Microsoft Research in early 2000. Its goal is to automatically check that a C program correctly uses the interface to an external library. The project uses and extends ideas from symbolic model checking, program analysis, and theorem proving in novel ways to address this problem. The SLAM analysis engine forms the core of a new tool called Static Driver Verifier (SDV) that systematically analyzes the source code of Windows device drivers against a set of rules that define what it means for a device driver to properly interact with the Windows operating system kernel. I will discuss the history of the SLAM project and comment on the technology transfer of formal methods and software tools inside Microsoft.
Joint work with Sriram K. Rajamani, Byron Cook, and Vladimir Levin.
Keynote: Robust Embedded Firmware Development
Thomas Wolf, Program Director, eServer Firmware Development, IBM Systems and Technology Group, Boeblingen, Germany
Embedded firmware has evolved from small-footprint assembler programs into complex software applications. Today, it is pervasive and distributed in all kinds of consumer and high-end industry products. For some applications, the firmware development process is more demanding than customary modern software engineering processes, both in meeting quality and time-to-market requirements and in satisfying real-time behavior and memory-footprint constraints. This talk introduces a firmware application as used in highly reliable servers, along with a robust, front-end-loaded development and verification process that yields high firmware quality very early in the development cycle.
Invited Talk: Introducing Test-driven Development
Yael Dubinsky, Technion - Israel Institute of Technology
Test-driven Development (TDD) is a programming technique that aims at producing high-quality code. In recent years, this technique has regained attention as part of the agile method Extreme Programming (XP), which provides a framework of values and practices that, among other things, supports TDD. Introducing TDD into software processes changes both the way developers program their code and the way they feel about their work. In this talk, TDD is positioned as one of the software development practices, and the challenge of introducing it into software teamwork is presented.
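The change in working style that TDD asks of developers can be illustrated with a tiny red/green cycle: the test is written first and fails, and only then is the simplest passing implementation written. The example function is an invented illustration, not from the talk.

```python
import unittest

# Step 1 (red): the tests are written before the implementation exists,
# so they fail first and pin down the intended behavior.
class TestFizzBuzz(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_plain_numbers(self):
        self.assertEqual(fizzbuzz(4), "4")

# Step 2 (green): write the simplest code that makes the tests pass;
# refactoring then happens under the protection of the tests.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

The discipline — no production code without a failing test demanding it — is precisely the habit change whose adoption in teams the talk addresses.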
Why Johnny Can't Test and Can Test-driven Development Help: A People-centered Analysis of Testing
Orit Hazzan, Technion - Israel Institute of Technology
In this talk, the activity of testing is examined from social, cognitive, affective, and managerial perspectives. I analyze why testing is skipped in many traditional software development processes and illustrate how test-driven development may help cope with these obstacles. I conclude with explaining how test-driven development fits into agile software development environments in general and Extreme Programming in particular.
Benchmarking and Testing OSD for Correctness and Compliance
Dalit Naor, Petra Reshef, Ohad Rodeh, Allon Shafrir, and Adam Wolman, IBM Haifa Research Lab
We describe tools and methodologies crafted to test object-based storage devices (OSDs) for correctness and compliance with the T10 OSD standard.
Special consideration is given to testing the security model of an OSD implementation. Additionally, some work was carried out on building OSD benchmarks; as more OSD implementations emerge, this work can serve as the basis for a future general-purpose OSD benchmark suite. The tool described here has been used to verify object disks built by Seagate and IBM Research.
Invited Talk: Directed Model Checking
Bernd Finkbeiner, Saarland University
Verification by model checking requires a complete traversal of the system state space, which is often too expensive for real-life designs. Frequently, however, the goal is to find the error in an incorrect system, rather than to prove the absence of errors in a correct system. With a good search strategy, finding the error may only require exploring a small portion of the state space.
In directed model checking, the search for the error is guided by an estimate of the distance from the current state to the nearest error state. In this talk, I will survey some success stories of directed model checking and report on our experience with deriving estimates from abstractions. I will present a distance-preserving abstraction for concurrent systems that allows one to automatically compute an estimate of the error distance without hitting the state explosion problem.
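The search strategy described above can be sketched generically: a greedy best-first search that always expands the state with the smallest estimated distance to an error state. The transition system and heuristic below are invented toy examples (integer states, error at 13, estimate |13 - s|), not the abstraction-derived estimates of the talk.

```python
import heapq

def directed_search(start, successors, is_error, h):
    """Greedy best-first search: expand states in increasing order of
    the estimated distance h(state) to the nearest error state."""
    frontier = [(h(start), start)]
    parent = {start: None}          # also serves as the visited set
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_error(state):
            # Reconstruct the counterexample trace back to the start.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                heapq.heappush(frontier, (h(nxt), nxt))
    return None  # no reachable error state

# Toy transition system: from state s we can go to s+1 or 2*s.
trace = directed_search(
    start=1,
    successors=lambda s: [s + 1, 2 * s] if s <= 13 else [],
    is_error=lambda s: s == 13,
    h=lambda s: abs(13 - s),
)
```

With a good estimate, only a small slice of the state space is explored before the error trace is found; the quality of h is exactly what distance-preserving abstractions aim to guarantee.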
Joint work with Klaus Dräger and Andreas Podelski.