
Verification Seminar 2003



Abstracts
Predictable Design by Pre-Implementation Validation of Non-functional Properties (Abstract)
Wolfgang H. Nebel, Oldenburg University, OFFIS and ChipVision Design Systems

Design cost is determined by the engineering expenses needed to implement chips of more than 400 million transistors. On average, the estimated design time is exceeded by more than 50% due to unexpected design respins. These become necessary when, at late design phases, the analysis of design properties reveals that non-functional design constraints are not met.

The presentation shows how these loops can be avoided by early consideration and estimation of these properties. We will exemplify a predictable design flow using the power consumption of integrated circuit designs. The power dissipation of electronic systems has become a significant optimization criterion because of heat dissipation problems, battery lifetime concerns in view of the increasing computational demands of mobile multimedia devices, and reliability issues.

Power and other physical properties of a design can and should be addressed from the very beginning of an SoC design, i.e., when the algorithmic specification and system architecture are defined. The presentation explains a number of optimization steps and verification techniques applicable at these early design phases. They not only help optimize the design, but at the same time help achieve a predictable design flow with a significant reduction in design time and cost.
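As a toy illustration of pre-implementation power estimation (the formula is the standard CMOS dynamic power relation; the blocks, numbers, and trade-off below are invented for this sketch, not taken from the talk), consider comparing two candidate architectures before any RTL exists:

```python
# Architecture-level power estimate using the standard CMOS relation
# P = alpha * C * V^2 * f.  Capacitances and activity factors are
# invented for illustration.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power of one block: activity * capacitance * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Two candidate architectures for the same algorithm: one fast unit
# vs. two parallel units at half the clock, where parallelism permits
# a lower supply voltage.
single = dynamic_power(alpha=0.3, c_farads=40e-12, v_volts=1.2, f_hz=400e6)
parallel = 2 * dynamic_power(alpha=0.3, c_farads=40e-12, v_volts=0.9, f_hz=200e6)
print(f"single: {single*1e3:.2f} mW,  parallel: {parallel*1e3:.2f} mW")
```

Even at this crude level, the comparison illustrates why architectural decisions made before implementation dominate the power budget.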


Deep Knowledge Test Generators and Functional Verification Methodology (Abstract)
Laurent Fournier, IBM Haifa Labs

Over the last five years, we have developed at HRL three random test generators focused on specific, bug-prone areas: Floating Point, Microarchitecture Flow, and Address Translation. They were named "Deep Knowledge Test Generators" (DKTGs), as a deep understanding of the targeted space is embedded at their core. In this talk, after briefly introducing these three DKTGs, we will elaborate on their common properties and show how they integrate into and promote the functional verification process. In particular, we will demonstrate their efficiency in fostering the implementation of important concepts such as coverage by generation and cross-product coverage models. Finally, we will discuss the relationship between DKTGs and test plans, and show how DKTGs promote the creation of more comprehensive test plans, which are the most critical determinant of overall verification scope and quality.
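As a rough illustration of a cross-product coverage model (the attributes and values below are invented for the sketch, not the actual DKTG models), the coverage space is the Cartesian product of a few test attributes, and each generated test marks the tuple it exercises:

```python
from itertools import product

# A toy cross-product coverage model: the space is the Cartesian
# product of test attributes; each test covers exactly one tuple.
ATTRIBUTES = {
    "opcode":   ["add", "sub", "mul", "div"],
    "rounding": ["nearest", "zero", "up", "down"],
    "operand":  ["normal", "denormal", "zero", "infinity", "NaN"],
}

space = set(product(*ATTRIBUTES.values()))   # the full coverage space
covered = set()

def record(test):
    """Mark the cross-product bin exercised by one generated test."""
    covered.add(tuple(test[k] for k in ATTRIBUTES))

# Example: a floating-point test dividing a denormal, rounding to nearest.
record({"opcode": "div", "rounding": "nearest", "operand": "denormal"})

holes = space - covered
print(f"covered {len(covered)}/{len(space)} bins; {len(holes)} holes remain")
```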


Stochastic Approach to Constraint Satisfaction Problems in the Hardware Verification Domain (Abstract)
Yehuda Naveh, IBM Haifa Labs

The most powerful way of creating test benches for the functional verification of complex hardware designs is to model the design specification and the verification tasks as a set of constraints over the test-bench parameters, and to solve the resulting constraint satisfaction problem (CSP). Traditional CSP solvers rely on deterministic propagation algorithms in which each constraint reduces the domains of the parameters it sees. In my talk, I will show that for some classes of hardware verification problems, stochastic search algorithms are more effective, both in finding a solution and in the quality of that solution. After discussing the unifying features of stochastic CSP algorithms, I will present a non-local-search stochastic approach that relies heavily on learning the underlying topography of the problem. In this 'variable range hopping' approach, the search proceeds at all length scales of the problem at any given time. Results in creating test benches for the floating-point unit of high-end processors will be briefly examined.
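For flavor, here is a minimal stochastic CSP search in the min-conflicts style. It is a generic stand-in, not the 'variable range hopping' algorithm of the talk, and the test-bench constraints are hypothetical:

```python
import random

def min_conflicts(variables, domains, constraints, max_steps=10_000):
    """Generic stochastic CSP search: start from a random full assignment
    and repeatedly reassign a conflicted variable to the value violating
    the fewest constraints."""
    assign = {v: random.choice(domains[v]) for v in variables}

    def conflicts(v, val):
        trial = dict(assign, **{v: val})
        return sum(1 for c in constraints if not c(trial))

    for _ in range(max_steps):
        bad = [v for v in variables if conflicts(v, assign[v]) > 0]
        if not bad:
            return assign                      # all constraints satisfied
        v = random.choice(bad)                 # stochastic variable choice
        assign[v] = min(domains[v], key=lambda val: conflicts(v, val))
    return None                                # give up after max_steps

# Hypothetical test-bench parameters: two word-aligned addresses that
# must fall on the same cache-line offset but on different 4 KB pages.
doms = {"addr_a": range(0, 1 << 16, 4), "addr_b": range(0, 1 << 16, 4)}
cons = [lambda a: a["addr_a"] % 256 == a["addr_b"] % 256,
        lambda a: a["addr_a"] // 4096 != a["addr_b"] // 4096]
print(min_conflicts(list(doms), doms, cons))
```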


MSIL DSP Platform Verification Methodology (Abstract)
Amit Gur, Verification Leader of the DSP Platform, Motorola

The verification of the DSP platform made use of the following verification technologies: cycle-accurate executable specification, SystemC, constraint-based random generation, functional and code coverage, assertion-based verification, and model checking. We will show how a plan was built by combining these technologies. The plan's goal was to achieve minimal pre-silicon bugs and maximum verification IP reuse. We will present precise numbers (bugs found, schedule) that reflect the effectiveness of this plan. We will conclude with insight into additional requirements, not yet fulfilled by industrial tools, that would improve this methodology.


Formal Verification or Simulation: Why not both? (Abstract)
Michel Jalfon, Analog Devices

Design verification can, in principle, be performed by either simulation or formal verification. At present, however, formal techniques are still applicable only to limited parts of the design verification process. The major limitation of traditional BDD-based engines is the number of variables they can handle, which makes large designs unmanageable in terms of memory.

Large designs can still be checked using a satisfiability-based (SAT) engine, which looks for bugs within a time-bounded window but cannot verify a rule for all reachable states. In addition, the trade-off between the number of variables in the model and the depth of the search prevents the SAT engine from finding bugs that are distant from the initial state.

We present a flow, developed at the Israel Design Center of Analog Devices, that combines SAT-based formal verification with initial states extracted from simulation. The objective is to launch the SAT formal exploration from selected starting points identified during simulation. The initial state for the SAT run is composed of the values of all the state points of the design under test and the formal verification environment at a given time. By taking these values from simulation, we ensure that the initial state from which the bounded formal verification starts is a reachable one.

To track the changes of the environment variables and dump the initial states during simulation upon occurrence of a triggering event, we use the FoCs tool from IBM. Auxiliary signals and variables (Modeling Layer) and temporal properties that build the triggering event (Temporal and Verification Layers) are translated by FoCs into Verilog code that is embedded in the simulation.

The approach of this flow is to completely decouple the simulation and formal verification runs, making the dumped initial states the only connection between the two. In this way, simulation can be run at the chip level, while formal verification can be performed at the module level.
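As a minimal sketch of this decoupling (the state space, trigger, and bounded search below are toy stand-ins, not the actual Analog Devices tooling or the FoCs output), simulation dumps snapshots whenever a trigger fires, and each snapshot later seeds an independent bounded check:

```python
import random

def simulate(steps, trigger):
    """Chip-level simulation stand-in: walk a toy state space and dump a
    snapshot whenever the triggering event fires."""
    state, snapshots = 0, set()
    for _ in range(steps):
        state = (state + random.randint(1, 3)) % 97
        if trigger(state):
            snapshots.add(state)         # reachable by construction
    return sorted(snapshots)

def bounded_check(init, bad, trans, k):
    """Module-level bounded exploration from a simulation-dumped state."""
    frontier = {init}
    for depth in range(1, k + 1):
        frontier = {t for s in frontier for t in trans(s)}
        if any(bad(s) for s in frontier):
            return depth                 # bug within k steps of the seed
    return None                          # clean within the bound: no proof

trans = lambda s: {(s + 1) % 97, (s + 7) % 97}
for seed in simulate(1000, trigger=lambda s: s % 13 == 0):
    d = bounded_check(seed, bad=lambda s: s == 42, trans=trans, k=5)
    if d:
        print(f"bad state reachable {d} steps after seed {seed}")
```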


Keynote: A Survey of Abstraction Refinement Techniques (Abstract)
Fabio Somenzi, University of Colorado at Boulder

Abstraction is fundamental in combating the state explosion problem in model checking. Automatic techniques have been developed that eliminate presumably irrelevant detail from a model and then refine the abstraction until it is accurate enough to prove the given property. This abstraction refinement approach, initially proposed by Kurshan, has received great impetus from the use of efficient satisfiability solvers to check for the existence of error traces in the concrete model. Today it is widely applied to the verification of both hardware and software. For complex proofs, the challenge is to keep the abstract model small while carrying out most of the work on it. We review and contrast several refinement techniques that have been developed with this objective. These techniques differ in aspects ranging from the choice of decision procedures for the various tasks, to the recourse to syntactic or semantic approaches (e.g., "moving fence" vs. predicate abstraction), to the analysis of bundles of error traces rather than individual ones.
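A runnable toy of the loop (a schematic paraphrase of counterexample-guided abstraction refinement over a 16-state machine, invented for illustration rather than drawn from the survey):

```python
# Concrete model: a 16-state counter whose successor skips the bad
# value, so the bad state is unreachable and the property holds.
STATES = range(16)
INIT = 0
BAD = {12}
def trans(s):
    n = (s + 1) % 16
    return n if n != 12 else 13

def abstract_cex(partition):
    """Check the abstract model (existential abstraction over the
    partition): return a path of abstract classes from the initial class
    to a class containing a bad state, or None if no such path exists."""
    cls = {s: i for i, block in enumerate(partition) for s in block}
    queue, seen = [[cls[INIT]]], {cls[INIT]}
    while queue:
        path = queue.pop(0)
        if partition[path[-1]] & BAD:
            return path
        for t in {cls[trans(s)] for s in partition[path[-1]]}:
            if t not in seen:
                seen.add(t)
                queue.append(path + [t])
    return None

def cegar():
    partition = [set(STATES)]                 # coarsest abstraction
    while True:
        path = abstract_cex(partition)
        if path is None:
            return "property holds"
        reach, split_at = {INIT}, len(path) - 1
        for i in range(1, len(path)):         # replay the path concretely
            step = {trans(s) for s in reach} & partition[path[i]]
            if not step:                      # spurious abstract transition
                split_at = i - 1
                break
            reach = step
        else:
            if reach & BAD:
                return "real counterexample"  # the trace concretizes
        block = partition[path[split_at]]     # refine: split the class into
        partition[path[split_at]] = reach     # its reachable part and the rest
        partition.append(block - reach)

print(cegar())                                # expected: property holds
```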


An Optimized Symbolic Bounded Model Checking Engine (Abstract)
Rachel Tzoref, IBM Haifa Labs

It has been shown that bounded model checking using a SAT solver can solve many verification problems that would cause BDD-based symbolic model checking engines to explode. However, no single algorithmic solution has proven totally superior in resolving all types of model checking problems. We present an optimized bounded model checker based on BDDs and describe its advantages and drawbacks compared to BDD-based symbolic model checking and SAT-based model checking. We show that, in some cases, this engine solves verification problems that could not be solved by the other methods.


Reasoning About Truncated Paths (Abstract)
Dana Fisman, Weizmann Institute of Science

We consider the problem of reasoning with linear temporal logic on truncated paths. A truncated path is a path that is finite, but not necessarily maximal. Truncated paths arise naturally in several areas, among which are incomplete verification methods (such as simulation or bounded model checking) and hardware resets. We present a formalism for reasoning about truncated paths, and analyze its characteristics.

Joint work with C. Eisner, J. Havlicek, Y. Lustig, A. McIsaac, and D. Van Campenhout
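As a toy rendering of the weak/strong distinction on truncated paths (an informal illustration, not the formalism of the talk): under a weak view, an obligation that the truncated path has not yet refuted is forgiven; under a strong view, it must already have been fulfilled:

```python
# Evaluate "eventually p" (F) and "always p" (G) on a truncated trace.
# strong=True demands the obligation be fulfilled within the observed
# prefix; strong=False (weak) forgives whatever truncation may hide.
def F(p, trace, strong):
    if any(p(s) for s in trace):
        return True                  # witness seen before truncation
    return not strong                # weak view: a witness may still come

def G(p, trace, strong):
    if not all(p(s) for s in trace):
        return False                 # refuted within the observed prefix
    return not strong                # strong view: never fully discharged

trace = [0, 1, 2, 3]                 # truncated: the run continued past 3
print(F(lambda s: s == 2, trace, strong=True))    # True: witness observed
print(F(lambda s: s == 9, trace, strong=False))   # True: weakly forgiven
print(F(lambda s: s == 9, trace, strong=True))    # False: not fulfilled
print(G(lambda s: s < 10, trace, strong=False))   # True: not yet refuted
print(G(lambda s: s < 10, trace, strong=True))    # False: never discharged
```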


High End Router Line Card Verification (Abstract)
Rami Zemach, Routing Technology Group, Cisco Systems Israel

An advanced high-end router line card is a fairly complicated system, composed of several large-scale ASICs and involving hardware features as well as vast embedded software. The development of such a system poses many challenges to the verification teams at all levels. This presentation outlines the verification strategy of a recent system developed as a collaborative project between Cisco Systems Israel (CSI) and other Cisco teams in San Jose.

The overall DV effort included architectural modeling of system features and interfaces, exhaustive simulations at the block and ASIC levels, multi-ASIC simulation, and post-silicon verification. Advanced hardware verification languages, e and Vera, were used for the RTL and GL simulation, while multi-ASIC simulation was carried out over a distributed platform using a combination of C++ and Perl. In addition, SystemC was used for HW/SW co-simulation.

The Israeli team was responsible for the development of two multi-million-gate ASICs: the first a packet processor consisting of many third-party IP cores in addition to dedicated hardware, and the second an algorithmic queuing buffer manager. This presentation focuses on this contribution: the challenges, the tactics, the problems, and the results.


Coverage Oriented Verification of Banias (Abstract)
Alon Gluska, Intel, Haifa

The growing complexity of state-of-the-art microprocessors dictates the use of cost-effective verification methods. Functional coverage was widely applied in the verification of Banias, Intel's new IA-32 microprocessor designed solely for the mobile computing market. In this presentation, we describe the practical coverage approach as it was carried out in the verification of Banias. In this coverage-oriented verification approach, focus shifts gradually from basic logic cleanup using random testing to coverage-driven verification. As the conclusions will show, a retrospective evaluation of this approach sheds light on its significant impact beyond the original intentions, and uncovers several potential refinements that would make the approach more effective in future projects.
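A schematic of that gradual shift (the coverage bins and the biasing knob are invented for illustration, not Banias project details): early tests are generated purely at random, and as the easy bins fill up, generation is increasingly biased toward the remaining holes:

```python
import random

# Cross-product coverage bins for a made-up machine: every combination
# of instruction class and privilege mode must be exercised.
BINS = [(op, mode) for op in ("load", "store", "branch")
                   for mode in ("user", "kernel")]
covered = set()

def next_test(bias_to_holes):
    """Pick the bin the next generated test will target."""
    holes = [b for b in BINS if b not in covered]
    if holes and random.random() < bias_to_holes:
        return random.choice(holes)       # coverage-driven pick
    return random.choice(BINS)            # plain random pick

for i in range(200):
    phase = min(1.0, i / 100)             # ramp: random -> coverage-driven
    covered.add(next_test(bias_to_holes=phase))

print(f"{len(covered)}/{len(BINS)} bins covered")
```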
