IBM Haifa Labs Leadership Seminars
Verification Seminar 2004


Abstracts
Formal Verification of Synchronizers in GALS SoCs (Abstract)
Ran Ginosar, Head, VLSI Systems Research Center, Electrical Engineering Department, Technion - Israel Institute of Technology

GALS SoCs require synchronization of control and data transfers between different clock domains. Typical synchronization methods and circuits are error-prone [1]. Neither circuit nor logic simulation can assure the correctness of synchronizers, because these tools are not designed to handle multiple clocks operating at arbitrary relative frequency and phase. It seems that only correctness-by-construction and formal verification can address this issue successfully. Correctness-by-construction of synchronizers is not always feasible, because a synchronizer always consists of at least two parts, one in each of the two communicating modules, and often the two modules originate from different sources: different IP core vendors, different libraries, or different design teams. In general, when two modules are connected with a synchronizer, formal verification may be our only avenue to assuring correct operation.

Inter-clock domain communication that requires synchronization can be classified into a few classes, according to the relationship between the two clocks. Two domains are synchronous if their respective clocks operate at the same frequency and there is no phase difference between the clocks. They are considered mesochronous if the clocks have the same frequency but there is a fixed relative phase difference, and multi-synchronous if the phase difference drifts over time (namely, it changes slowly). The two domains are periodic if the two frequencies are different but fixed, and asynchronous if we know nothing about the relationship of the two clocks. In the paper, we focus on asynchronous clock domains and apply design methodologies that were originally developed for asynchronous circuits.
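As a rough illustration, this classification can be written as a small decision procedure. The following Python sketch uses hypothetical boolean summary attributes of the two clocks; the attribute names are invented here for illustration and do not come from the paper:

```python
def classify_domains(same_frequency, zero_phase, phase_fixed, frequencies_fixed):
    """Classify the relationship between two clock domains from
    hypothetical summary attributes of their clocks."""
    if same_frequency:
        if not phase_fixed:
            return "multi-synchronous"   # phase difference drifts slowly over time
        return "synchronous" if zero_phase else "mesochronous"
    if frequencies_fixed:
        return "periodic"                # different but fixed frequencies
    return "asynchronous"                # nothing is known about the clocks
```

For example, two clocks at the same frequency with a fixed nonzero phase offset fall into the mesochronous class, while two clocks about which nothing is known fall into the asynchronous class studied in the paper.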

In this research we employ Model Checking for verifying synchronizers. Verification is divided into two stages, structural and functional. In the structural verification stage, the HDL is analyzed to identify crossings of signals from one clock domain to another, synchronization circuits are detected (e.g., when two flip-flops are found on an asynchronous input), and enabled data transfers are found (where a synchronized signal is used to enable the loading of asynchronous inputs). Each identified synchronizer can be mapped to a pre-defined synchronization template, thus aiding in subsequent functional verification. Structural verification of synchronizers is already supported by some commercial EDA tools (e.g., [2]).
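As a toy illustration of the structural stage, the following Python sketch scans a hypothetical netlist representation for signals that cross into another clock domain through two back-to-back flip-flops clocked in the receiving domain. The data layout and all names are invented for illustration; they are not taken from any EDA tool:

```python
# Hypothetical netlist: each flip-flop records its clock domain,
# data input, and output; signal_domain maps source signals to the
# domain that drives them.
flops = {
    "ff1": {"clock": "clkB", "d": "async_in", "q": "s1"},
    "ff2": {"clock": "clkB", "d": "s1", "q": "s2"},
}
signal_domain = {"async_in": "clkA"}

def find_two_flop_synchronizers(flops, signal_domain):
    """Flag signals that cross into a new clock domain through two
    back-to-back flip-flops clocked in the receiving domain."""
    found = []
    by_input = {f["d"]: name for name, f in flops.items()}
    for name, f in flops.items():
        src = f["d"]
        if src in signal_domain and signal_domain[src] != f["clock"]:
            # Crossing detected; look for a second flop on the output.
            second = by_input.get(f["q"])
            if second and flops[second]["clock"] == f["clock"]:
                found.append((src, name, second))
    return found
```

On the example netlist above, the scan reports that `async_in` (driven in `clkA`) is synchronized into `clkB` by the pair `ff1`, `ff2`, which could then be mapped to a two-flop synchronizer template.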

This research focuses on functional verification. We have investigated a number of approaches: Single domain verification, multi-domain verification, and data verification.

In single-domain verification, the synchronizer is decomposed into two halves, one in each clock domain, by means of the structural verification. The interconnect of the two halves, as well as the synchronizing flip-flops, are verified in the structural stage. Functional verification is performed separately on each of the two halves; each functional verification is conducted inside a single domain. A synchronizer template is identified, and the corresponding STG that specifies the template is employed for verification. PSL assertions [3] are derived from the STG to verify correct operation of all parts of the STG. The assertions are then verified using a model checker (we used @HDL @Verifier [2] and IBM RuleBase [4]). Common errors are demonstrated and verified.

In multi-domain verification, two clocks are modeled by non-deterministic methods that verify all potential traces of two interleaved clocks. The entire synchronizer, which spans two clock domains, is verified in a single operation. This verification may or may not attempt to match the synchronizer with a template synchronizer.
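For intuition, the set of traces that such non-deterministic modeling covers can be pictured as all interleavings of the ticks of the two clocks. The following Python sketch enumerates them explicitly for a bounded number of ticks; a model checker explores these traces symbolically rather than listing them one by one:

```python
from itertools import combinations

def interleavings(n_a, n_b):
    """Yield every ordering of n_a ticks of clock A and n_b ticks of
    clock B as a tuple of 'A'/'B' events."""
    total = n_a + n_b
    for a_slots in combinations(range(total), n_a):
        trace = ["B"] * total
        for i in a_slots:
            trace[i] = "A"
        yield tuple(trace)
```

Even for two ticks of each clock there are already six distinct interleavings, which is why simulation with one fixed clock alignment cannot cover the behavior that non-deterministic modeling captures in a single verification run.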

In data verification, actual data transfers from one domain to another, rather than the control handshake protocol, are verified. Non-determinism is employed to cover all possible data traces in the sender and to verify that they are received correctly by the receiver, without any misses or duplicates.
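A minimal Python sketch of the checked property, with plain lists standing in for the symbolic traces that non-determinism would cover exhaustively (the function name and verdict format are invented for illustration):

```python
def check_transfer(sent, received):
    """Report items that the crossing missed or duplicated. A correct
    data transfer yields empty lists for both."""
    missed = [x for x in sent if x not in received]
    duplicated = sorted(x for x in set(received) if received.count(x) > 1)
    return {"missed": missed, "duplicated": duplicated}
```

For example, a receiver that samples a slow sender twice would show up as a duplicate, while a sender that changes data before the receiver samples it would show up as a miss.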

References

  1. R. Ginosar, "Fourteen Ways to Fool Your Synchronizer," IEEE Int. Symp. on Asynchronous Circuits and Systems (ASYNC), 2003.
  2. @HDL, Inc., "@Designer and @Verifier," http://www.athdl.com/products.html
  3. Accellera Formal Verification Technical Committee, "Property Specification Language Reference Manual," Ver. 1.01, April 2003, http://www.eda.org/vfv/
  4. Shoham Ben-David, Cindy Eisner, Daniel Geist, Yaron Wolfsthal, "Model Checking at IBM," Formal Methods in System Design 22(2), 2003.


A Massively Parallel Platform for Formal Verification: RuleBase Parallel Edition (Abstract)
Rachel Tzoref, IBM Haifa Labs

An inherent problem in formal verification is the so-called "state explosion" problem, which prevents formal methodologies from scaling up to the chip level, or even to the unit level. Many techniques have been suggested to extend the scope of formal verification, including the use of distributed computing.

RuleBase PE is a new formal verification tool that allows users to overcome space and time limitations by distributing the verification tasks over a large number of machines.

RuleBase PE offers two levels of parallelism: coarse-grain and fine-grain. Coarse-grain parallelism means running a few verification engines simultaneously. The engines may be based on different strategies (e.g., SAT-based vs. BDD-based) or may just be configured differently. Either way, the first engine to reach an answer will stop the search of the other engines. Because it is usually difficult to tell a priori which technique will be the fastest, and the runtime difference between techniques may be dramatic, executing different techniques in parallel can save a lot of time for the verification engineer, as well as CPU time. In addition, engines may share information about the verification problem and help each other get answers even faster. Fine-grain parallelism means breaking the algorithm of a verification engine into smaller independent tasks and executing them in parallel. RuleBase PE offers a few such distributed algorithms, where additional computing power allows RuleBase to run the algorithm in a more efficient manner.
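A minimal Python sketch of the coarse-grain idea, using trivial stand-in engine functions (RuleBase PE's actual engines and interfaces are not shown here; the names below are invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_engines(engines, problem):
    """Run several engines on the same problem; return the name and
    result of the first one to finish, and cancel the rest (runs that
    already started simply have their results ignored)."""
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = {pool.submit(fn, problem): name for name, fn in engines}
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        winner = next(iter(done))
        return futures[winner], winner.result()

# Stand-in engines: a slow BDD-style search and a fast SAT-style search.
def bdd_engine(problem):
    time.sleep(0.2)
    return "holds"

def sat_engine(problem):
    time.sleep(0.01)
    return "holds"
```

Here the SAT-style engine finishes first and determines the answer; with real engines, which strategy wins varies from property to property, which is exactly why running them in parallel pays off.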


SystemVerilog: Introduction and a User Perspective (Abstract)
Johny Srouji, Engineering Manager, Intel CAD Division, Haifa

This presentation provides an overview of SystemVerilog, covering both the language's semantics and capabilities and the user perspective. SystemVerilog is a Register Transfer Level (RTL) language that enhances Verilog to enable a higher level of abstraction for modeling and verification, adding extensive capabilities for design modeling, verification, and assertions, as well as Application Programming Interface (API) functions for assertions, coverage, and C.

The presentation will briefly introduce SystemVerilog's history and standardization status under the IEEE, and describe the main objectives and challenges to be resolved. This will be followed by a presentation of some of the major language constructs and capabilities in the areas of design modeling, verification, and assertions. The presentation will conclude with a user perspective on the language, based on actual usage in a microprocessor design at Intel. This section will include a description of some of the language extensions in terms of their impact on writing a concise, abstract description of the design.


State of the Technology Industry in Israel... and the Future (Abstract)
Orna Berry, Venture Partner in Gemini Israel Funds and Former Chief Scientist of the Israeli Ministry of Industry and Trade

Israel's Information and Communications Technology (ICT) industry is large relative to that of other countries: it accounted for about 18.4 percent of Israel's business-sector product in 2003, compared with an average of less than 10 percent in the EU and approximately 11 percent in the US. Dr. Berry's discussion will focus on the historical preconditions for this figure, as well as Israel's current initiatives to maintain and increase commitments to the ICT industry. In addition, the talk will focus on emerging trends in the Israeli technology space, such as collaboration with developing markets and the applicability of military-inspired technology to peaceful uses, especially in the medical arena.


EDA Standards: Motivation, Players, Challenges, and Achievements (Abstract)
Dennis Brophy, Chair, Accellera Standards Organization and Director of Strategic Business Development, Model Technology

Standards play a key role in expanding markets and reducing new-technology adoption risks for producers and consumers. Accellera, a standards organization formed to drive the development and use of language-based design automation standards that benefit electronic systems companies, semiconductor suppliers, and electronic design automation companies, has a proven process for creating standards through cross-industry collaboration. This presentation will focus on the benefits of standards and how they relate to, and contrast with, private-industry market development. A status report on recent work and current projects will be offered, and detailed collaboration with the IEEE will be explored.


Piparazzi: A Micro-architecture Approach to Functional & Performance Verification in Processors (Abstract)
Eyal Bin, IBM Haifa Labs

Functional verification is one of the major tasks in hardware design. It is typically performed at several abstraction levels, ranging from the highest architecture level down to the lowest RT level. This presentation concentrates on the mid-range level: the micro-architecture. At this level, the search for bugs in microprocessors is composed of two complementary phases. The first phase includes the generation of tests that will bring the microprocessor to various micro-architectural design corners. The second one consists of the comparison between the expected behavior and the actual simulated one. The Piparazzi technology is responsible for both the generation and the comparison phases. For the latter phase, the comparison is done at two levels: the architecture and the micro-architecture levels. Architecture comparison is done by comparing architecture resources, such as memory and register contents. Micro-architecture comparison focuses on checking that the various micro-architecture mechanisms function correctly, and therefore testing the cycle accuracy of the design.

A bug uncovered by using micro-architecture comparison on a specific test is either pure performance or architectural. A pure performance bug is one that slows down or speeds up the microprocessor by one or more cycles, but cannot cause an illegal architectural state. An architectural bug is one for which there exists a test that causes an illegal architectural state. It should be observed that such architectural bugs are not directly revealed by the test at hand (otherwise, they would have been caught by Piparazzi's architectural comparison), but there clearly exist other tests in which the bug will cause an architectural mismatch. Micro-architectural (timing) comparison is important because architectural comparison cannot reveal performance bugs. Moreover, in many cases it is difficult to reveal the architectural aspects of bugs, while the micro-architectural aspects occur more frequently.

Piparazzi technology consists of three major parts: a Generic Test Plan (GTP), a coverage-driven generator of tests, and a comparator that compares the actual micro-architecture behavior with the expected one. The GTP is a set of coverage models, where each model is a cross-product of attributes representing interesting events for a specific internal mechanism of the microprocessor. These models are processor-independent. Piparazzi has models for the different sub-units and mechanisms of microprocessors, including pipelines, stalls, forwards, flushes, etc. These coverage models serve as input to the second part of Piparazzi: the generator.
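As a toy illustration of a coverage model as a cross-product of attributes, the sketch below uses invented attribute names and values (not Piparazzi's actual models) for a hypothetical pipeline-stall mechanism:

```python
from itertools import product

# Hypothetical attribute domains for a pipeline-stall coverage model.
attributes = {
    "instruction": ["load", "store", "branch"],
    "stall_cause": ["cache_miss", "raw_hazard"],
    "pipeline_stage": ["decode", "execute"],
}

def coverage_events(attrs):
    """Enumerate the cross-product of attribute values; each tuple is
    one coverage event the generator should target with a test."""
    names = list(attrs)
    for values in product(*(attrs[n] for n in names)):
        yield dict(zip(names, values))
```

This tiny model already defines 3 x 2 x 2 = 12 coverage events; real models cross many more attributes, which is why a generator is needed to hit each (legal) event automatically.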

The generator receives coverage models as input and generates a random architectural test for each (legal) event. The generator is model-based, and therefore its engine is processor-independent. The processor model used by the generator defines the cycle-accurate behavior of the specific microprocessor and is written in a dedicated language at both the architecture and micro-architecture levels. The generator uses this model to produce the actual test by translating the micro-architecture requests coming from the GTP into a random architectural test. This test, when simulated, brings the design to the requested internal state.

In addition, Piparazzi generates an expected architecture and micro-architecture behavior for the specific microprocessor and the specific test. The expected architectural results are used by the simulation environment to verify the architectural facilities.

The third and last part of Piparazzi is the comparator, which compares the expected micro-architecture behavior with the actual simulated behavior.

Using this technology, we search for bugs in mainstream high-end IBM microprocessor designs. So far, we have found several dozen bugs. Most of these bugs were found via micro-architecture comparison; in these cases, the bugs did not cause any discrepancy in the architectural facilities. Most of them are pure performance bugs, but some are architectural in the sense described above.


Keynote: Abstraction Refinement in Model Checking with application to C and Verilog (Abstract)
Edmund M. Clarke, Department of Computer Science, Carnegie Mellon University

Model checking techniques applied to large industrial circuits suffer from the state space explosion problem. The most important technique for addressing this problem is abstraction. A special form of abstraction, called predicate abstraction, has been applied successfully to large software programs, but it is rarely used for hardware. This talk adapts software model checking techniques to the verification of word-level models of circuits. Furthermore, the performance of predicate abstraction of circuits is improved as follows: 1) We partition the abstraction problem by forming subsets of the predicates. The resulting abstractions are coarser, but the computation of the abstract transition relation becomes faster. We identify lazy abstraction as a special case of predicate partitioning. 2) We use weakest preconditions of circuit transitions to obtain new predicates during abstraction refinement. These techniques have not been applied to circuits before. We provide experimental results on publicly available benchmarks to demonstrate the power of this approach.
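A minimal sketch of the partitioning step (1), assuming a fixed chunk size as the partitioning policy; the actual partitioning heuristics of the talk are not described here, and the function name is invented:

```python
def partition_predicates(predicates, chunk):
    """Split the predicate set into fixed-size subsets; each subset is
    then abstracted on its own. The joint abstraction becomes coarser,
    but each abstract transition relation is cheaper to compute."""
    return [predicates[i:i + chunk] for i in range(0, len(predicates), chunk)]
```

The trade-off is visible in the sketch: with chunk size 1 every predicate is abstracted independently (cheapest, coarsest), while a single chunk containing all predicates recovers the usual, most precise but most expensive abstraction.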


Debugging complex FPGA platforms (Abstract)
Ivo Bolsens, Vice President and Chief Technology Officer, Xilinx

Process technology and architecture innovation are the two engines that have fueled spectacular advancement in FPGAs over the past 10 years. During this period, the price of FPGAs has dropped by two orders of magnitude, their logic capacity has increased by two orders of magnitude, and their performance has increased by one order of magnitude. Whereas ASICs buck the tide of process technology, FPGAs ride it. Deep submicron effects are breaking down the traditional modular design flow of SoC architectures.

A growing portion of the design time is spent dealing with deep submicron effects, at the expense of the creative process of design authoring. Surveys show that, today, less than 20% of the design time for complex SoCs is spent on design authoring.

Programmable FPGA platforms give designers the benefits of deep submicron technology, but rather than focusing on getting the silicon to work, designers can focus on getting the design to work.

The mainstream use of multi-million-gate FPGA devices, however, poses challenges for verification methodologies that rely on externally probing package pins or board traces. Using FPGA logic that sources and captures data (functioning as a pattern generator/logic analyzer) allows FPGA users to debug multi-million-gate designs easily by eliminating the need for complex external test fixtures. Analyzer cores are integrated into the FPGA to provide real-time debug and verification capabilities via a standard JTAG port. This approach allows complex logic verification and debugging at full system speed.

