DVCon US 2022
February 28 - March 3, 2022 | Virtual
Join OneSpin: A Siemens Business in the following sessions:
Tuesday, March 1 | 15:00 - 17:00
Wednesday, March 2 | 10:00 - 12:00
How to Avoid the Pitfalls of Mixing Formal and Simulation Coverage (Oral/Lecture)
Driven by the need to objectively measure the progress of their verification efforts, and the relative contributions of different verification techniques, customers have adopted “coverage” as a metric. However, what exactly is being measured differs depending on the underlying verification technology in use. Consequently, simply merging coverage measurements from different sources, in particular blindly combining functional coverage from constrained-random simulations with coverage from formal analysis, can fool the end user into believing that more progress has been made, and/or that behaviors of importance have been observed, when neither is the case. In this paper we will first review what these forms of “coverage” are telling the user, and then show how to merge them in a manner that accurately reports status and expected behaviors.
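To make the pitfall concrete, here is a minimal, hypothetical sketch (not any tool's actual data model): simulation coverage records points that were actually *observed* in a test run, while formal results often record points proven *reachable*. A naive union treats the two as equivalent and inflates apparent progress; a source-aware report keeps the formal-only points in their own bucket so they still show up as work to be done.

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <string>

// Hypothetical, simplified coverage database for illustration only.
// "sim_observed" holds cover points hit during simulation; "formal_reachable"
// holds points a formal tool proved reachable (but never actually exercised).
struct CoverageDB {
    std::set<std::string> sim_observed;
    std::set<std::string> formal_reachable;

    // Naive merge: blindly union the two sources, as if a reachability
    // proof were the same thing as an observed behavior.
    std::set<std::string> naive_merge() const {
        std::set<std::string> merged = sim_observed;
        merged.insert(formal_reachable.begin(), formal_reachable.end());
        return merged;
    }

    // Source-aware view: points formal says are reachable but that no
    // simulation has ever hit. These still need stimulus, even though a
    // naive merge would report them as "covered".
    std::set<std::string> reachable_but_unobserved() const {
        std::set<std::string> diff;
        std::set_difference(formal_reachable.begin(), formal_reachable.end(),
                            sim_observed.begin(), sim_observed.end(),
                            std::inserter(diff, diff.begin()));
        return diff;
    }
};
```

With `sim_observed = {a, b}` and `formal_reachable = {b, c, d}`, the naive merge reports four of four points covered, while the source-aware view correctly flags `c` and `d` as never exercised.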
Thursday, March 3 | 9:00 - 11:00
The Best Verification Strategy You’ve Never Heard Of (Tutorial)
The latest data from the biennial Wilson Research Group Functional Verification survey show that, despite more than a decade of effort in establishing new verification methodologies and techniques, producing functionally correct ASIC- or FPGA-based electronic components remains a challenge. The median project schedule for both ASIC and FPGA projects is 10-12 months, which is not much time. Digging a little deeper, however, we see that over two thirds of the projects surveyed, both ASIC and FPGA, are completed behind schedule. A cynical observer might conclude that this also shows that one third of the respondents are lying, but even if that is not the case, clearly there is a problem.

Looking at it another way, when we consider that only one third of ASIC projects achieve first-pass success while 83% of FPGAs have non-trivial bugs escape into production, it is not surprising that so many projects run behind schedule. The question then becomes: is the way to hit schedules simply to build respins into the plan to account for bugs getting through, and is it okay to live with the resulting longer schedules? Or is there a way to shrink the schedule instead of padding it? And if we have not managed that in over a decade of focusing almost exclusively on improving verification, what is left to do?

Of course, verification methodology has not remained static over the years, but then, neither have designs. The problem is that, as design complexity grows according to Moore's Law, verification complexity grows at a substantially greater rate. The Wilson Research Group study shows that, since 2007, the mean peak number of design engineers working on a project has increased by 32%, while the mean peak number of verification engineers has increased by 143%. Clearly this is unsustainable. So, what is to be done? And when the huge amount of software functionality that must also be verified is taken into account, is there any hope at all?
This tutorial will approach the question of design quality from a unique perspective: instead of trying to verify the bugs out, what if we could avoid putting them in in the first place? The first question to answer is: who is ultimately responsible for functional quality? To answer it, we will explore the design and verification landscape, including the sometimes competing but hopefully complementary roles of design and verification teams throughout the process. Since there is so much software content in designs today, we must have a way of verifying a system pre-silicon to make sure it will work. But the software integration phase is not the place to be debugging hardware failures; rather, we need a way to ensure bug-free hardware well before the integration phase.

We will explore two approaches to minimizing hardware bugs. The first is to apply various static and formal analysis techniques to the design to eliminate bugs before simulation even starts. We will walk through several bug-avoidance approaches, such as linting, automatic formal applications, and other static analysis techniques, and see how this proactive strategy is a clear win over the bug-detection-and-correction cycle you may be used to. The second rests on the well-known observation that, regardless of the programming language used, the number of bugs tends to be proportional to the number of lines of code written. By designing and verifying at a higher level of abstraction, using fewer lines of code, we can therefore minimize the number of bugs introduced. We will step through how a design and verification flow using C++ and high-level synthesis can reduce verification time, and we will quantify the resulting improvement in design quality. Additionally, once the abstract HLS design is verified, an automated RTL generation process and a verification re-use methodology deliver RTL that is correct-by-construction, avoiding the introduction of additional bugs as we move toward integration.
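As a flavor of what "fewer lines at a higher abstraction level" means in practice, here is a hypothetical algorithmic-level C++ kernel of the kind an HLS flow could synthesize (the function and tap count are illustrative, not from the tutorial): a 4-tap FIR filter whose behavior an RTL implementation would spread across shift registers, a MAC datapath, and control logic.

```cpp
#include <array>
#include <cstdint>

// Illustrative 4-tap FIR filter at the algorithmic level. The whole
// behavior fits in a handful of lines: fewer lines written means fewer
// opportunities to introduce bugs than in the equivalent hand-coded RTL.
constexpr std::size_t TAPS = 4;

int32_t fir_step(std::array<int32_t, TAPS>& delay_line,
                 const std::array<int32_t, TAPS>& coeffs,
                 int32_t sample) {
    // Shift the newest sample into the delay line.
    for (std::size_t i = TAPS - 1; i > 0; --i)
        delay_line[i] = delay_line[i - 1];
    delay_line[0] = sample;

    // Multiply-accumulate across all taps.
    int32_t acc = 0;
    for (std::size_t i = 0; i < TAPS; ++i)
        acc += coeffs[i] * delay_line[i];
    return acc;
}
```

Because the same C++ runs natively, the function can be verified with ordinary software tests before any RTL exists; the generated RTL then only needs to be shown equivalent to this already-verified source.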
Once the various blocks of the system are ready for integration, we need a platform that gives us the capacity to execute what could be a huge design and, just as importantly, the speed to run near-production software while providing the visibility to confirm that our goals are being met. We will see how a unified hardware-assisted verification system, one that can take you all the way from hybrid virtual platforms to emulation and FPGA-based prototyping, can achieve your quality goals in one seamless environment.