
In the news

Which Verification Engine?

Semiconductor Engineering logo

By Ed Sperling, Semiconductor Engineering

Experts at the Table, part 2: The real value of multiple verification engines; cloud-based verification gains some footing, particularly with internal clouds.

The cloud is a great approach. We had a cloud solution early on and it didn’t go anywhere. There were two issues we saw. One was the legal IP issue. Companies don’t want to send their IP to a cloud. Even if the engineers are willing, they don’t want to go to the company’s lawyers to make the case. However, companies have their own clouds now, which is very easy to do. You can have emulators in huge rooms with fans and that solves that problem. The big problem is the business model. Cloud provides a pay-per-use model, which is very effective for verification. You can get some core verification-based licenses, and then use pay-per-use for a bulge when you need the extra runs. We’ve employed that quite successfully recently. On the big data side, if you’re dealing with engines running very quickly and trying to speed them up, or if you’re dealing with a massive amount of data coming out of those engines, it’s still the same issue. You have to find smart ways of dealing with verification collaboratively between the customers and the folks building platforms. If you look at Portable Stimulus or formal, you can think of these as smart verification platforms that can be configured to solve problems, whether they’re working with post-processed data or processing with the engines directly. It doesn’t matter.

Read more

Doc Formal: The crisis of confidence facing verification III

Tech Design Forum logo

By Dr. Ashish Darbari, Tech Design Forum

In the first two parts of this series, I described the verification crisis, explained how it came about, and began to describe the pillars and, within them, components of a responsive design verification (DV) flow.

Part One defined a verification meta model as a way to describe the key aspects of the crisis and laid out high-level ideas for alleviating it.

Part Two considered two of the four main pillars of an optimized DV flow: ‘Requirements and Specifications’ and ‘Verification Strategy and Plan’.

Read more

The Week In Review: Design

Semiconductor Engineering logo

OneSpin returns with another holiday puzzle, this year challenging people to use formal tools to solve what may be the world’s hardest Sudoku grid. The deadline is Jan. 7th.
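
For readers curious what solving Sudoku with a formal tool actually involves, the sketch below shows one way the puzzle can be posed as a constraint-solving problem. It uses the open-source Z3 SMT solver from Python rather than OneSpin's tools, the helper name solve_sudoku is invented for the example, and the contest grid itself is not reproduced here.

# Minimal Sudoku-as-constraints sketch using the Z3 SMT solver (pip install z3-solver).
# The encoding is illustrative only; it is not OneSpin's contest setup.
from z3 import Solver, Int, Distinct, And, sat

def solve_sudoku(puzzle):
    """puzzle: 81-character string, row-major, with '0' or '.' for empty cells."""
    cells = [[Int(f"c_{r}_{c}") for c in range(9)] for r in range(9)]
    s = Solver()
    # Every cell holds a digit between 1 and 9.
    s.add([And(1 <= cells[r][c], cells[r][c] <= 9)
           for r in range(9) for c in range(9)])
    # Rows, columns, and 3x3 boxes each contain distinct digits.
    s.add([Distinct(cells[r]) for r in range(9)])
    s.add([Distinct([cells[r][c] for r in range(9)]) for c in range(9)])
    s.add([Distinct([cells[3*br + r][3*bc + c]
                     for r in range(3) for c in range(3)])
           for br in range(3) for bc in range(3)])
    # Pin down the given clues.
    for i, ch in enumerate(puzzle):
        if ch not in "0.":
            s.add(cells[i // 9][i % 9] == int(ch))
    if s.check() == sat:
        m = s.model()
        return [[m.evaluate(cells[r][c], model_completion=True).as_long()
                 for c in range(9)] for r in range(9)]
    return None  # the clues are contradictory

Calling solve_sudoku with any 81-character clue string returns a completed grid, or None if the clues are inconsistent.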

Read more

Big Challenges, Changes For Debug

Semiconductor Engineering logo

By Ann Steffora Mutschler, Semiconductor Engineering

Indeed, as silicon geometries continue to shrink, SoC platforms on single devices become larger and more complex, noted Dave Kelf, vice president of marketing at OneSpin Solutions. “The debug complexity of these devices increases exponentially with design size. Furthermore, the error conditions that can occur may be due to complex corner-case problems, which are hard to track down. Innovative debug techniques are required, and these might make use of unexpected alliances between different tools. For example, a fault that becomes apparent during SoC emulation can be debugged using bug-hunting techniques applied with a formal tool, with assertions being created that exhaustively analyze the specific condition. The continued shrinkage of geometries essentially results in inventive and diverse combinations of tools, stretching their capabilities to meet unexpected requirements.”
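
As a rough illustration of the bug-hunting flow Kelf describes, the sketch below unrolls a small, invented state machine for a bounded number of cycles and asks the Z3 SMT solver for an input sequence that reaches a forbidden state. Everything here, including the FSM, the depth, and the "never reach state 8" assertion, is made up for the example; a real flow would state the assertion over the RTL signals involved in the emulation failure, typically in SVA rather than Python.

# Toy bounded "bug hunt": exhaustively search for an input trace that
# violates an assertion on a small, invented state machine.
from z3 import Solver, BitVec, BitVecVal, If, Or, sat

DEPTH = 10  # bound on the number of cycles to explore

s = Solver()
state = [BitVec(f"state_{t}", 4) for t in range(DEPTH + 1)]
req = [BitVec(f"req_{t}", 1) for t in range(DEPTH)]

s.add(state[0] == BitVecVal(0, 4))  # reset value
for t in range(DEPTH):
    # Toy next-state function: a request advances the counter, otherwise hold.
    s.add(state[t + 1] == If(req[t] == 1, state[t] + 1, state[t]))

# The "assertion" says state 8 is never reached; asking the solver for a
# violation within the bound is the bug hunt.
s.add(Or([state[t] == 8 for t in range(DEPTH + 1)]))

if s.check() == sat:
    m = s.model()
    trace = [m.evaluate(r, model_completion=True).as_long() for r in req]
    print("Counterexample input trace:", trace)
else:
    print(f"No violation within {DEPTH} cycles.")

Here the solver does find a violating trace (eight consecutive requests), which is the kind of exhaustive, targeted analysis of a specific condition the quote refers to.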

Read more

Prototyping Partitioning Problems

Semiconductor Engineering logo

By Ann Steffora Mutschler, Semiconductor Engineering

That creates problems when those different pieces are put back together. “Designs have to be broken into FPGA-sized pieces, and then connected together, but those connections run at a much slower rate than inside of the FPGA,” said Dave Kelf, vice president of marketing at OneSpin Solutions. “There are tools that try to do this partitioning automatically. Other engineering groups do it manually, which means they are looking at their design and breaking it up themselves, which is a real pain. It’s very hard to do and leads to all kinds of trouble. Usually it means the rapid prototype’s design isn’t what the final design is going to be, which creates all sorts of other issues.”

Equivalency checking can help here. “If you’re dealing with a synthesis tool that does aggressive optimizations, the more aggressive they are, the more likely they are to break the design,” Kelf said. “Equivalency checking can make sure the design is still the same as it was from the RTL to the gates after the optimizations. If you have equivalency checking that will point out where the errors are, it allows you to use all of the optimizations while knowing the design functionality is preserved. Therefore, the optimizations can be switched on. Almost certainly there will be errors in there that can be quickly diagnosed, tweaked and figured out.”
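
As a toy version of what equivalence checking establishes, the sketch below uses the Z3 solver in Python to build a miter: an “RTL” 2:1 mux and an aggressively rewritten, mask-based version are either proven to agree on every input or the solver produces a concrete mismatching input. Both expressions are invented for illustration; production RTL-to-gate equivalence checkers operate on complete netlists, not one-line expressions.

# Toy miter-style equivalence check with the Z3 SMT solver.
from z3 import Solver, BitVec, If, SignExt, unsat

a, b = BitVec("a", 8), BitVec("b", 8)
sel = BitVec("sel", 1)

# "RTL" behaviour: a 2:1 mux written the obvious way.
reference = If(sel == 1, a, b)

# "Optimized" behaviour: the same mux rewritten with bitwise masking.
mask = SignExt(7, sel)              # 0xFF when sel == 1, 0x00 otherwise
optimized = (a & mask) | (b & ~mask)

# Miter: ask for any input on which the two versions differ.
s = Solver()
s.add(reference != optimized)

if s.check() == unsat:
    print("Equivalent: no input distinguishes the two implementations.")
else:
    print("Mismatch found:", s.model())

If the rewrite had a bug, the second branch would print the exact input values that expose it, which is the “point out where the errors are” behaviour described above.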

Read more

The Uncontrolled Rise Of Functional Safety Standards

Semiconductor Engineering logo

By Sergio Marchese, Semiconductor Engineering

Over the past 30 years, advances in software and hardware have made it possible to create sophisticated systems controlling crucial aspects of complex equipment, from roll and pitch control in aircraft to steering and braking in cars. The processes and methods defined in functional safety standards are crucial to ensure that these systems behave as expected and safely, even when certain parts, such as a microprocessor or other hardware component, malfunction. Standards often require strict processes to identify potential hazards in the final product, assess the associated risk, mitigate it with appropriate safety measures, and provide evidence that the residual risk is acceptable.

Read more

Doc Formal: The crisis of confidence facing verification II

Tech Design Forum logo

By Ashish Darbari, Tech Design Forum

In the first part of this series, I described the need for good verification meta models and the qualities that define them. Those are the right combination of verification technologies and methodologies deployed in the correct order, by a trained workforce of skillful design verification (DV) engineers, with the goal of minimizing risk by hunting down bugs before they reach silicon.

The key requirement within the meta model is that it outlines a methodology that describes an efficient DV flow to catch bugs as early as possible (shift-left) and gives the user certainty that desired quality levels will be met on schedule. In simple terms: Quality must not compromise productivity, and equally, productivity must not compromise quality.

Read more

Which Verification Engine? | Experts at the Table, Part 1

By Ed Sperling, Semiconductor Engineering

No single tool does everything, but do all verification tools have to come from the same vendor?

Semiconductor Engineering sat down to discuss the state of verification with Jean-Marie Brunet, senior director of marketing for emulation at Mentor, a Siemens Business; Frank Schirrmeister, senior group director for product management at Cadence; Dave Kelf, vice president of marketing at OneSpin Solutions; Adnan Hamid, CEO of Breker Verification Systems; and Sundari Mitra, CEO of NetSpeed Systems. What follows are excerpts of that conversation.

SE: What’s changing in verification?

[...]

Kelf: We have a long way to go. One of the things we see is that, on one hand, companies are traveling down the performance road so we can do more simulation, run more tests and do more things. But there’s a definite shift toward being smarter about verification. How do you take a formal engine and figure out more ingenious ways to try out all of these different states in cache coherency, for example? How can we apply that engine and rely less on just the speed of simulation or emulation? That’s the only way to get through some of the verification challenges, which will require the equivalent of 10X to 100X more performance over the next few years as we go to autonomous vehicles and machine learning. If you look at cache coherency, that’s nothing compared to the dynamic complexity of machine learning. Some of the core tools are there in verification, but we still have a long way to go. How do you take a formal engine and set it up so that it can solve some of these much bigger problems? We need to pull those together, and there are people working on that. Portable Stimulus is addressing how we use all of these different engines and apply more complex test scenarios and test patterns to the engines in the way those engines consume them best. If we can solve that problem, connecting these engines makes much more sense. To do that, verification also will have to be much more collaborative between the vendors and the users.
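
As a deliberately simplified picture of what “trying out all of these different states in cache coherency” means, the sketch below enumerates every reachable state of a toy two-cache MSI protocol in plain Python and checks a safety invariant in each one. The protocol model is invented for this example; a formal engine performs the same kind of exploration symbolically on the real RTL.

# Exhaustive exploration of a toy two-cache MSI protocol.
# Per-cache states: "I" (Invalid), "S" (Shared), "M" (Modified).
from collections import deque

def successors(state):
    """All states reachable in one step: cache i issues a read or a write."""
    for i in (0, 1):
        other = 1 - i
        # Read: the requester goes to S; a Modified peer is downgraded to S.
        read = list(state)
        read[i] = "S"
        if state[other] == "M":
            read[other] = "S"
        yield tuple(read)
        # Write: the requester goes to M; the peer is invalidated.
        write = list(state)
        write[i] = "M"
        write[other] = "I"
        yield tuple(write)

def check_coherence():
    start = ("I", "I")
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        # Safety property: at most one cache may hold the line in Modified.
        assert state.count("M") <= 1, f"coherence violated in {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

reachable = check_coherence()
print(f"Explored {len(reachable)} reachable states; the invariant holds in all of them.")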


Read more

Press Contact

Michelle Clancy
+1 503-702-4732