In the news

Speeding Up Neural Networks

By Ed Sperling, Semiconductor Engineering

Neural networking is gaining traction as the best way of collecting and moving critical data from the physical world and processing it in the digital world. Now the question is how to speed up this whole process.

But it isn’t a straightforward engineering challenge. Neural networking itself is in a state of almost constant flux and development, which makes it a moving target. There are more than 20 different types of neural networks today, and some are in favor one month and out of favor the next. In addition, there is no clear answer as to which type of processor is best. The commonly accepted metrics—work done per unit of energy, per millisecond, and for the lowest possible cost—still apply, but they can be weighted differently at different times in the development cycle.


What’s different about neural networking is that these networks can be trained to become more efficient, a pattern that mirrors the development of the human brain. An infant has more neurons than an adult; as the brain matures, it eliminates unused connections in a process known as synaptic pruning. Likewise, a well-designed neural network should become more efficient and more capable over time.
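The pruning idea can be illustrated with a minimal magnitude-based sketch: weights below a threshold are zeroed out, mimicking the elimination of weak connections. This is an illustrative example, not taken from the article; the function name and the quantile-based threshold are assumptions.

```python
import numpy as np

def prune_weights(weights, fraction=0.5):
    """Zero out the smallest-magnitude weights, keeping the strongest connections."""
    threshold = np.quantile(np.abs(weights), fraction)  # cutoff magnitude
    mask = np.abs(weights) >= threshold                 # True where a weight survives
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = prune_weights(w, fraction=0.5)  # roughly half the weights become zero
```

In practice, pruning like this is typically followed by retraining, so the remaining connections compensate for those that were removed.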

“Networks are trained for image processing and language processing,” said Raik Brinkmann, CEO of OneSpin Solutions. “Deep neural networks consist of several layers of networks. There is a race on for this technology, using multi-dimensional constructs.”

Brinkmann noted that the big problem is still the volume of data. “You want to go from a von Neumann to a data flow architecture. But what is the right architecture?”

So far that isn’t clear, and it probably won’t be for some time. No matter how far scientists and engineers have come with neural networking, and its application to machine learning and artificial intelligence, there are many years of work ahead.

Read more

Design Complexity Drives New Automation

By Ann Steffora Mutschler, Semiconductor Engineering

It now takes an entire ecosystem to build a chip—and lots of expensive tools.

As design complexity grows, so does the need for every piece in the design flow—hardware, software, and IP, as well as the broader ecosystem—to be tied together more closely.


Dave Kelf, vice president of marketing at OneSpin Solutions, agreed that one of the most dramatic changes in the development flow relates to verification techniques. “Simulation has given way to the three-legged stool of simulation, emulation and formal verification, each with its own attributes and issues. Tying these technologies into one common methodology is complex, to say the least. Common coverage methods provide a cornerstone for evaluating progress across the three solutions, and indeed the Accellera UCIS (Unified Coverage Interoperability Standard) Working Group jumped on this idea to extend coverage across platforms and vendors.”

Read more

Whatever Happened To High-Level Synthesis? | Experts at the Table, Part 1

By Brian Bailey, Semiconductor Engineering

What progress has been made in high-level synthesis, and what can we expect in the near future?

A few years ago, high-level synthesis (HLS) was probably the most talked-about emerging technology. It was to be the heart of a new Electronic System Level (ESL) flow. Today, we hear much less about the progress being made in this area.

Semiconductor Engineering sat down to discuss this with Bryan Bowyer, director of engineering for high level design and verification at Mentor, a Siemens Business; Dave Kelf, vice president of marketing for OneSpin Solutions; and Dave Pursley, product manager for HLS at Cadence. What follows are excerpts from the conversation.

Read more

The Great Machine Learning Race

By Ed Sperling, Semiconductor Engineering

Processor makers, tools vendors, and packaging houses are racing to position themselves for a role in machine learning, despite the fact that no one is quite sure which architecture is best for this technology or what ultimately will be successful.

[...] The new wrinkle is that there is more data to process, and movement across skinny wires that are subject to RC delay can affect both performance and power.

“There is a multidimensional constraint to moving data,” said Raik Brinkmann, CEO of OneSpin Solutions. “In addition, power is dominated by data movement. So you need to localize processing, which is why there are DSP blocks in FPGAs today.”

This gets even more complicated with deep neural networks (DNNs) because there are multiple layers of networks, Brinkmann said.

Read more

Podcast: OneSpin's Dave Kelf on Lauro Rizzatti's "Verification Perspectives"

Lauro Rizzatti welcomes Dave Kelf, OneSpin's VP of marketing, as a guest on his new podcast. Lauro picks Dave's brain about the state of electronic design verification and analysis. Topics covered in their discussion include:

- Dave's experience with electronic design analysis tools

- Factors driving the evolution of design verification

- Trends in verification today

- Possibilities (and challenges) of integrating formal and emulation

- Formal for bug hunting

- Role of mobile and automotive in driving verification evolution

- Dave's predictions for the future of verification

Read more

Challenges Grow For IP Reuse

By Ann Steffora Mutschler, Semiconductor Engineering

Methodologies for integration become a competitive tool as complexity and possible options skyrocket.

As chip complexity increases, so does the complexity of IP blocks being developed for those designs. That is making it much more difficult to re-use IP from one design to the next, or even to integrate new IP into an SoC.

What is changing is the perception that standard IP works the same in every design. Moreover, well-developed methodologies for reuse can give a chipmaker a competitive advantage. The final shape of the design depends on various factors, such as application demand, and interfacing or power requirements, all of which increase the number of possible configurations.


And that’s not easy because once the IP is changed from one product to another, the register map goes out the window. In order to maintain the register map, the IP must be managed intelligently.

Dave Kelf, vice president of marketing at OneSpin, said that in some cases, management is happening in much the same way that software engineers manage software blocks using a repository with version control and multitasking. “With globalization, more engineers are getting at the IP within an organization, so the repository has to be available for more than one team working on an IP block. That’s a big issue.”

As for how IP reuse will evolve, time will tell. But with safety and security now being built in where they weren’t before, and with IP blocks growing so large that they call into question the very definition of an IP block, one can imagine a hierarchy of IP, he said.

Read more

Quality Issues Widen

By Ed Sperling, Semiconductor Engineering

Rising complexity, diverging market needs and time-to-market pressures are forcing companies to rethink how they deal with defects.

“Die size is increasing while feature structure size and voltage levels are decreasing,” said Raik Brinkmann, CEO of OneSpin Solutions. “So you need less energy to create an issue. That requires more error correction and TM (triple modular) redundancy. But it also makes it harder for design and verification.”

Brinkmann noted that machine learning somewhat relieves the situation: noise is part of the algorithm and robustness is built in, so certain types of faults amount to nothing more than noise and cause no harm.
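The triple modular redundancy (TMR) mentioned above can be sketched as a bitwise majority voter: three redundant copies of a value are compared, and any single corrupted copy is outvoted by the other two. This is a generic illustration of the technique, not code from the article.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies of a word.
    For each bit, the value held by at least two copies wins, so a
    single-copy upset is masked."""
    return (a & b) | (a & c) | (b & c)

stored = 0b1011
corrupted = 0b1111  # one copy suffers a single bit flip
recovered = tmr_vote(stored, corrupted, stored)  # voter restores 0b1011
```

In hardware, the same expression is typically a small combinational voter placed after three replicated registers or logic blocks; the cost is the 3x area and power that makes TMR a burden for design and verification.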

Read more

Press Contact

Nanette Collins
+1 617 437 1822