
Uses And Limitations Of AI In Chip Design

By Ed Sperling, Semiconductor Engineering

Raik Brinkmann, president and CEO of OneSpin Solutions, sat down with Semiconductor Engineering to talk about AI changes and challenges, new opportunities for using existing technology to improve AI, and vice versa. What follows are excerpts of that conversation.


[…]

SE: What’s changing in AI?

Brinkmann: There are a couple of big changes underway. One involves AI in functional safety, where you use context to prove the system is doing something good and that it’s not going to fail.
Basically, it’s making sure that the data you use for training represents the scenarios you need to worry about. When you have many input vectors, it’s difficult to cover all the relevant cases. People are looking into how to analyze the data itself for gaps and for the distribution of values across those vectors. I’ve seen some good research about this, and some papers talking about verification and different angles to that. People are taking this seriously, and we will see a lot of interesting research as a result.
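As a rough illustration of the kind of gap analysis described above, the hypothetical Python sketch below bins each input dimension of a training set and flags under-covered regions. The bin counts, thresholds, and synthetic data are assumptions made for the example, not anything from OneSpin's tooling or the research Brinkmann mentions.

```python
# Hypothetical sketch: flag under-covered regions of a training set by
# binning each input dimension and counting samples per bin.
# Bin count, threshold, and data are illustrative assumptions.
import numpy as np

def coverage_gaps(samples: np.ndarray, bins: int = 10, min_count: int = 5):
    """Return (dimension, bin_index, count) triples where coverage falls below min_count."""
    gaps = []
    for dim in range(samples.shape[1]):
        counts, _ = np.histogram(samples[:, dim], bins=bins)
        for idx, count in enumerate(counts):
            if count < min_count:
                gaps.append((dim, idx, int(count)))
    return gaps

# Example: 1,000 synthetic sensor readings over 3 input dimensions.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))
for dim, bin_idx, count in coverage_gaps(data):
    print(f"dimension {dim}, bin {bin_idx}: only {count} samples")
```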

SE: What’s the second big shift?

Brinkmann: The use of AI at the edge. It’s a trend that we predicted earlier. Edge devices are capturing data at the edge. You see this with Amazon, Microsoft and others bringing edge devices into the home. That’s part of it. So is Apple’s self-driving car initiative. It’s not clear if they will have a self-driving car anytime soon, but there is serious research going on. And they only do things when they think it will work out and there’s a good chance for success.

SE: What do you see as the biggest challenges there?

Brinkmann: Who owns the data and how to secure it. Different companies are pursuing different goals. One piece of this will involve security at the device level. A second will be security throughout the chain of data to make sure it’s not manipulated along the way. If you are pushing data from sensors into the cloud to improve machine learning or other algorithms, which you then put back onto a device, you want to make sure that data doesn’t get compromised along the way. There is research into using blockchains for that. Every device is going to add a little more to it, and you can verify the data hasn’t been compromised because it’s been distributed. At the same time, people find ways of saying, ‘Okay, this all may be true, but I’m not going to give you all the data. So I own a piece of the chain of data and you own something else.’
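The chained-verification idea can be approximated with a simple hash chain: each device appends its contribution together with a hash covering everything before it, so later tampering changes every downstream hash. The sketch below is a minimal illustration assuming SHA-256 and JSON payloads, both of which are choices made here rather than details from the interview.

```python
# Minimal sketch of a hash chain for data provenance: each device appends its
# payload plus a hash covering the previous link, so downstream consumers can
# detect tampering anywhere along the chain. Payload format and SHA-256 are
# illustrative assumptions, not a specific blockchain implementation.
import hashlib
import json

def append_link(chain: list, device_id: str, payload: dict) -> list:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"device": device_id, "payload": payload,
                       "prev": prev_hash}, sort_keys=True)
    chain.append({"device": device_id, "payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for link in chain:
        body = json.dumps({"device": link["device"], "payload": link["payload"],
                           "prev": prev_hash}, sort_keys=True)
        if link["prev"] != prev_hash or \
           hashlib.sha256(body.encode()).hexdigest() != link["hash"]:
            return False
        prev_hash = link["hash"]
    return True

chain = []
append_link(chain, "sensor-01", {"temp_c": 41.2})
append_link(chain, "gateway-07", {"batch": 3})
print(verify(chain))                    # True
chain[0]["payload"]["temp_c"] = 99.9
print(verify(chain))                    # False: the recomputed hashes no longer match
```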

SE: This starts crossing into the privacy and IP protection domains, right?

Brinkmann: Yes, because people want to retain that knowledge. There was an example I saw recently involving 3D printing, where they use a Hyperledger infrastructure. Basically, they want to build a system where someone puts out a requirement for certain components that are going to be printed in 3D, and then someone else designs the component. So that’s IP that you want to protect. But in the end, you still have to send the data to the factory.

SE: What’s the solution? Partitioning and encryption?

Brinkmann: Yes, exactly. You know where the pieces are, which work products everyone needs to see in this process, which things you can and need to protect, and what the lifetime of the data is. That’s what you can model, which is quite interesting.
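One simple way to picture "partition, encrypt, and model the lifetime of the data" is to give each party its own key and enforce a time-to-live when a work product is opened. The sketch below uses Fernet tokens from the third-party Python cryptography package; the partition names, payload, and lifetime are made up for illustration and are not from the interview.

```python
# Illustrative sketch: encrypt each partition's work product with its own key
# and enforce a lifetime at decryption time. Requires the third-party
# "cryptography" package; partition names and TTL are invented for the example.
from cryptography.fernet import Fernet

# One key per party/partition, so each participant only sees its own pieces.
keys = {"designer": Fernet.generate_key(), "factory": Fernet.generate_key()}

def protect(partition: str, work_product: bytes) -> bytes:
    return Fernet(keys[partition]).encrypt(work_product)

def open_within_lifetime(partition: str, token: bytes, ttl_seconds: int) -> bytes:
    # Raises InvalidToken if the key is wrong or the token is older than ttl_seconds.
    return Fernet(keys[partition]).decrypt(token, ttl=ttl_seconds)

token = protect("designer", b"3D component geometry")
print(open_within_lifetime("designer", token, ttl_seconds=3600))
```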

SE: So how do you trace down partitioning from a formal verification perspective?

Brinkmann: Right now we are looking into hardware, and we are starting to look into firmware and other things. I’m not quite sure how this plays into this blockchain and connected world. For the time being, we will be focusing on the individual pieces rather than the big picture. I don’t see us verifying the whole chain of events in the system unless we use a different model, not a hardware model but something that captures the whole process in a different language that we can potentially support and apply formal technology to.

SE: That’s an interesting challenge.

Brinkmann: When we look at safety in SoCs, which we are targeting with our tools, the first thing that happens is we break the chip down into manageable parts. From there, formal will give valuable input to the whole-system verification.

SE: One of the problems with those chips, particularly the ones used for safety, is that the AI running on those systems is basically opaque. Is there any progress in understanding what can go wrong with those algorithms?

Brinkmann: The machine learning guys are working on that. They want to know what this thing is doing, and they are developing inspection and back-propagation algorithms that can be analyzed to try to understand what the system has learned. That’s something they’re trying to do, because if you don’t know exactly what decisions are based on, you can’t really get a good feeling about whether it’s safe or not.

SE: That’s still not transparency, right?

Brinkmann: No, but at least they’re trying to get some visibility.
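A very simple form of the inspection idea is sensitivity analysis: perturb each input slightly and measure how much the decision moves. The toy sketch below does this by finite differences on an invented single-neuron model; it merely stands in for the far more involved back-propagation-based techniques being researched.

```python
# Rough illustration of "inspection" via sensitivity: estimate which inputs a
# trained model's decision depends on most, using finite differences.
# The tiny single-neuron model is invented purely for this example.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(5,))                    # stand-in for trained weights

def model(x: np.ndarray) -> float:
    return float(1 / (1 + np.exp(-x @ W)))   # one logistic "neuron"

def saliency(x: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Per-input sensitivity of the model output, by finite differences."""
    base = model(x)
    grads = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        grads[i] = (model(bumped) - base) / eps
    return grads

x = rng.normal(size=(5,))
print("decision:", model(x))
print("inputs ranked by influence:", np.argsort(-np.abs(saliency(x))))
```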

SE: Will formal play a role here?

Brinkmann: We’ll have to see. It’s certainly possible to verify algorithms and prove things mathematically with proof systems. Doing a full formal analysis of the chain of transformations is going to be very challenging. If you’re looking at how you take a machine learning algorithm into hardware, there are multiple steps that are actually not equivalent, in a sense. You’re losing information. Let’s say you have trained your algorithm on a compute farm in floating point, and now you switch to fixed point. It’s not equivalent. It’s not the same by construction. Hopefully it will give you the same response to the data by some measures, but there may be some degradation, and there will be some differences in some patterns. So it becomes a statistical expression of how equivalent these two models are.

SE: Basically what is random in this distribution, right?

Brinkmann: Right, and then it’s not the same. If you compute it with integers, it’s different than doing it with floating point. But you still want to retain a certain amount of what you have proven to be true in the original model in the reduced one. And then you go down to hardware and say, ‘Okay, maybe I can squeeze the precision further in some areas that I’m not so interested in, or can I even go to very low precision in some places so I can map it better to hardware.’ There’s no equivalence in the formal sense. You have to redefine that concept. And only once you have done that can you automate such a process with formal analysis.
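The "statistical equivalence" point can be made concrete with a toy comparison of a floating-point model against a fixed-point version of itself: the outputs are not bit-identical, so the check becomes a tolerance over a distribution of inputs. Everything in the sketch below (the model, the precision, the tolerance) is an assumption chosen for illustration.

```python
# Toy sketch of "statistical equivalence" between a floating-point model and a
# fixed-point (quantized) version: the two are not bit-equivalent, so we
# measure how far their outputs drift over many inputs. All numbers are made up.
import numpy as np

def quantize(x: np.ndarray, frac_bits: int = 8) -> np.ndarray:
    """Round to a fixed-point grid with frac_bits fractional bits."""
    scale = 2 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 8))
inputs = rng.normal(size=(1000, 16))

float_out = inputs @ weights                      # reference floating-point model
fixed_out = quantize(inputs) @ quantize(weights)  # fixed-point approximation

err = np.abs(float_out - fixed_out)
print(f"max abs error:  {err.max():.6f}")
print(f"mean abs error: {err.mean():.6f}")
# The pass/fail criterion becomes statistical, e.g. "what fraction of outputs
# stay within tolerance" rather than strict formal equivalence:
print(f"within 1e-2:    {(err < 1e-2).mean():.1%}")
```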
