
Putting Limits On What AI Systems Can Do

By Ed Sperling

Developing these systems is just part of the challenge. Making sure they only do what they’re supposed to do may be even harder.

New techniques and approaches are starting to be applied to AI and machine learning to ensure these systems function within acceptable parameters and do only what they were designed to do.


“The best way to control AI is to have a second system in place that acts like a safety control mechanism, like we do in hardware or software for functional safety today,” said Raik Brinkmann, CEO of OneSpin Solutions. “What are the things you want to protect against? What are the things you don’t want AI to do? What are the bad situations you want to catch? You want to try to mitigate those risks. You cannot fully control AI, because it’s too complex, but you can mitigate risks. And if we could come up with some methodologies and standards to address this, it would be helpful. That could include prepared scenarios that everyone would want to check for.”
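The approach Brinkmann describes, a second, simpler system that vetoes unsafe behavior by checking against prepared scenarios, can be sketched as a runtime safety monitor. The code below is an illustrative assumption, not any vendor's implementation; all names (`SafetyMonitor`, `Scenario`, the example checks) are hypothetical.

```python
# Sketch of a runtime safety monitor: an independent checker that reviews
# an AI system's proposed actions against a list of prepared "bad
# scenarios" before they are executed. All names are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Scenario:
    """A prepared scenario to check for: a description plus a predicate."""
    description: str
    violates: Callable[[Dict], bool]  # returns True if the action is unsafe


class SafetyMonitor:
    """Second system that vetoes any action matching a bad scenario."""

    def __init__(self, scenarios: List[Scenario]):
        self.scenarios = scenarios

    def review(self, action: Dict) -> Tuple[bool, List[str]]:
        """Return (allowed, descriptions of violated scenarios)."""
        violations = [s.description for s in self.scenarios
                      if s.violates(action)]
        return (len(violations) == 0, violations)


# Example prepared scenarios for a hypothetical autonomous controller.
scenarios = [
    Scenario("speed above safe limit",
             lambda a: a.get("speed", 0) > 120),
    Scenario("actuator command outside calibrated range",
             lambda a: not (0.0 <= a.get("throttle", 0.0) <= 1.0)),
]

monitor = SafetyMonitor(scenarios)

# An unsafe proposal is caught; the monitor reports which scenario matched.
allowed, why = monitor.review({"speed": 150, "throttle": 0.9})
```

The point of the design is that the monitor is far simpler than the AI system it supervises, so it can be verified exhaustively even when the AI itself cannot, which mirrors how safety mechanisms work in functional-safety hardware and software today.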

