Cyberattacks on artificial intelligence (AI) systems are increasing, so it is important that users understand these systems' vulnerabilities and work to mitigate the damage if they are hit, according to a new report from the National Institute of Standards and Technology (NIST).

There is not yet a foolproof method for preventing attacks on vulnerable AI systems, according to computer scientists from NIST and other experts in adversarial machine learning.

Adrianne Appel writes regulatory news, policy, and trends for Compliance Week. She previously reported on policy developments for Bloomberg Law and Bloomberg Government.