Cyberattacks on artificial intelligence (AI) systems are increasing, so users need to understand these systems’ vulnerabilities and be prepared to limit the damage if an attack succeeds, according to a new report by the National Institute of Standards and Technology (NIST).
There is not yet a foolproof method of preventing attacks on vulnerable AI systems, said computer scientists from NIST and other experts on adversarial machine learning.
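To make the threat concrete, the following toy sketch illustrates one well-known class of attack from adversarial machine learning, an evasion attack in the style of the fast gradient sign method. The classifier, weights, and perturbation budget here are all invented for illustration and are not taken from the NIST report; they only show how a small, targeted change to an input can flip a model's decision.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w @ x + b > 0.
# The weights and bias are arbitrary illustrative values.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model classifies as class 1
# (score: 2.0 - 1.0 + 0.5 + 0.1 = 1.6 > 0).
x = np.array([2.0, 0.5, 1.0])

# Evasion attack: nudge each feature a small amount (eps) in the
# direction that most decreases the score, i.e. against sign(w).
eps = 0.6
x_adv = x - eps * np.sign(w)

# The perturbed score is 1.4 - 2.2 + 0.2 + 0.1 = -0.5, so the
# prediction flips to class 0 despite the small change per feature.
```

Real evasion attacks work the same way against deep networks, using the gradient of the model's loss instead of fixed linear weights, which is part of why the report's authors caution that no defense is yet foolproof.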

