NIST report: Mitigating the risks of cyberattacks on AI systems


Cyberattacks on artificial intelligence (AI) systems are increasing, so it’s important that users understand these systems’ vulnerabilities and take steps to limit the damage if they are attacked, according to a new report from the National Institute of Standards and Technology (NIST).

There isn’t yet a foolproof method of preventing attacks on vulnerable AI systems, said computer scientists from NIST and other experts on adversarial machine learning.

Their 106-page report, published Jan. 4, describes common types of attacks, the vulnerabilities in AI systems they exploit, and mitigations that can be employed when attacks occur. The report is intended to be paired with NIST’s AI Risk Management Framework, a voluntary framework that gives AI developers and users guidance on managing risks, including cybersecurity risks, in AI systems.
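
Among the attack classes the report covers are evasion attacks, in which an attacker slightly perturbs an input so a model misclassifies it. As a rough illustration of that idea (a generic sketch, not code from the report), the following Python snippet applies the well-known fast gradient sign method to a PyTorch classifier; the untrained stand-in model, random input, and label are placeholders for a real deployed system.

    # Sketch of an evasion attack: fast gradient sign method (FGSM).
    # The model, input, and label below are stand-ins for illustration only.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in classifier; in practice this would be a trained model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    loss_fn = nn.CrossEntropyLoss()
    x = torch.rand(1, 1, 28, 28)   # stand-in input "image"
    y = torch.tensor([3])          # its assumed true label
    epsilon = 0.1                  # perturbation budget

    # Compute the gradient of the loss with respect to the input itself.
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()

    # Step in the direction that increases the loss; clamp to valid pixel range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

Against a trained model, a perturbation this small is typically imperceptible to a human yet can flip the model’s prediction, which is what makes evasion attacks hard to defend against.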
