
How To Improve Cybersecurity for Artificial Intelligence



In January 2017, a group of artificial-intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence (AI), later dubbed the Asilomar AI Principles. The sixth principle states that “AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.” Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. Verifying these properties is complicated by the rapid pace of the field’s development and by the complexity of AI deployments in health care, financial trading, transportation, and translation, among other domains.

Much of the discussion to date has centered on how beneficial machine-learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks. Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling malware to adapt rapidly to the restrictions imposed by countermeasures and security controls. These are also the contexts in which many policymakers most often think about the security impacts of AI. For instance, a 2020 report on “Artificial Intelligence and UK National Security” commissioned by the UK’s Government Communications Headquarters highlighted the need for the United Kingdom to incorporate AI into its cyberdefenses to “proactively detect and mitigate threats” that “require a speed of response far greater than human decision-making allows.”

A related but distinct set of issues concerns how AI systems can themselves be secured, not just how they can be used to augment the security of our data and computer networks. The push to implement AI security solutions to respond to rapidly evolving threats makes the need to secure AI itself even more pressing; if we rely on machine-learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse. Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms but also increase the potential for each successful attack to have more severe consequences.

This policy brief explores the key issues in attempting to improve cybersecurity and safety for artificial intelligence as well as roles for policymakers in helping address these challenges. Congress has already indicated its interest in cybersecurity legislation targeting certain types of technology, including the Internet of Things and voting systems. As AI becomes a more important and widely used technology across many sectors, policymakers will find it increasingly necessary to consider the intersection of cybersecurity with AI. This paper describes some of the issues that arise in this area, including the compromise of AI decision-making systems for malicious purposes, the potential for adversaries to access confidential AI training data or models, and policy proposals aimed at addressing these concerns.
