Balancing Risk and Reward: AI Risk Tolerance in Cybersecurity


This article is part of a series of written products inspired by discussions from the R Street Institute’s Cybersecurity-Artificial Intelligence Working Group sessions. Visit the group’s webpage for additional insights and perspectives from this series.

The rapid advancement of artificial intelligence (AI) underscores the need for a nuanced governance framework that actively engages stakeholders in defining, assessing, and managing AI risks. A comprehensive understanding of risk tolerance is essential; it involves delineating the risks deemed acceptable in pursuit of AI’s benefits, identifying the entities responsible for defining those risks, and clarifying the processes by which risks are assessed and then accepted or mitigated.

The exercise of assessing risk tolerance also creates the necessary space for stakeholders to question the extent to which regulatory intervention is needed over less restrictive alternatives and supplements, such as issuing recommendations, sharing best-practice guidance, and launching awareness campaigns. The clarity gained through this exercise also sets the stage for our assessment of three risk-based approaches to AI in cybersecurity: implementing risk-based AI frameworks; creating safeguards in AI design, development, and deployment; and advancing AI accountability by updating legal standards.

1. Implementing Risk-Based AI Frameworks

Risk-based cybersecurity frameworks provide a structured and systematic approach for organizations to identify, assess, and manage the evolving risks associated with AI systems, models, and data. The National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF) is one notable example of a risk-based AI framework that builds upon established cyber and privacy frameworks to aid organizations in the responsible design, development, deployment, and use of AI systems. By outlining how AI risks differ from traditional software risks (for example, in the scale and complexity of AI systems), the NIST AI RMF helps organizations prepare for and navigate the evolving cybersecurity landscape with greater confidence, coordination, and precision. The voluntary nature of the NIST AI RMF also affords organizations the flexibility to tailor the framework to their specific needs and risk profiles. Congress has already taken steps to integrate the NIST AI RMF into federal agencies and AI technology procurement through its bipartisan, bicameral introduction of the Federal Artificial Intelligence Risk Management Act.

The NIST AI RMF is specifically designed for agility, which is essential for keeping pace with technological innovation and ensuring that safety and security protocols evolve in tandem with AI’s expanding role. Supplementing these efforts, the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence underscores the importance of continuous improvement and adaptation in AI governance by extending the framework’s reach and robustness. Initiatives like the newly established U.S. AI Safety Institute and the AI Safety Institute Consortium expand upon the NIST AI RMF’s core focus by strengthening the framework’s capacity to address safety and security challenges within the AI domain. By fostering collaboration and innovation, they exemplify the proactive steps taken to ensure the NIST AI RMF remains responsive to AI’s dynamic nature and implications.

2. Creating Safeguards in AI Development and Deployment

Safeguards ensure that AI systems operate within defined ethical, safety, and security boundaries. Some AI companies have already voluntarily committed to incorporating safeguards like rigorous internal and external security testing procedures before public release. This strategy is vital for maintaining user trust and ensuring responsible deployment and use of AI technologies.

However, acquiring the resources needed to implement these safeguards can be challenging for some organizations. Creating and implementing safeguards throughout AI development and deployment may also delay key innovation milestones. Furthermore, the risk of safeguards being bypassed or removed highlights a significant challenge in ensuring these protective measures remain effective and enduring. Meeting these challenges requires a mix of safeguarding strategies that are continuously evaluated and adapted to keep pace with the evolving AI technology landscape. Incorporating traditional cybersecurity principles like security-by-design and -default into AI systems can also enhance the efficacy of safeguarding strategies.

3. Advancing AI Accountability by Updating Legal Standards

The ongoing debate over AI accountability reflects the desire of some to update legal standards so they can address the complexities of AI-induced risks and incentivize stakeholders to proactively mitigate cybersecurity and safety risks. Most recently, the National Telecommunications and Information Administration released its AI Accountability Policy Report, which calls for increased transparency into AI systems and independent evaluations, among other recommendations. However, skeptics cite the need for balance and the potential harm that could arise if these efforts turn into a broad, top-down regulatory regime that inflicts hefty compliance and innovation costs.

Three proposed policy actions include:

  • Licensing Regime. Implement a licensing regime that requires organizations to obtain licenses or certifications demonstrating compliance with specified standards before developing or deploying AI systems and models. For “high-risk” AI applications like facial recognition, companies would need a government license confirming that they have rigorously tested their AI models for potential risks before deployment, disclosed instances of harm, and allowed independent third-party audits of the models. For example, the Food and Drug Administration’s review process for approving AI-based medical devices requires rigorous premarket evaluation and continuous oversight to ensure the devices adhere to safety and efficacy standards. This approach could enhance AI accountability by increasing transparency and oversight and by mandating that AI systems meet stringent security standards before deployment. Nonetheless, licensing regimes could stifle innovation by introducing bureaucratic delays and compliance costs, making it more difficult for smaller American companies and new entrants to succeed.
  • Company Liability Regime. This approach holds AI companies responsible if their systems and models cause harm or can be exploited to inflict harm. For example, Congress could hold AI companies liable through agency enforcement and private rights of action if their models or systems breach privacy. Increasing companies’ liability could incentivize them to prioritize AI safety, responsible AI, and cybersecurity considerations; advance accountability; and ensure compensation for damages caused by AI systems. Critics contend that rushing to implement company liability frameworks may introduce regulatory hurdles that stifle AI innovation and development and may be exploited for financial gain. Members of Congress have also proposed preemptively eliminating Section 230 immunity protections for generative AI technology. While proponents argue this would empower consumers with tools to protect themselves from harmful content created by generative AI, critics maintain it would impede free speech, hobble algorithmic innovation, and inflict devastating economic consequences on the United States.
  • Tiered Liability and Responsibility Regime. Drawing upon ideas put forth in existing national cybersecurity strategies, this proposed update involves establishing a legal framework that recognizes the varying degrees of risk and responsibility associated with different AI applications. Under such a regime, companies would face different levels of liability and responsibility depending on the nature and severity of the harm caused by their AI systems. For instance, a company developing AI-powered medical diagnosis systems might face higher liability standards and stricter reporting requirements than a company deploying AI for personalized advertising, given the potentially life-threatening consequences of misdiagnosis. While a tiered liability and responsibility regime provides flexibility and proportionality in assigning accountability, it may also reduce transparency and introduce ambiguity or inconsistency in legal enforcement. Moreover, larger companies may have an unfair advantage over new entrants and smaller companies.

While these proposed legal updates to advance AI accountability aim to have companies prioritize cybersecurity and AI safety considerations, each has drawbacks. These complexities underscore the need for continued discourse and informed decision-making among policymakers.

Conclusion

It is imperative to ensure that proposed and emerging policy actions to mitigate potential AI risks do not inadvertently stifle innovation or erode U.S. technological leadership. AI systems only exist within real-world parameters, and “when [they] go rogue, the implications are multidimensional.” To mitigate AI’s potential to introduce new or amplified cybersecurity threats, policymakers should think of AI systems holistically—as technology that is inextricably linked and integrated with both disparate and overlapping ethical and legal frameworks. Incorporating risk tolerance principles into AI regulation and governance solutions is essential to ensure we are equipped to balance AI’s considerable rewards with its potential risks.

   


