Cybersecurity

What the EU AI act means for cybersecurity teams and organizational leaders


On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), establishing the world’s first extensive legal framework dedicated to artificial intelligence. This imposes EU-wide regulations that emphasize data quality, transparency, human oversight, and accountability. With potential fines reaching up to €35 million or 7 percent of global annual turnover, the act has profound implications for a wide range of companies operating within the EU.

The AI Act categorizes AI systems according to the risk they pose, with stringent compliance required for high-risk categories. This regulatory framework prohibits certain AI practices deemed unacceptable and meticulously outlines obligations for entities involved at all stages of the AI system lifecycle, including providers, importers, distributors, and users.

For cybersecurity teams and organizational leaders, the AI Act marks a vital transition phase requiring immediate and strategic action to align with new compliance norms. Here are several pivotal areas of focus for organizations:

1.    Conducting Thorough Audits of AI Systems

Periodic audits are mandated under the EU AI Act, requiring organizations to regularly verify that both the AI software providers and the organizations themselves are upholding a robust quality management system. This involves carrying out detailed audits to map and categorize AI systems in line with the risk categories specified by the Act.

These external audits scrutinize the technical elements of AI implementations and examine the contexts in which these technologies are used. This includes the practices surrounding data management to ensure adherence to the standards for high-risk categories. The audit process includes providing a report to the AI Software Provider and may involve further testing of AI systems that have been certified under the Union’s technical documentation assessment. A more specific scope of these audits has yet to be clarified.

It’s essential to recognize that Generative AI, integral to the supply chain, shares similar security vulnerabilities with other web apps. For these AI security risks, organizations can turn to established open-source resources. The OWASP CycloneDX provides a comprehensive Bill of Materials (BOM) standard, enhancing capabilities for managing AI-related cyber risks within supply chains.

Current frameworks such as OVAL, STIX, CVE, and CWE, which are designed to classify vulnerabilities and disseminate threat information, are being refined to enhance their relevance for emerging technologies like Large Language Models (LLMs) and Predictive Models.

As these enhancements progress, it is expected that organizations will use these well established and known systems for AI models too. Specifically, CVE will be utilized for the identification of vulnerabilities, while STIX will play a crucial role in the distribution of cyber threat intelligence, aiding in the effective management of risks associated with AI/ML security audits.

2.    Investing in AI Literacy and Ethical AI Practices

Understanding AI’s capabilities and ethical implications is crucial for all levels of an organization – including the users of these software solutions.

According to Tania Duarte and Ismael Kherroubi Garcia of the Joseph Rowntree Foundation, ethical AI practices should be promoted to guide the development and use of AI in ways that uphold societal values and legal standards and “the absence of a concerted effort to enhance AI literacy in the UK means that public conversations about AI often do not start from pragmatic, factual evaluations of these technologies and their capabilities”.

3.    Establishing Robust Governance Frameworks

Organizations must develop robust governance frameworks to manage AI risks proactively. These frameworks should include policies and procedures that ensure continuous compliance and adapt to evolving regulatory landscapes. Governance mechanisms should make risk assessment and management easier, but also incorporate transparency and accountability, essential for maintaining public and regulatory trust.

OWASP’s Software Component Verification Standard (SCVS) supports a community-led initiative to define a framework that includes identifying necessary activities, controls, and best practices to mitigate risks associated with AI software supply chains. This could be a starting point for anyone looking to develop or enhance their AI governance framework.

4.    Adopting Best Practices for AI Security and Ethics

Cybersecurity teams must be at the forefront of adopting best practices for AI security and ethics. This involves securing AI systems from potential threats and ensuring that ethical considerations are integrated throughout the AI lifecycle. Best practices should be informed by industry standards and regulatory guidelines, tailored to the specific contexts of an organization.

The OWASP Top 10 for LLMs (AI workload) applications aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models. The project provides a list of the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications.

5.    Engaging in Dialogue with Regulators

To further the understanding and effective implementation of the AI Act, organizations should engage in ongoing dialogue with regulators. Participating in industry consortia and regulatory discussions can help organizations stay abreast of interpretative guidance and evolving expectations, while also contributing to the shaping of practical regulatory approaches.

If you are still unsure how the upcoming regulation will affect your organization, the official website of the EU AI Act has provided a compliance checker to determine whether or not your AI system will be subject to the regulatory standards.

The EU AI Act is a transformative legislative measure that sets a global benchmark for AI regulation. For cybersecurity teams and organizational leaders, it presents both challenges and opportunities to pioneer in the realms of AI security and compliance. By embracing a culture of transparency, responsibility, and proactive risk management, organizations can not only comply with the AI Act but also lead by example in the responsible use of AI technologies, thus fostering a trustworthy AI ecosystem.

Image Credit: Tanaonte / Dreamstime.com

Nigel Douglas, Snr. is Developer Advocate, Open Source Strategy, Sysdig.





Source

Related Articles

Back to top button