New A.I. Guidelines Have Lessons for Tech and Cybersecurity Pros
Across nearly every industry, interest in artificial intelligence—especially generative A.I. platforms and tools—continues to grow and drive investment. A Forrester report predicts a 36 percent average annual growth rate between now and 2030 for products and services such as OpenAI’s ChatGPT and Google Gemini.
A bipartisan group of U.S. lawmakers is also recommending that the federal government spend billions to develop generative A.I. technologies.
Despite generative A.I.’s promise, the technology carries risks and security concerns as it finds its way into more facets of business and government. At the recent RSA Conference in San Francisco, FBI officials warned that agents are responding to a growing number of cyber threats targeting enterprises and government agencies that use these technologies, according to the Wall Street Journal.
To help bridge the gap between responsible development and an understanding of the risks, the National Institute of Standards and Technology released four draft guideline papers designed to “improve the safety, security and trustworthiness” of A.I. and generative A.I. platforms, according to the U.S. Department of Commerce, which oversees NIST. The documents build on and expand the AI Risk Management Framework (AI RMF) to help manage risks associated with generative A.I.
The publication of these documents is also part of the Biden administration’s efforts to oversee and bring some regulation to the development of A.I. technologies.
“These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio noted in a statement.
Many enterprises remain in the early testing stages of generative A.I. tools. In the coming years, however, as larger-scale deployments become more common, security professionals—as well as developers—will be tasked with understanding the implications of A.I. technologies.
Before this happens, tech pros must start absorbing the lessons NIST and other agencies are producing about generative A.I. Doing so is also crucial for career development and for keeping skills sharp and current, especially as A.I. adoption becomes widespread. Organizations will increasingly look to tech pros for guidance and help in creating policies and safety guidelines.
“Organizations should not wait for broader regulation to start developing their own rules around A.I. Leaders can look at their individual business policies and consider how their employees could and should be leveraging these tools,” Nicole Carignan, vice president of strategic cyber A.I. at security firm Darktrace, recently told Dice. “By evaluating their specific business risks and opportunities, organizations can set nuanced, granular policies around generative A.I. use. Industry collaboration is especially crucial at a time when there is an absence of fully developed A.I. regulations.”
As NIST circulates these documents for public comments before final publication, several cybersecurity experts and insiders offered their take on A.I., the risks associated with these platforms and what tech pros can learn to help their career development as interest in generative A.I. grows.
Understanding the Risks and Rewards of Generative A.I.
Each of the four NIST draft guides published on April 29 covers a different aspect of A.I. technologies, the risks associated with these platforms and how best to mitigate cybersecurity issues. The papers include:

NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models

NIST AI 100-4: Reducing Risks Posed by Synthetic Content

NIST AI 100-5: A Plan for Global Engagement on AI Standards
Each of these guides offers new insights into how enterprises—as well as their IT and security teams—must approach the risks associated with generative A.I. technologies. For example, the NIST AI 600-1 document demonstrates how to harness generative A.I. to preempt A.I.-enhanced threats such as business email compromise (BEC), social engineering, and sophisticated phishing campaigns, said Stephen Kowski, field CTO at security firm SlashNext.
At the same time, NIST AI 100-4 provides essential guidance on leveraging generative A.I. to enhance defenses against sophisticated threats like BEC, business text compromise (BTC) and social engineering, Kowski added.
“We see opportunity in these standards by adhering to NIST AI 100-4’s recommendations on labeling and identifying A.I.-generated content. Our company can refine our phishing detection algorithms, significantly enhancing our ability to differentiate between legitimate and malicious communications,” Kowski told Dice. “This alignment not only bolsters our defenses against conventional phishing but also equips us to counter advanced generative A.I. threats involved in social engineering, BEC and BTC attacks—turning generative A.I. into a powerful tool for cybersecurity.”
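To make Kowski’s point concrete, here is a minimal Python sketch of the kind of check a mail pipeline might run: it looks for a synthetic-content provenance label on inbound email and treats an unlabeled message with a mismatched Reply-To as a weak BEC signal. The header name (X-Content-Provenance), the heuristics and the routing labels are illustrative assumptions only; NIST AI 100-4 does not prescribe a specific header or API.

```python
# Hypothetical sketch: triage inbound email using a synthetic-content
# provenance header, in the spirit of NIST AI 100-4's guidance on labeling
# AI-generated content. Header name and heuristics are invented for illustration.
from email import message_from_string
from email.message import Message

PROVENANCE_HEADER = "X-Content-Provenance"   # assumed label, not a standard

def classify_message(raw_email: str) -> str:
    msg: Message = message_from_string(raw_email)
    label = (msg.get(PROVENANCE_HEADER) or "").lower()

    if "ai-generated" in label:
        # Content self-identifies as synthetic: route to extra scrutiny,
        # e.g., stricter link and payment-request checks.
        return "review"
    if not label and msg.get("Reply-To") and msg.get("Reply-To") != msg.get("From"):
        # No provenance label plus a mismatched Reply-To is a weak BEC signal.
        return "suspicious"
    return "normal"

if __name__ == "__main__":
    sample = (
        "From: ceo@example.com\n"
        "Reply-To: attacker@evil.example\n"
        "Subject: Urgent wire transfer\n\n"
        "Please process this payment today."
    )
    print(classify_message(sample))   # prints "suspicious"
```

In practice, a check like this would be one signal among many in a phishing-detection pipeline rather than a standalone verdict.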
As tech pros become more familiar with A.I. technologies and absorb lessons from NIST and from analysis of real-world threats, it’s important that those lessons are shared throughout the organization, said Craig Jones, vice president of security operations at Ontinue. In turn, this requires developing skills that support a multi-layered security approach and an understanding of critical issues such as data encryption at rest and in transit, strict access controls and continuous monitoring for anomalies.
“In case of a breach, rapid response and remediation measures need to be in place, along with clear communication to affected stakeholders following the legal and regulatory requirements,” Jones told Dice. “The lessons learned from such incidents should be integrated into improving the data security framework to be better prepared for future scenarios.”
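As a concrete illustration of the “continuous monitoring for anomalies” that Jones describes, the short Python sketch below keeps a rolling baseline of a security metric (hourly failed-login counts are an assumed example) and flags values that deviate beyond a z-score threshold. The metric, window size and threshold are illustrative assumptions, not recommendations from NIST or Ontinue; production teams would typically rely on a SIEM or dedicated detection tooling.

```python
# Minimal sketch of continuous anomaly monitoring: compare the latest
# observation of a metric (e.g., hourly failed-login count) against a
# rolling baseline using a z-score. All values here are illustrative.
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    def __init__(self, window: int = 24, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent observations
        self.threshold = threshold           # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to the rolling window."""
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

if __name__ == "__main__":
    detector = RollingAnomalyDetector()
    hourly_failed_logins = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 60]
    for hour, count in enumerate(hourly_failed_logins):
        if detector.observe(count):
            print(f"hour {hour}: {count} failed logins flagged as anomalous")
```

The point of the sketch is the workflow, not the math: establish a baseline, watch for deviations and feed confirmed incidents back into the detection rules, as Jones suggests.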
Creating A.I. Security Buy-In Throughout an Organization
While generative A.I. is still in the early stages of adoption, it’s critical to train employees across multiple lines of business on how the technology works and on the benefits and risks associated with these platforms.
In turn, tech professionals must work with departments looking to adopt generative A.I. to ensure uniform, organization-wide policies that reduce risk are in place.
“Generative A.I. ultimately requires nuanced usage policies to help manage the risk. A variety of stakeholders across the business including risk and compliance teams, chief people and HR officers, CIOs, CISOs, chief A.I. officers, data executives and strategy leaders should be working together to create and implement A.I. policies,” said Darktrace’s Carignan. “Each role will bring a unique view to the issue and collaboration will ensure the benefits of A.I. can be safely and securely realized while managing and mitigating risks.”
Other experts, including Callie Guenther, senior manager for cyber threat research at Critical Start, also noted that organizations and their tech teams need to provide clear guidelines on generative A.I. usage to ensure that it maximizes productivity gains while safeguarding sensitive data and maintaining compliance.
This includes developing policies and skills around effective communication and collaboration among CISOs, security teams and business leaders to ensure technology adoption is aligned with business objectives and security requirements.
“Engaging in strategic investments and collaborations highlights an organization’s commitment to addressing generative A.I. security concerns. Commitment to rapid innovation in response to the evolving GenA.I. landscape is necessary,” Guenther told Dice. “Developing technology that can handle new and emerging threats positions a company at the forefront of the data security industry. This approach helps organizations harness the benefits of GenA.I. while minimizing risks.”