Cybersecurity experts face AI risks, deepfakes, burnout
You may not know it, but the cybersecurity world is about to have its Super Bowl. More than 40,000 people from over 130 countries will descend on San Francisco the week of May 6 for the 33rd annual RSA Conference on cybersecurity. This will be my 16th year as chairman of RSA Conference, and there is an intensity and urgency ahead of this year’s event that I’ve never seen before. To understand why, my team analyzed the thousands of speaker submissions coming into the conference from the world’s defenders of cyberspace. Three themes stood out: artificial intelligence, information manipulation, and career burnout.
New AI technologies come with new risks
As the footprint of AI expands in business and society (nearly one in every five speaker submissions focused on it this year), every industry is trying to figure out how to harness the power of AI-powered systems. Concurrently, security professionals are discovering new risks. One such risk is that these systems may somehow leak company and user data. Another concern is accuracy. Large language model (LLM)–powered systems are probabilistic, which means that you might ask the same question several times and get slightly to meaningfully different answers each time. That might be okay for generating a short story, but what if your new AI-powered customer service chatbot occasionally provides wildly inaccurate or fictional information to customers?
In cybersecurity, we try to address risks with compensating controls: technologies and processes to constrain or mitigate these risks. The challenge is that many of these AI technologies are new, and the right compensating controls to manage the emerging risks are just now being built. Additionally, there is concern around AI regulation. Several countries have recently published AI guidance or issued regulations, with prominent examples being the European Union AI Act and the U.S. White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. A tweak in future regulations—a restriction prohibiting these AI systems from reasoning about a customer’s emotional state, for example—might lead to the unwinding of AI-powered customer support chatbots.
The crisis of information manipulation
A few short years ago, building a deepfake required both technical acumen and intent; now it requires only intent. From a societal perspective, cybersecurity experts are concerned that the upcoming U.S. presidential election will spawn a tidal wave of deepfakes designed to sway public opinion. From a business perspective, deepfakes have supercharged cybercriminals’ ability to commit fraud. In one recent example, a finance worker at a large multinational company in Hong Kong joined a video conference with a set of co-workers and, during that meeting, was asked to wire $25 million out of the company as part of a transaction. Unfortunately, those trusted co-workers were actually deepfakes, synthetic representations of real employees controlled by a fraudster.
Information manipulation concerns go well beyond altered video and audio. One recent insidious example comes from the software world, where bad actors managed to implant a backdoor in a widely used open-source compression utility called XZ Utils. Had this implant not been discovered by a software developer at Microsoft, tens of thousands of companies could have been compromised.
Burnout is spiking—again
In addition to the challenges of AI and the manipulation of information, the cybersecurity community has seen a rash of high-profile ransomware attacks, like the one that shut down MGM Resorts late last year. We looked back at the 10,000-plus speaker submissions from the past five years, and the topic of “burnout” spiked twice. The first spike came in 2021, as COVID surged and cyber workers had to quickly adapt to securing a completely remote workforce. The burnout topic then receded to normal levels in 2022 and 2023 but has spiked once again in 2024. It is not just the recent wave of attacks that is weighing on cybersecurity professionals; there is growing concern that Chief Information Security Officers (CISOs) could face personal liability for corporate breaches. Two cases in particular have raised the specter of such liability, and there is new pressure on businesses to rapidly report the details of a compromise.
The power of community
Walk through your day and think of all the touchpoints you have with technology. Your car is a computer, your bank is an app on your phone—tech is everywhere, which means hackers are everywhere, too. I’ve spent my entire career in cybersecurity: from writing some of the early books on how to find software vulnerabilities, to teaching computer security at Columbia University, to my time as the CTO of Symantec. What most people don’t appreciate about cybersecurity professionals is that we are part of a mission-driven community. Attackers often work in near isolation; cyber pros collaborate. The elite of the world’s cybersecurity community are about to gather at RSA Conference, but it is more than a gathering. It’s the convening of a community.
Hugh Thompson, Ph.D., is executive chairman of RSA Conference.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.