Deepfakes and Artificial Intelligence
“The only thing we have to fear is fear itself.” Former U.S. President Franklin D. Roosevelt spoke those timeless words on March 4, 1933, during the Great Depression.
If Roosevelt were present today, he would undoubtedly recognize the gravity of the challenges posed by artificial intelligence (AI), particularly deepfakes (DFs), among an array of contemporary technological threats.
He would shine a light on the understandable fear and uncertainty surrounding AI among boards and management, as well as all stakeholders – investors, customers, employees and beyond. However, fear of the unknown is not the only concern. DFs create massive disruption and challenge the very fabric of authenticity and truth.
Throughout history, we have witnessed an unfortunate pattern of evildoers embracing emerging technologies faster than governments or corporations. AI can fabricate almost any reality. The stakes are colossal for everyone.
Deepfakes: One of the Greatest Existential Threats in 2024
From deceiving consumers to eroding trust, the potential threats from DFs are far-reaching and potentially devastating and destabilizing.
DFs come in multiple flavors: voice, photography and video. Each represents a distinct category of manipulation, posing significant challenges to companies and organizations as well as in the political arena, a concern that is particularly acute in a year with a U.S. presidential campaign and election.
- Voice DFs manipulate audio recordings, using AI algorithms to mimic voices and create deceptive audio clips that are often deployed to spread misinformation or deceive listeners.
- Photography DFs manipulate images or photographs, altering faces or expressions or creating entirely fabricated images that are commonly employed to facilitate theft, spread propaganda or falsify evidence.
- Video DFs generate convincing yet fraudulent footage, altering expressions or fabricating entire events. They are frequently used to spread misinformation, perpetuate fake news or manipulate public opinion.
Early U.S. election primaries this year should have set off alarms.
As New Hampshire voters were getting ready to cast ballots in early February, a faked version of President Biden’s voice was deployed in robocalls to discourage Democrats from going to the polls. The political consultant allegedly behind the digital mischief reportedly paid a street magician only $150 to create the recorded message.
Business and Hollywood Are Not Immune
Criminals used AI software to steal from a U.K.-based energy company. Hackers adeptly mimicked the CEO’s voice, persuading an unwitting executive to initiate an unauthorized transfer of hundreds of thousands of euros. The deceived executive believed he was speaking with the CEO of the parent company over the phone, unaware that AI had replicated the voice.
Remarkably, this “new trick” may soon be easily available to all. OpenAI just announced that it was evaluating a new system that can recreate a person’s voice from a 15-second recording. It is reportedly waiting to share the technology more widely while it assesses the potential dangers.
There are also a variety of DF videos, some persuasive, others peculiar and a few simply entertaining.
Some video DF examples include Taylor Swift promoting Le Creuset cookware in January 2024, and rapper Nicki Minaj and actor Tom Holland as a distraught couple recounting a home invasion by Mark Zuckerberg in July 2023. An older DF features Tom Cruise, Robert Downey Jr., George Lucas, Ewan McGregor and Jeff Goldblum discussing the future of cinema.
The evolution of DFs has reached new heights, exemplified by the emergence of a TikTok account devoted solely to Tom Cruise DFs.
Blurring Reality with Alarming Precision
In the business world, imagine the havoc wreaked by a fake video showing a CEO making disparaging remarks about an investor, another executive or a competitor, creating a potential catastrophe for the company’s reputation, as well as impacting the bottom line and equity value.
But the danger extends well beyond business and entertainment. Individuals, too, can fall victim to the insidious influence of these digital deceptions.
Picture the devastation of any public figure (or even a member of one’s own family) falsely depicted in a very compromising circumstance — reputation tarnished, career in ruins, family under pressure.
At ludicrous velocity, DFs can target all of us. Unfortunately, many corporate cultures, business leaders and their advisors are simply not prepared or trained to operate or respond at AI digital warp speed.
Without significant systemic change, many companies will suffer much more than their 15 minutes of shame.
Deepfake Technology Leaps Ahead
To combat these troll and hacker menaces, companies must prioritize investment in new, comprehensive strategies that utilize advanced detection technologies. It is also imperative to collaborate with industry peers and engage policymakers to shape a regulatory framework that balances innovation and accountability.
Organizations must arm themselves with essential defenses, developing new knowledge, skills and vigilance alongside robust authentication measures and protective monitoring systems.
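To make “robust authentication measures” concrete, the sketch below shows one simple approach: signing internal media files with an HMAC so a recipient can check a clip’s provenance before acting on it. This is a minimal illustration under stated assumptions; the function names, the shared-secret key handling and the file name are hypothetical, and it is not a substitute for full content-provenance standards or deepfake detection tooling.

```python
# Minimal sketch of one "robust authentication measure": HMAC-signing internal
# media so a recipient can verify that a clip really came from the organization
# before acting on it. The function names, the shared-secret approach and the
# file name below are illustrative assumptions, not any vendor's product.
import hashlib
import hmac
import os

# In practice the key would live in a secrets manager, not an environment variable.
SHARED_SECRET = os.environ.get("MEDIA_SIGNING_KEY", "change-me").encode()


def sign_media(path: str) -> str:
    """Return a hex HMAC-SHA256 tag for the media file at `path`."""
    with open(path, "rb") as f:
        return hmac.new(SHARED_SECRET, f.read(), hashlib.sha256).hexdigest()


def verify_media(path: str, expected_tag: str) -> bool:
    """Compare a received clip against a tag shared over a trusted channel."""
    return hmac.compare_digest(sign_media(path), expected_tag)


if __name__ == "__main__":
    clip = "ceo_statement.mp4"  # hypothetical internal video
    tag = sign_media(clip)
    print("authentic" if verify_media(clip, tag) else "possible fake")
```

The design point is not the specific algorithm but the habit: any audio or video purporting to come from leadership should be verifiable through a channel an attacker cannot spoof with a cloned voice alone.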
Silver Linings, Dark Clouds
Properly harnessed and managed, DFs can be an especially useful tool. They can serve as a powerful vehicle for education and entertainment. Envision a future where this technology brings historical figures to life to recreate iconic moments with stunning accuracy, revolutionizing our engagement with historic events.
Even here, of course, it will be important to safeguard human creativity. Early in April, Billie Eilish, Jon Bon Jovi and Katy Perry joined more than 200 artists in signing an open letter calling out those developing AI-created sounds and images, which dilute royalties paid to artists.
The artists noted, “This assault on human creativity must be stopped. We must protect against a predatory use of AI to steal professional artists’ voices and likeness, violate creators’ rights and destroy the music ecosystem.”
As this open letter highlights, corporate “AI consciousness” must consider not only the threats from bad actors, but also the risk of one’s own institution becoming part of the problem.
Combating Deepfakes Requires More Than Just a Technological Solution
Understanding what is happening in the public arena, why it is happening and how it can escalate — as well as what you need to do and not do — is crucial.
A fundamental shift in mindset, accompanied by the meticulous crafting of a new AI and cyberattack crisis response plan, is imperative. Last year’s strategies are outdated at best in the face of swiftly advancing technology.
Board oversight and management preparation are paramount to maintaining a secure and competitive edge in this rapidly changing environment. A trained, select team, led by a public-facing C-suite executive capable of addressing likely shareholder concerns, is necessary to execute a comprehensive crisis plan. Postponing it is no longer an option.
In the end, success in the public arena depends on preparation. As a wise man once said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”
Furthermore, those who exploit DFs for malicious purposes must be held accountable. Strengthening legal frameworks and regulatory measures is essential to deter and punish those who engage in the creation and dissemination of deception.
Ultimately, the battle against DFs is not just about protecting businesses and reputations; it is about defending the very essence of truth and authenticity in our society. Preserving the ability to trust the information we encounter and to discern fact from fiction is crucial in our increasingly interconnected and splintered world.
A Destructive Force and a Game Changer
Indeed, one expert advises: DFs “provide innovative solutions for education and training by facilitating immersive simulations and personalized learning experiences. Additionally, deepfakes have the potential to revolutionize accessibility by generating customized content for individuals with disabilities, promoting inclusivity and equal access to entertainment.”
Who provided that hopeful message? AI itself. The quote was generated by ChatGPT. Let us pray it knows (as only AI can) what it is talking about.