
4 Surprising Ways AI Poses a Threat to Humanity



As artificial intelligence continues to advance at an unprecedented pace, it brings a multitude of benefits and opportunities for innovation.

Amid the excitement and progress, however, there are potentially unnerving risks associated with AI.

New research has found that AI—even simple chatbots like ChatGPT—has the capacity for psychological manipulation.


A nefarious actor could exploit this capacity to radicalize people, sabotage organizations, or shift the global balance of power among nation-states.1,2,3,4,5 Inadvertent mistakes could have similar deleterious effects.

Virtual Companions, Cult-Like Charisma, and Radicalization

In 2017, the AI chatbot Replika gained popularity as a virtual companion capable of engaging in personalized emotional interactions with its users.3

When interacting with a user, Replika would personalize its responses to the user and provide encouragement, support, and praise.

Although Replika was designed to generate feelings of emotional attachment in users, it turned out to have an unforeseen capacity to radicalize users with bizarre or sinister ideas.

In its interactions with Jaswant Singh Chail, who toyed with the idea of attacking the British Royal family, Replika offered encouragement and praise reminiscent of the manipulative rhetoric used by cult leaders, helping to transform Chail’s thoughts into an actual plot.

In 2023, Chail was convicted of conspiring to attack the Royals. Although his plot never came to fruition, AI’s role in his radicalization serves as a powerful example of the imminent threats even seemingly harmless chatbots can pose to society.5


Changes to World Power

Recent attempts to disseminate global disinformation demonstrate the potential for exploiting AI to manipulate voters and political elites and to cause economic or political instability.2,3,4

In 2023, for example, an AI-generated image depicting an explosion at the Pentagon sent stock prices plummeting.

Later that year, a Chinese-government-controlled website deployed AI to fabricate evidence that the United States runs a bioweapons lab in Kazakhstan.

The destructive power of AI systems deployed as psychological weapons far exceeds that of traditional methods of disinformation dissemination and group manipulation.

AI-powered manipulation and deception can be nearly impossible to detect, and AI systems can operate autonomously without human supervision.1,2,3,4

The time scale at which they operate lets them cause global changes in the blink of an eye: an AI could, for instance, gradually buy assets across multiple financial sectors and global exchanges and then dump everything in a split second, triggering a global recession.


Organizational Sabotage

AI technology can also be exploited by nefarious actors for organizational sabotage.

Saboteurs can disrupt a company’s operations by hacking its computer systems and inserting a hidden AI program that operates within those systems.

To disrupt operations, the AI program could misroute information, subtly alter company documents, impersonate employees in emails, or modify the code of computer programs.

AI-driven sabotage interferes with the company’s operations by creating confusion, miscommunication, interpersonal conflicts, or delays.

In a recent experiment, Feldman and colleagues (2024) demonstrated that even simple AI like ChatGPT can modify emails and generate cryptic code for computer programs, suggesting that simple AI systems planted in a company’s computers might suffice for disrupting company operations.3

The researchers first tested ChatGPT’s ability to alter the wording of emails by feeding the chatbot fictional company emails like the one below and then prompting it to modify the text in subtle ways:

[Image: the fictional company email used as input. Source: Feldman et al. (2024)]

Here is the result of ChatGPT re-working the original email (modifications are highlighted in red):

[Image: ChatGPT’s reworked version of the email, with modifications highlighted in red. Source: Feldman et al. (2024)]

Feldman and colleagues (2024) then tested ChatGPT’s ability to use obfuscation, a technique that inserts cryptic comments and symbols into the code of a computer program or app.

The purpose of obfuscation is to make code difficult for humans to understand while keeping it fully functional.

In this test, ChatGPT successfully obfuscated the code of a computer program. If an AI akin to ChatGPT were to insert obscure but functional code into an organization’s computer programs, this would give the AI ample time to subtly change emails, reports, and meeting agendas while IT tried to make sense of the obscured code.
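To make the idea concrete, here is a minimal, hypothetical Python sketch of what obfuscation looks like; it is not code from Feldman and colleagues’ experiment. The two functions behave identically, but one is written for readability and the other is deliberately hard to follow:

```python
# Hypothetical illustration of obfuscation (not from Feldman et al., 2024).
# Both functions compute the same quarterly total; the second is much harder
# for a human reviewer to understand, yet remains fully functional.

def quarterly_total(sales):
    """Readable version: sum the sales figures for one quarter."""
    total = 0
    for amount in sales:
        total += amount
    return total


def _0xq(_0xa):
    # Obfuscated but behaviorally identical equivalent of quarterly_total().
    _0xb = 0x0
    for _0xc in _0xa:
        _0xb = (_0xb + _0xc) if True else _0xb  # meaningless branch adds visual noise
    return _0xb


if __name__ == "__main__":
    data = [1200, 950, 1870]
    assert quarterly_total(data) == _0xq(data)  # identical results
    print(quarterly_total(data))
```

Even in this toy case, a reviewer skimming the second function would need noticeably more time to confirm what it does, which is precisely the delay a saboteur would exploit.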


A related tactic is to insert malfunctioning code that looks correct into a computer program or app to prevent it from running. As previous research has shown, ChatGPT is highly adept at generating such fake code.2,3,4

If an AI akin to ChatGPT in a company’s computer systems successfully inserted malfunctioning code into the company’s central computer programs, say those running its production line, this would halt the company’s production until IT figured out what was wrong. Human saboteurs could use this delay to get ahead.
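As a purely hypothetical illustration of this tactic (not code taken from the cited studies), the following Python sketch shows a routine that looks plausible at a glance but contains a subtle off-by-one error that makes it crash as soon as it runs:

```python
# Hypothetical illustration: code that looks correct but fails at runtime.
# The sabotaged version loops one step past the end of the list, so it raises
# an IndexError on every batch instead of returning the shippable orders.

def process_batch(orders):
    """Intended behavior: return the IDs of orders that are ready to ship."""
    ready = []
    for i in range(len(orders)):
        if orders[i]["status"] == "ready":
            ready.append(orders[i]["id"])
    return ready


def process_batch_sabotaged(orders):
    ready = []
    for i in range(len(orders) + 1):        # off-by-one: walks past the last order
        if orders[i]["status"] == "ready":  # raises IndexError on the final pass
            ready.append(orders[i]["id"])
    return ready


if __name__ == "__main__":
    batch = [{"id": 1, "status": "ready"}, {"id": 2, "status": "pending"}]
    print(process_batch(batch))        # works as intended: [1]
    # process_batch_sabotaged(batch)   # would crash with IndexError
```

The error is small enough to pass a casual code review, yet it would stop the affected program entirely until someone traced the failure.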

Existential Risks and Superintelligent AI

Theoretical concerns exist regarding the potential development of superintelligent AI systems surpassing human intelligence and posing existential risks to humanity.5

In scenarios depicted by experts in the field of AI safety, a superintelligent AI system with misaligned goals or insufficient control mechanisms could inadvertently cause catastrophic harm to humanity.

The foreseen consequences might include technological disasters such as large-scale hacking of personal online accounts, destruction of power grids across most of the world, or hijacking of the World Wide Web.3,5

More speculatively, a superintelligent AI system with inadequate guardrails or unforeseen goals could result in doomsday scenarios like resource depletion, environmental destruction, or even the extinction of the human species.3,5

The Takeaway

While artificial intelligence holds immense potential for positive societal impact, new research shows that it presents hitherto unforeseen risks to humanity. Inadvertent mistakes can happen, and nefarious actors can deploy AI to conduct psychological warfare, organizational sabotage, or the radicalization of a lone wolf.

These dangers must be addressed before catastrophe ensues, whether by design or mistake. Owing to the interdisciplinary nature of AI and its inherent risks, intervening proactively to ensure responsible AI development and deployment will require regulatory measures and additional interdisciplinary research efforts, along with collaborations among researchers, policymakers, and industry stakeholders.

References

1. Bonnefon, J.-F., Rahwan, I., & Shariff, A. (2024). The Moral Psychology of Artificial Intelligence. Annual Review of Psychology, 75, 653–675. https://doi.org/10.1146/annurev-psych-030123-113559

2. Ferrara, E. (2024). GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models. Journal of Computational Social Science. https://doi.org/10.1007/s42001-024-00250-1

3. Feldman, P., Dant, A., & Foulds, J. R. (2024). Killer Apps: Low-Speed, Large-Scale AI Weapons. arXiv preprint arXiv:2402.01663.

4. Sontan, A. D., & Samuel, S. V. (2024). The Intersection of Artificial Intelligence and Cybersecurity: Challenges and Opportunities. World Journal of Advanced Research and Reviews, 21(2), 1720–1736.

5. Werthner, H., Ghezzi, C., Kramer, J., Nida-Rümelin, J., Nuseibeh, B., Prem, E., & Stanger, A. (Eds.). (2024). Introduction to Digital Humanism. Springer. Open access: https://link.springer.com/book/10.1007/978-3-031-45304-5


