As generative AI arrives on Instagram and WhatsApp, will chatbots improve social media, or do they pose a threat?
As Meta’s moves suggest, generative AI is making its way into social media.
TikTok has an engineering team focused on developing large language models that can recognise and generate text, and it is hiring writers and reporters to annotate and improve the performance of those models.
TikTok and Meta did not respond to a request for comment, but AI experts said social media users can expect to see more of this technology influencing their experience – for better or possibly worse.
Part of the reason social media apps are investing in AI is that they want to become “stickier” for consumers, says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania in the United States who teaches entrepreneurship and innovation.
Apps like Instagram try to keep users on their platforms for as long as possible because captive attention generates ad revenue, he says.
In the future AI will not just personalise user experiences but generate content itself, says Jaime Sevilla, director of Epoch, a research institute that studies AI technology trends.
In autumn 2022, millions of users were enraptured by the photo-editing app Lensa as its AI generated whimsical portraits from their selfies. Expect to see more of this, Sevilla says.
“I think you’re gonna end up seeing entirely AI-generated people who post AI-generated music and stuff,” he says. “We might live in a world where the part that humans play in social media is a small part of the whole thing.”
Sevilla says generative AI probably will not supplant the digital town square created by social media. People crave the authenticity of their interactions with friends and family online, he says, and social media companies need to balance that authenticity against AI-generated content and targeted advertising.
Although AI can help consumers find more useful products, the technology’s allure has a darker side that can tip into coercion, Sevilla says.
“The systems are gonna be pretty good at persuasion,” he says.
A study just published by AI researchers at the Swiss Federal Institute of Technology Lausanne found that OpenAI’s large language model GPT-4 was 81.7 per cent more effective than a human at persuading someone to agree with it in a debate.
While the study has yet to be peer reviewed, Sevilla says its findings are worrisome.
“That is concerning that [AI] might significantly expand the capacity of scammers to engage with many victims and to perpetrate more and more fraud,” he says.
Sevilla adds that policymakers should be alert to the danger of AI spreading misinformation as the US heads into another politically charged voting season this autumn.
Other experts warn that it is no longer a question of whether AI will play a role in influencing democratic systems across the world, but how.
Bindu Reddy, chief executive and co-founder of Abacus.AI, says the solution is a little more nuanced than banning AI on our social media platforms – bad actors were spreading hate and misinformation online well before AI entered the equation.
For example, human rights advocates criticised Facebook in 2017 for failing to filter out online hate speech that fuelled the Rohingya genocide in Myanmar.
In Reddy’s experience, AI has been good at detecting things such as bias and pornography on online platforms. She has been using AI for content moderation since 2016, when she released an anonymous social network app called Candid that relied on natural language processing to detect misinformation.
Regulators should prohibit people from using AI to create deepfakes of real people, Reddy says. But she is critical of laws like the European Union’s sweeping restrictions on the development of AI.
In her view it is dangerous for the US to be caught behind competing countries, such as China and Saudi Arabia, that are pouring billions of dollars into developing AI technology.
Sevilla acknowledges that AI moderators can be trained to reflect a company’s biases, leading to some views being censored. But human moderators have shown political biases too.
For example, in 2021 The Los Angeles Times reported on complaints that pro-Palestinian content was made hard to find across Facebook and Instagram. And conservative critics accused Twitter of political bias in 2020 because it blocked links to a New York Post story about the contents of the laptop of President Joe Biden’s son, Hunter Biden.
“We can actually study what kind of biases [AI] reflects,” Sevilla says.
Still, he says, AI moderation could become so effective that it powerfully suppresses free speech.
“What happens when all that is in your timeline conforms perfectly to the company guidelines?” Sevilla says. “Is that the kind of social media you want to be consuming?”