How South-East Asia’s pig butchering scammers are using artificial intelligence technology
The video shows a young Asian man with stubbly facial hair and blond tips sitting in a gaming chair in a messy bedroom.
Over the next minute his face cycles through more than a dozen different genders and ethnicities.
It’s not just another new TikTok filter.
The video is advertising a real-time deepfake face-swapping system reportedly being employed by South-East Asian crime syndicates in so-called “pig butchering” cyberscam operations.
Experts say the technology and other new artificial intelligence (AI) tools — such as generative AI chatbots — are increasing the effectiveness of the scams and broadening their reach to new victims.
However, some of the scam operations appear to be having less success with the new tech than others.
Proliferation of ‘pig butchering’
Since 2020, scores of predominantly Chinese-run, call-centre-style scam operations have sprung up across South-East Asia, mostly in Cambodia, Myanmar and Laos.
Their trademark is “Sha Zhu Pan” or “pig butchering” scams, in which victims are contacted through social media or text messages, befriended or seduced, and then lured into fake investment schemes, usually involving cryptocurrency.
The syndicates initially employed and victimised mostly ethnic Chinese but, following a crackdown by Beijing, are reportedly increasingly targeting people of other nationalities.
Operating out of tall office buildings in fortified compounds, the typical team has “keyboarders” who chat with the victims via messaging apps, models who act as the face and voice of a scam and bosses who manage the operations.
Their proliferation has led to a massive surge in scam activity, with lives ruined across the globe and total losses in the billions of dollars.
And according to the authorities, AI is making their scams even more effective.
Earlier this year the United Nations Office on Drugs and Crime (UNODC) warned “recent advances in large language model-based chatbots, deepfake technology, and automation” had enabled “more sophisticated and damaging cyber fraud schemes”.
“By using artificial intelligence (AI) to create computer-generated images and voices that are virtually indistinguishable from real ones, scammers can execute social engineering scams with alarming success rates, exploiting people’s trust and emotions,” the organisation said.
‘A very, very, very twisted thing’
Many of those working these cyberscam operations are lured from other countries with promises of legitimate jobs before being forced to work in slave-like conditions. Escapees have reported being beaten and tortured.
Judah Tana is the director of Global Advance Projects, a Thailand-based NGO which has aided hundreds of trafficking victims who have escaped from scam compounds in Myanmar.
Mr Tana said the crime syndicates had made AI research and development a priority since “day one” and were willing to go to great lengths to get the most advanced technology.
He said some scam compounds in Myanmar were using advanced face-swapping tech.
“It’s not everywhere, but it is in some of the larger ones for sure, and they’re just always moving to increase and get better,” he told the ABC.
Among the people he had helped was a computer engineer whose sole job was AI development for the syndicates, he said.
Mr Tana and his partner organisations aided her after she slipped away from the security guards accompanying her during a visit to a coffee shop in northern Myanmar.
“She said [their technology] was more advanced than anything she had seen in the world, anything she had ever studied,” he said.
Mr Tana said that, to motivate the woman, compound managers had brought people into the room and beaten them in front of her.
“It’s a very, very, very twisted thing. But it’s not an isolated case,” he said.
Tech evolving fast
Initially only able to modify footage after it had already been shot, deepfake technology has been improving rapidly in recent years and can now work convincingly in real time.
In 2022, entertainment AI specialists Metaphysic wowed audiences by having a deepfaked Simon Cowell sing live to the real Simon Cowell on America’s Got Talent.
Ngô Minh Hiếu, a Vietnamese former hacker and identity thief turned cyber security specialist, said real-time face-swap technology — a type of deepfake which only changes the subject’s face — was now easily accessible for free as open-source software or through paid subscriptions.
The Global Anti-Scam Organisation (GASO) pointed the ABC to advertisements for face-swapping software on the encrypted messaging platform Telegram.
One provider’s Chinese-language advertisement seen by the ABC boasts that its “AI real-time face-changing” is “essential for precise chat” and “pig-butchering scams”.
“It solves the problem that good-looking [human] models are difficult to recruit and manage and are costly,” the advertisement says.
The software also integrates voice cloning, which the ad claims can imitate a target with “60 per cent to 95 per cent” similarity.
However, the ad warns that it only matches the timbre of the voice.
“You need to figure out the target person’s speaking rhythm, speaking habits, tone, stress, and tongue rolling,” it says.
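Neither ad explains how such similarity figures are calculated. In speaker-verification work, one common measure is the cosine similarity between “speaker embeddings” of two recordings. The sketch below is a minimal illustration of that general technique, assuming the open-source resemblyzer library and placeholder file names; the providers’ own metrics are unknown.

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav  # open-source speaker encoder

# Placeholder file names: an original recording and a cloned one.
wav_original = preprocess_wav("original_voice.wav")
wav_cloned = preprocess_wav("cloned_voice.wav")

# Map each utterance to a fixed-size speaker embedding.
encoder = VoiceEncoder()
embed_original = encoder.embed_utterance(wav_original)
embed_cloned = encoder.embed_utterance(wav_cloned)

# resemblyzer embeddings are L2-normalised, so a plain dot product
# gives the cosine similarity between the two voices (0 to 1).
similarity = float(np.dot(embed_original, embed_cloned))
print(f"Speaker similarity: {similarity:.0%}")
```

A score computed this way falls naturally into a percentage range of the kind quoted in the ad, though whether the provider measures similarity like this is unknown.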
Another provider of face-swapping software offers 24/7 support, “door-to-door delivery” in Cambodia and claims to have installed its products in more than 1,000 compounds.
The advertisement, also in Chinese, says the system requires only a single photo and can be used on video calls on messaging platforms including WhatsApp, Messenger and Line.
“Face touching will not affect realism,” it says.
“Eliminate the issue found in other software that can accidentally expose real human faces.”
Beyond pig butchering, Mr Ngô said, criminals also used face-swapping technology for other kinds of scams, such as impersonating celebrities in investment schemes, or family members and police officers to frighten victims into handing over money.
It could also be used to bypass video-based identity checks required by some financial institutions, he said.
In Australia, Sunshine Coast Mayor Rosanna Natoli recently reported that a fraudster had used the technology to impersonate her on a Skype call with one of her friends.
“What this friend told me is that this looked like me but didn’t sound like me,” Ms Natoli told ABC Sunshine Coast.
Real people still more ‘legit’
But while the software developers claim the technology is seamless and easy for “bosses” with no technical knowledge to use, people familiar with the industry have told the ABC that some compounds are sticking with more analogue methods.
A source who has direct knowledge of Cambodia’s scam compounds confirmed to the ABC that “AI models” were widely used there.
But he said limitations including technical ability, computing power and internet bandwidth often restricted scammers’ use of face-swapping.
“[In many cases] hiring a model from Eastern Europe is much more practical,” he said.
He said that despite advances in technology a real person on camera was still more “legit”.
“And most importantly, most face swapping apps can’t fake the real-time voices,” he said.
Sam, a Chinese national who until recently had been working in a cryptocurrency pig butchering operation targeting Americans and Europeans on WhatsApp, told the ABC his bosses briefly experimented with AI.
While Sam was working in the Kokang area of Myanmar’s north, the Vietnamese model who was acting as the “face” of the operation escaped.
Sam, who asked to use a pseudonym, said his team’s bosses tried using an app which they hoped would digitally alter an English-speaking member of the team’s appearance for video calls.
In the end, he said the bosses didn’t have enough source photos to get the app to work.
Not long afterwards, his team was forced to move to Sihanoukville in Cambodia where he said most of the scam companies in his compound were using Russian and Ukrainian models.
“I saw a lot of them when I was at [Sihanoukville],” he said.
Getting help from chatbots
In the past, pig butchering keyboarders have generally worked from detailed “scripts” as they attempt to gain their victims’ trust.
Now they are reportedly also making use of AI large language models (LLMs), or chatbots, as they message victims in other languages.
While the major AI chatbots such as ChatGPT, Gemini and Copilot have guardrails that prevent them from assisting with illegal activities, other LLMs are being developed without any such restrictions.
Numerous software platforms offering AI-assisted translation integrated with messaging services are advertised on Telegram.
One of the systems has a feature that will automatically detect if a sent message accidentally contains Chinese to “avoid embarrassment”.
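The ads do not reveal how that detection is implemented, but flagging stray Chinese characters in an outgoing message is a routine programming task. Below is a minimal illustrative sketch; the Unicode ranges are a simplification covering only the most common CJK ideograph blocks.

```python
import re

# The two most common blocks of CJK ideographs. This is a
# simplification; real tools would cover additional Unicode blocks.
CJK_PATTERN = re.compile(r"[\u4e00-\u9fff\u3400-\u4dbf]")

def contains_chinese(text: str) -> bool:
    """Return True if any CJK ideograph appears in the message."""
    return bool(CJK_PATTERN.search(text))

print(contains_chinese("Sounds great, talk soon!"))  # False
print(contains_chinese("Sounds great 好的"))          # True
```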
In a blog post, researchers from cybersecurity company Sophos said generative AI like LLMs could make conversations more convincing and reduce the workload for scammers interacting with multiple victims.
However, the researchers published a screenshot provided by a scam target in which a scammer mistakenly revealed that they were using an LLM to generate their messages.
The scammer’s WhatsApp message to a target said: “Thank you very much for your kind words! As a language model of ‘me’ I don’t have feelings or emotions like humans do but I’m built to give helpful and positive answers to help you.”
The researchers said the scammer had likely copied and pasted the text into the conversation.
“The combination of this edited block of text amongst otherwise grammatically awkward text was an artefact from a generative AI tool being used by the scammers,” they said.
Sophos threat researcher Jagadeesh Chandraiah, who co-authored the blog post, said the criminal syndicates did not yet appear to be able to completely automate the chatting process using AI.
“They still need humans facilitating as there is risk of bots occasionally sending out messages that could signal to victims that they aren’t communicating with humans, especially when it comes to feelings and emotions,” Mr Chandraiah told the ABC.
“In terms of pig butchering, currently the models are not very good at portraying emotions and feelings, which is key for this type of scam to work,” he added.
“With more advancements in AI, it’ll be difficult for victims to identify that they’re communicating with bots, especially for those who aren’t tech savvy.”
Mr Chandraiah said combining generative AI text, images and video with translation would enable criminals to produce “bespoke content” that was not repetitive, increasing their reach.
“[This will make it] difficult for victims to reverse search to check if the person they’re communicating with is stolen from internet,” he said.
Mr Ngô said generative AI could also be used to write pig butchering scripts, compile convincing phishing emails or even provide step-by-step instructions on how to set up scam operations from scratch.
He said one of the major concerns was that AI technology was lowering the barrier to entry for conducting scams.
“A lot of [the criminals], they have no technical ability, they just need to buy the subscription and they go from there,” he said.
Not all bad news
Last month, the ACCC’s National Anti-Scam Centre said that while more scams were reported in 2023 than in 2022, the total amount of money lost dropped 13.1 per cent to $2.74 billion, implying total reported losses of roughly $3.15 billion the year before.
However, an ACCC spokesperson warned of the emergence of new technologies.
“Scamwatch continues to see growing sophistication in scam approaches and is alert to the risks AI presents,” they said.
“This makes scams harder for the community to identify. The community should continue to approach any requests for personal information or money with caution and exercise caution when clicking on hyperlinks.”
The spokesperson said most of the reports of scammers employing AI had so far been in the form of “chatbots” on social media sites.
“This is primarily occurring in relation to job scams and investment scams,” they said.
“The bots are used to give the impression that many other real people are interested in the product, and are receiving financial benefit from the scam.
“AI is [also] being used to create videos for investment scams, often capturing or creating images or footage of celebrities with audio endorsing the scam,” the spokesperson added.
Heads instead of hearts
Ronnie Tokazowski, a cybercrime expert at Intelligence for Good, which works with scam victims, said that in the past a key red flag indicating someone was a scammer was that they would refuse to do video calls.
“But now the bad actors have figured ways around that — and it’s very, very scary with just how much they can do,” he said.
He said the best piece of advice he had was not to send or receive money to or from someone met only online.
Mr Chandraiah said the best countermeasure to pig butchering scams was more awareness, education and the ability to see red flags.
“Scammers prey on our emotions, and we need to start thinking with our heads before our hearts,” he said.
“Even if it feels real and the relationship has been going for a while, if it originated from the internet and if it’s slowly turning towards matters involving money or cryptocurrency, steer well away from it.”
Additional reporting by Huang Yan.