Review: Faking It: Artificial Intelligence in a Human World by Toby Walsh
“This book is out of date.” Thus begins Toby Walsh’s Faking It: Artificial Intelligence in a Human World. Such a declaration is a rarity in a volume that aims to address pressing issues concerning the human species: in this case, the way artificial intelligence (AI) has disrupted everyday life.
Consisting of 10 chapters covering a range of issues from faking intelligence to faking creativity to sharing possible solutions to combat fakery, this book conveys complex ideas in accessible language. Contrary to its opening sentence, it is absolutely timely and also successful at highlighting the abilities and limits of machine and predictive intelligence. Furthermore, it addresses the paranoia of our times — job security, technological surveillance, ethics-washing, and so on — making it both “a guide and a warning.”
But before diving deep into these concerns, Walsh, Professor of Artificial Intelligence at UNSW and a Fellow of the Australian Academy of Science, notes that faking intelligence is not a modern phenomenon. He cites “the Mechanical Turk” as an example. It “was an impressive device that toured Europe and the Americas from its debut in 1770 […] until its unfortunate destruction in a museum fire in Philadelphia in 1854.”
The first scientific meditation on mimicking human intelligence, however, is credited to Alan Turing. The English mathematician’s paper titled “Computing Machinery and Intelligence” is often cited as being pivotal to advancements in AI. Prosecuted for being homosexual, Turing died of cyanide poisoning a few weeks shy of his 42nd birthday.
While AI research has achieved several milestones in the past 70 years, there has also been a series of incidents of fakery. Walsh cites Expensify’s ‘SmartScan’ technology as an example. The software company had claimed to use AI to automatically process receipts, but it turned out that “poorly paid humans” were performing the data transcription. There are plenty of such anecdotes in the book, but what makes it an enriching read is that Walsh is an informed, sensitive, and politically aware writer. Two of his arguments, which also signal his gendered reading of tech-related issues, particularly stand out.
First, take the machine-learning (ML) algorithm that Stanford University developed in 2018, which claimed to distinguish between “photographs of homosexual and heterosexual people.” Walsh rightly points out that given that there “are a dozen countries in the world that have the death penalty for homosexuality,” the AI gaydar is perhaps one of the worst uses of fake science.
Second, Walsh wants readers to question the names that organisations give to their personal assistants. Whether it is Apple’s Siri or Amazon’s Alexa, they are “almost always styled as women waiting to do your bidding. What does that say about our society?”
Alongside highlighting concerns, Walsh also focuses on what AI’s integration into our lives will help achieve. He notes that autonomous cars “will provide mobility to those who cannot drive” and that there will be “personalised AI tutors” for students in the future. For the former, AI researchers had to grapple with object recognition for decades, a problem they eventually addressed via deep learning. “But deep learning isn’t the only reason for AI tools making headlines today,” Walsh writes. He points to another “vital ingredient” — the “transformer architecture for neural networks introduced in 2017 by a team of AI researchers from Google.”
You may have used ChatGPT. The “T” in the free-to-use Large Language Model (LLM)-based AI system developed by OpenAI stands for “transformer”, which has broken new ground in domains such as “computer vision, speech recognition, [and] natural language processing.” This progress signals that, in the future, we’ll be talking to machines and not “pointing, clicking or touching.” Machines will be intuitive; humans won’t have to “repeat the context behind [their] questions and commands.”
All of which reminded me of Spike Jonze’s Her (2013) starring Joaquin Phoenix. He plays Theodore Twombly, who falls for a virtual assistant, Samantha (voiced by Scarlett Johansson). The film helped people visualise and discuss this exciting future. Perhaps this was why ChatGPT’s announcement in 2022 was met with an overwhelming response. Walsh notes that it attracted “over a million users in the first five days after its launch, and 100 million unique visitors per month shortly after.” He compares ChatGPT’s reception with that of Spotify and Instagram: Spotify took five months to register a million users, Instagram two and a half. It isn’t without reason that Elon Musk tweeted “ChatGPT is scary good.” It had to be. The “complete contents of Wikipedia made up less than 1 per cent” of the text that was “poured into GPT-2” and the “total text consumed by GPT-3 is about 100 times what a human would read if they read a whole book every day of their life,” Walsh writes.
But did anyone consent to the use of the data that was fed to train such tools? If a tool can create a variety of narrative arcs based on prompts, shouldn’t we wonder how it managed to do so? Such questions began to be discussed the world over, leading to OpenAI’s data-collection practices coming under scrutiny. Furthermore, the organisation’s disregard for copyright invited lawsuits from authors like Mona Awad and Paul Tremblay. Besides creators’ rights, the indiscriminate use of AI-based tools is a cause for concern. Rashmika Mandanna’s deep fake video and the alleged rape of a minor in the metaverse, as reported by the Guardian in January 2024, are just some examples.
“Video and audio fakes are currently being generated using deep learning,” Walsh writes. From impersonations to the rigging of elections to fake profiles on dating apps, there’s an array of issues facing organisations and AI researchers working with generative AI and LLM-based tools. Walsh says that deep fakes “are perfect for catfishers wanting to create fake profiles to attract unsuspecting people. It is estimated that around one in 10 online dating profiles are fake, with ‘romance scams’ costing around US $50 million per year.”
Citing Europe’s Digital Services Act, which is “due to come into force”, he suggests that interference from governments and regulation — not outright censorship — could be the solution. According to Walsh, China has come up with “even stronger rules against deep fakes, driven by concern about their potential impact on societal ‘stability’.” Moreover, he warns that “we have barely begun to scratch the surface. The ultimate fake AI is still to arrive in our homes and offices.”
Walsh also calls out the hype around AI, which he believes is a result of “bad journalism” practices. To support his argument, in the chapter titled “AI Hype”, he shares five headlines put out by major news organisations, noting that such overhyped coverage may invite consequences. These are not always unamusing, though. Take, for example, “the announcement about the yet-to-be-built Tesla Bot,” which was designed to avoid any “robot takeover”. Its principal characteristics would be its slowness and weakness, “so [anyone] can easily outrun and overpower it.” Walsh says that its release “featured a person dressed up in a white full-length bodysuit pretending to be a Tesla Bot.” He doubts anyone was fooled, leading him to conclude that most “claims being made about AI are overinflated.”
Two of the biggest and most widely believed claims are that AI will cause mass layoffs and render human jobs obsolete. Walsh counters that the Covid-19 pandemic witnessed more hiring than layoffs. Furthermore, he writes, “Just two [that of elevator operators and locomotive firemen] of the 270 jobs in the 1950 US census have been completely eliminated by automation.” This underlines the fact that people would do better to familiarise themselves with AI and arm themselves with it, instead of harbouring animosity towards it. According to Walsh, “It won’t be robots putting humans out of work, but humans who use AI taking over the jobs of humans who don’t.” Which is to say that efforts should be made to align human and machine intelligence to co-create a better world.
One of the best examples of that is this book itself. Walsh has used ChatGPT to write a few paragraphs — and one can hardly tell. He did it partly to entertain himself and partly to investigate whether human readers get “fooled” into assuming that they were written by the author. This inherent playfulness not only helps strengthen Walsh’s arguments but is also a marker of his storytelling. His commitment to engaging with his readers can be seen in how he constantly shows rather than tells. The images in the book that DALL-E — OpenAI’s tool “that can create realistic images and art from a description in natural language” — created from his prompt “Cat in the style of a Picasso cubism painting” are a case in point. While computational creativity has its limits, as Walsh argues, the advantage humans have is that of “collective evolutionary” learning. Though there are immense challenges facing AI researchers and its beneficiaries, Walsh remains hopeful because he believes “the beauty of science is that it is self-correcting. Mistakes will be identified and corrected.”
Saurabh Sharma is a Delhi-based writer and freelance journalist. On Instagram/X: @writerly_life.