How AI referees are shaking up football
Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.
Video assistant referees (VAR), a semi-automated AI system, will support human referees at this year’s UEFA Euro football tournament. Ten cameras above the pitch track 29 locations on each player’s body in real time, and the ball contains a sensor that records its location and movement 500 times per second. “One major application is detecting violations of the offside rule,” explains sports physicist John Eric Goff. The technology isn’t without detractors: inconsistencies in how referees apply VAR, and the time they sometimes take to make decisions, have fuelled discontent. “Things such as fouls and yellow or red cards still require human decision-making,” adds Goff.
Elephants seem to use names to address their fellows, a habit that was thought to be unique to humans. A machine-learning model analysed 469 deep rumbles made by wild female African savannah elephants (Loxodonta africana) and their families. The algorithm could identify which elephant was being addressed almost 30% of the time — a much higher success rate than when the model was fed with random audio. This suggests that calls are specific to individuals. When recordings were played in the field, elephants that heard their ‘names’ became more vocal and moved more quickly towards the speaker than when they heard rumbles directed at other elephants.
Reference: Nature Ecology & Evolution paper or read the authors’ own summary in the Nature Ecology & Evolution research briefing (6 min read)
People have only been using the word ‘bot’ in a derogatory manner for seven years. An analysis of 22 million Twitter posts shows that, before 2017, the word usually referred to content suspected to come from a piece of software. “The accusations have become more like an insult, dehumanising people, insulting them, and using this as a technique to deny their intelligence and deny their right to participate in a conversation,” says social scientist and study co-author Dennis Assenmacher. The shift could have been caused by media reports about political bot networks influencing major events such as the 2016 US election, Assenmacher speculates.
Reference: Proceedings of the Eighteenth International AAAI Conference on Web and Social Media paper
Image of the week
In this simulation, the machine-learning model that controls a robotic exoskeleton learns how to help, rather than hinder, its user. After eight hours of virtual training, the model learns how to walk, run and climb stairs. Adapting an exoskeleton to one person’s gait — let alone to various types of movements — usually requires time-consuming real-world tuning. (Nature | 6 min read, Nature paywall)
Features & opinion
Students are rarely asked for their opinion on universities’ AI-use policies, says interdisciplinary researcher-practitioner Maja Zonjić. She decided to involve her students in crafting the policy for their own course. The exercise helped students think more critically about the technology and in turn helped Zonjić, a self-proclaimed AI skeptic, discover how chatbots might be useful in the classroom. Debating with students about how to use AI tools ethically demonstrated the complex relationships between the people who develop, profit from, build and use new technologies, Zonjić says.
AI tools can help to extract hidden information from often-messy real-world data such as digital health records, medical images or tumour samples. This handy guide has tips on how cancer researchers can benefit from AI tools. Some open-source software packages, such as QuPath and ilastik, have intuitive interfaces for analysing microscopy images that require no programming skills. For those wanting greater flexibility, programming languages such as Python allow researchers to interact with deep-learning architectures.
Nature Reviews Cancer | 56 min read
AI technology promises to automatically detect security threats faster than any person could — and for a low price that US public schools can afford. But schools need to understand that algorithms can’t offer certainty, says data scientist David Riedman, who has analysed thousands of shootings on US campuses. An image-classification model could class an umbrella as a gun with 90% probability, while a partially obscured gun might score only 60%. “Do you want to avoid a false alarm for every umbrella, or get an alert for every handgun?” Riedman asks. Instead of continuously fortifying school buildings, school security should start imagining a better future, he says.
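The trade-off Riedman describes can be sketched in a few lines of Python. The probabilities are the ones quoted above, but the detector, the scores’ format and the alert function are illustrative assumptions, not the workings of any real school-security system:

```python
# Hypothetical detector outputs: the umbrella scores *higher* as a gun
# than the real, partially obscured gun does (figures from the article).
detections = [
    {"object": "umbrella", "gun_probability": 0.90},                # false positive
    {"object": "partially obscured gun", "gun_probability": 0.60},  # true threat
]

def alerts(detections, threshold):
    """Return the objects whose gun probability meets the alert threshold."""
    return [d["object"] for d in detections if d["gun_probability"] >= threshold]

# A strict threshold still flags the umbrella yet misses the obscured gun:
print(alerts(detections, threshold=0.80))  # ['umbrella']
# A lenient threshold catches the gun, at the cost of the false alarm too:
print(alerts(detections, threshold=0.50))  # ['umbrella', 'partially obscured gun']
```

In this (deliberately adversarial) example no threshold separates the two objects, which is exactly Riedman’s point: tuning the cut-off only chooses between false alarms and missed threats.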
Today, I can’t help but be impressed by the photographer who was disqualified from the AI category of a prestigious photo competition. It turns out that his surreal image of a headless flamingo was, in fact, entirely human-made.

Your anthropogenic feedback is always welcome at ai-briefing@nature.com.

Thanks for reading,
Katrina Krämer, associate editor, Nature Briefing
With contributions by Flora Graham and Smriti Mallapaty
Want more? Sign up to our other free Nature Briefing newsletters:
• Nature Briefing — our flagship daily e-mail: the wider world of science, in the time it takes to drink a cup of coffee
• Nature Briefing: Microbiology — microorganisms, the most abundant living entities on our planet, and the role they play in health, the environment and food systems
• Nature Briefing: Anthropocene — climate change, biodiversity, sustainability and geoengineering
• Nature Briefing: Cancer — a weekly newsletter written with cancer researchers in mind
• Nature Briefing: Translational Research — biotechnology, drug discovery and pharma