The Epoch AI Researcher Trying to Glimpse the Future of AI
Imagine if the world’s response to climate change relied solely on speculative predictions from pundits and CEOs, rather than the rigorous—though still imperfect—models of climate science. “Two degrees of warming will arrive soon-ish but will change the world less than we all think,” one might say. “Two degrees of warming is not just around the corner. This is going to take a long time,” another could counter.
This is more or less the world we’re in with artificial intelligence, with OpenAI CEO Sam Altman saying that AI systems that can do any task a human can will be developed in the “reasonably close-ish future,” while Yann LeCun, Chief AI Scientist at Meta, argues that human-level AI systems are “going to take a long time.”
Jaime Sevilla, a 28-year-old Spanish researcher, is trying to change that. It is far from clear whether and how the capabilities of the most advanced AI systems will continue to rapidly progress, and what the effects of those systems will be on society. But given how important AI already is, it’s worth trying to bring a little of the rigor that characterizes climate science to predicting the future of AI, says Sevilla. “Even if AI innovation stopped, this is already a technology that’s going to affect many people’s lives,” he says. “That should be enough of an excuse for us to get serious about it.”
Read More: 4 Charts That Show Why AI Progress Is Unlikely to Slow Down
To do this, in 2022 Sevilla founded Epoch AI, a nonprofit research organization that investigates historical trends in AI and uses those trends to help predict how the technology might develop in the future. “We want to do something similar for artificial intelligence to what William Nordhaus, the Nobel laureate, did for climate change,” he says. “He set the basis for rigorous study and thoughtful action guided by evidence. And we want to do the same. We want to follow his footsteps.”
Sevilla grew up in Torrejón de Ardoz, an industrial suburb of Madrid. His early interest in technology led him to pursue degrees in mathematics and computer engineering from the Complutense University of Madrid. There, he inadvertently sowed the first seeds of Epoch AI—in his first year, he returned to his high school to give a presentation about rationality and artificial intelligence, making an impression on Pablo Villalobos, a student in the audience who would go on to be Epoch AI’s first volunteer-employee.
In 2020, Sevilla began a Ph.D. in artificial intelligence at the University of Aberdeen. Trapped at home by COVID-19 restrictions and feeling out of place as a sun-loving Spaniard in gloomy Scotland, he had time to think more seriously about where AI might be headed. “Surprisingly, there was no one doing systematic analysis of what has been the trend in machine learning over the last few years,” he says. “I thought: well, if nobody’s doing it, then I better get to it.”
He and Villalobos began spending their spare hours poring over hundreds of academic papers, documenting the amount of computational power and data used to train significant AI models. Feeling confident in the importance of this work but daunted by the size of the task, Sevilla put out a call for volunteers, and the respondents went on to become the initial Epoch AI team. Together, the small group documented the critical inputs of every significant AI model ever created. When they published their findings in early 2022, the reaction was overwhelmingly positive, with the paper going viral in certain internet niches. Encouraged, Sevilla paused his Ph.D., sought funding from philanthropic donors, and in April 2022 Epoch AI was born.
Since then, the organization, where Sevilla is director, has grown to 13 employees, scattered around the world. Team morale is maintained through a vibrant Slack culture and occasional retreats at which the small team strategizes and sings karaoke. It’s a humble operation that only professionalized two years ago, but already Epoch AI’s work has been widely used by governments trying to make sense of AI’s rapid progress. The Government of the Netherlands’ vision on generative AI cites Epoch AI’s work, as does a U.K. government-commissioned report that aims to synthesize the evidence on the safety of advanced AI systems.
Two of the most significant efforts to put guardrails around advanced AI models—the E.U. AI Act and President Joe Biden’s Executive Order on AI—set a threshold for the computational power used to train an AI model, above which stricter rules apply. Epoch AI’s database of AI models has been an invaluable resource for policymakers undertaking such efforts, says Lennart Heim, a researcher at the nonprofit think tank the RAND Corporation, who was a member of Epoch AI’s founding group and is still affiliated with the organization. “I think then it’s fair to say there is no other database in the world which is that exhaustive and rigorous.”
Researchers at Epoch AI now aim to go one step further by using their research on historical trends to inform predictions of AI’s future impacts. For example, in a paper published in November 2022, Epoch AI analyzed how the amount of data being fed into AI models was increasing with time and estimated how much useful data is readily available to AI developers. The researchers then pointed out that AI developers might soon run out of data unless they came up with new ways of feeding their creations. Another study attempts to predict when AI systems will be developed that would, if widely available, result in societal changes comparable in magnitude to the Industrial Revolution—the model estimates that such an outcome is 50% likely to occur by 2033. This is just one model—Sevilla emphasizes that Epoch AI team members’ personal predictions for such an event range from a year from now to a century away.
Read More: When Might AI Outsmart Us? It Depends Who You Ask
Such uncertainty underscores that despite Epoch AI’s efforts to bring rigor to the issue, a huge amount of unpredictability remains around how AI will impact society. Sevilla hopes that his organization will catalyze a broader effort to tackle the issue. “We want to motivate everyone to think more rigorously about AI—to take seriously the possibility that this technology might bring about drastic changes in the coming decades,” he says, “and to try to rely on actual evidence or good scientific thinking when making decisions around the technology.”