For the first time, two Pulitzer winners disclosed using AI in their reporting
This past Monday, 15 works were awarded Pulitzer Prizes for journalism. In a first for the prestigious awards, two of those winners disclosed using AI to produce their stories.
“We are not aware of precedents offhand,” said Marjorie Miller, administrator of the Pulitzer Prizes. “Previous winners may have used low-level machine learning applications. This is the first time we’ve explicitly asked the question.”
In March, Alex Perry reported for Nieman Lab that five of this year’s 45 finalists had disclosed using AI while researching, reporting, or drafting their submitted stories. While hype and fear cycles around generative AI play out in newsrooms across the country, it was machine learning, used for investigative reporting, that was most represented among those finalists.
For the local reporting prize winner, “Missing in Chicago,” City Bureau and Invisible Institute trained a custom machine learning tool to comb through thousands of police misconduct files. The New York Times visual investigations desk trained a model to identify the craters left by 2,000-pound bombs in areas of Gaza marked as safe for civilians. That story was one of several that won as part of the paper’s international reporting prize.
Miller also confirmed the three other finalists who disclosed using AI. They included a local news series on the government response to Hurricane Ian by The Villages Daily Sun — the newspaper that covers a large Florida retirement community — as well as Bloomberg’s investigation into how the U.S. government fuels the global spread of gun violence and its explanatory reporting on the water profiteering industry.
I spoke to journalists behind the two Pulitzer-winning stories that used AI about how they brought machine learning into their investigations, and what other newsrooms can learn from their work.
Data journalism grounded in community
This year’s winner of the Pulitzer for local reporting was “Missing in Chicago,” a series that exposed systemic failures in the Chicago Police Department’s (CPD) handling of investigations into missing and murdered Black women. Published by the Chicago-based nonprofit outlets City Bureau and Invisible Institute, the series was years in the making, and one pillar of its reporting was a machine learning tool named Judy.
“We used machine learning to parse through the text in police misconduct records, specifically the document types that had narratives living within them,” said Trina Reynolds-Tyler, the data director at the Invisible Institute, who shared the Pulitzer with City Bureau reporter Sarah Conway.
Reynolds-Tyler began building Judy back in 2021, as part of an Invisible Institute project to process thousands of CPD misconduct files, spanning 2011 to 2015, that were released by court order. She brought Chicago community members into Judy’s development process, and eventually 200 volunteers read and manually labeled the misconduct files. In short, these volunteers created Judy’s training data.
These volunteers may not have been experts in AI, but Reynolds-Tyler believed that people from the impacted community had an inherent understanding of the CPD data. Even if they didn’t come in with the language to describe a machine learning algorithm, they had lived experience that an outsourced data labeler would lack. Ultimately, Judy was able to pull out 54 allegations of police misconduct related to missing persons in that four-year window.
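Neither outlet has published Judy’s code, but the underlying technique, supervised text classification trained on human-labeled narratives, can be sketched in a few lines. Below is a minimal, hypothetical sketch in Python with scikit-learn; the file name and column names are illustrative assumptions, not the Invisible Institute’s actual pipeline.

```python
# A minimal sketch of the general technique: training a text classifier
# on human-labeled narratives. The file and column names are hypothetical;
# this is NOT the Invisible Institute's actual Judy pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Each row: the narrative text of a misconduct record, plus the label
# a volunteer assigned (1 = related to a missing-person case).
records = pd.read_csv("labeled_misconduct_narratives.csv")
X_train, X_test, y_train, y_test = train_test_split(
    records["narrative"],
    records["missing_person_label"],
    test_size=0.2,
    random_state=42,
    stratify=records["missing_person_label"],
)

# TF-IDF features feed a simple linear classifier, a common baseline
# for flagging relevant narratives in large document sets.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(X_train, y_train)

# Check precision and recall on held-out labels before trusting the
# model to flag records at scale.
print(classification_report(y_test, model.predict(X_test)))
```

The point of the held-out evaluation is that reporters still verify every flagged record by hand; the model only narrows thousands of files down to a reviewable stack.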
For their Pulitzer-winning investigation, those 54 cases became a roadmap of sorts for Reynolds-Tyler and Conway’s wider reporting. Themes in those cases validated the pain and neglect of families whose loved ones went missing in more recent years, proving those cases were not isolated incidents but part of a history of systemic failure by the CPD.
Reynolds-Tyler hopes other reporters who rely on machine learning tools understand the value of embedding in the community they are reporting on, and grounding their data work in real places and real people. “We must make it our business to bring people to the future with us,” Reynolds-Tyler said of AI adoption in investigative reporting. “They can help you look where you need to look, and they can help you in your search for understanding.”
Finding the pattern
In the international reporting category, a December 2023 report by The New York Times visual investigations desk was one of several recognized stories about the war in Gaza. The Pulitzer-winning team trained a tool to identify the craters left behind by 2,000-pound bombs, among the largest in Israel’s arsenal. The Times used the tool to review satellite imagery and confirm that hundreds of those bombs were dropped by the Israeli military in southern Gaza, particularly in areas that had been marked as safe for civilians.
“There are many AI tools that are fundamentally just powerful pattern-recognizers,” said Ishaan Jhaveri, a reporter on the team who specializes in computational reporting methods. He explained that if you need to comb through a mountain of material for an investigative project, you can train an AI algorithm to know what pattern you’re looking for. That might be the sound of someone’s voice in hours of audio recordings, a specific scenario described in a pile of OSHA violation reports or, in this case, the outline of a crater in aerial photos.
Jhaveri said the team decided an object detection algorithm was best suited for their investigation. They turned to a third-party platform called Picterra to train that algorithm. It allowed them to manually select craters in satellite images uploaded to the platform, slowly training Picterra to do the same automatically, at scale.
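Picterra handles this training loop through its own platform, so the Times team didn’t have to write it themselves. But to make the idea concrete, here is a rough sketch of the same workflow, fine-tuning an off-the-shelf object detector on hand-labeled examples, using the open-source Ultralytics YOLO library. The dataset config and directory names are hypothetical, and this is not the Times’ or Picterra’s actual code.

```python
# A rough sketch of the underlying technique: fine-tuning a pretrained
# object detector on hand-labeled examples. This uses the open-source
# Ultralytics YOLO library, not Picterra. "craters.yaml" (a dataset
# config pointing at annotated image tiles) and "gaza_tiles/" are
# hypothetical names.
from ultralytics import YOLO

# Start from pretrained weights; the hand-drawn crater boxes in the
# training set play the same role as manually selecting craters in
# Picterra -- they are the labels the detector learns from.
model = YOLO("yolov8n.pt")
model.train(data="craters.yaml", epochs=50, imgsz=640)

# Run the trained detector across new satellite image tiles at scale.
results = model.predict(source="gaza_tiles/", conf=0.5)
for r in results:
    # Each detected box is a candidate crater for reporters to verify.
    print(r.path, len(r.boxes))
```

Whatever the tooling, the human-drawn boxes are the crux: the detector only generalizes from the examples reporters label for it.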
One of the advantages of turning to Picterra was its sheer computing power. Satellite image files can easily surpass several hundred megabytes, or even a couple of gigabytes, according to Jhaveri. “Any local development work on satellite images would naturally be clunky and time-consuming,” he said, suggesting many newsrooms simply don’t have the infrastructure. A platform like Picterra handles that processing for you.
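To give a sense of the problem: a common way to work locally with a raster too large for memory is to read and process it in fixed-size windows, for example with the open-source rasterio library. The sketch below is illustrative only, and the filename is hypothetical.

```python
# Why local satellite-image work gets clunky: a multi-gigabyte GeoTIFF
# can't be loaded into memory at once, so you read and process it
# window by window. The filename is hypothetical.
import rasterio
from rasterio.windows import Window

TILE = 1024  # pixels per side of each processing window

with rasterio.open("satellite_mosaic.tif") as src:
    for row in range(0, src.height, TILE):
        for col in range(0, src.width, TILE):
            window = Window(
                col, row,
                min(TILE, src.width - col),
                min(TILE, src.height - row),
            )
            tile = src.read(window=window)  # shape: (bands, h, w)
            # ...run detection on this tile, keeping the window offsets
            # so results can be mapped back to geographic coordinates.
```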
After weeding out false positives (e.g., shadows, bodies of water), the visual investigations team ultimately found that, by November 17, 2023, there were over 200 craters consistent with this type of bomb in southern Gaza — posing “a pervasive threat to civilians seeking safety across South Gaza,” the Times said in its investigation, noting, “It’s likely that more of these bombs were used than what was captured in our reporting.”
“We didn’t use AI to replace what would’ve otherwise been done manually. We used AI precisely because it was the type of task that would’ve taken so long to do manually that [it would distract from] other investigative work,” Jhaveri said. As he put it, AI can help investigative reporters find the needle in the haystack.