Newsweek is making generative AI a fixture in its newsroom
If you scroll down to the end of almost any article on Newsweek.com right now — past the headline, the article copy, several programmatic ads, and the author bio — you’ll find a short note. “To read how Newsweek uses AI, click here,” reads the text box. The link leads to Newsweek’s editorial standards page, where several paragraphs now outline how generative AI tools are being folded into the publication’s editorial process.
The disclosure is just one signal of a larger experiment with AI-assisted editorial work happening right now at the 90-year-old brand.
Newsweek first announced changes to its AI policy in September 2023, just as heated debates over early AI adoption in journalism began to boil over. Sports Illustrated and Gizmodo were among several publications criticized late last year for their shoddy use of generative AI tools to write articles. Other publications, like Wired, responded by largely denouncing tools like ChatGPT for editorial work, promising never to publish text written or edited by AI.
Newsweek, meanwhile, has joined competitors like Business Insider in taking a relatively bullish view on the technology. “Newsweek believes that AI tools can help journalists work faster, smarter and more creatively,” reads the updated standards page. “We firmly believe that soon all journalists will be working with AI in some form and we want our newsroom to embrace these technologies as quickly as is possible in an ethical way.”
Six months into this new policy, staff writers and editors have not been required to use AI, but they are being encouraged to experiment with it to boost speed and efficiency. Newsweek has also rolled out a custom-built AI video production tool and is currently on a hiring spree for a new AI-focused Live News desk to cover breaking stories.
“I think that the difference between newsrooms that embrace AI and newsrooms that shun AI is really going to prove itself over the next several months and years,” Jennifer H. Cunningham, the new executive editor of Newsweek, told me. Cunningham joined the publication in March, following her run as editor-in-chief of Insider’s news division. “We have really embraced AI as an opportunity, and not some sort of bogeyman that’s lurking in the newsroom.”
Newsweek has been working to rebuild its editorial credibility following a string of legal issues and allegations of unethical journalism in recent years. In early 2018, Newsweek’s offices were raided by the Manhattan district attorney as part of an investigation into its former owner, Etienne Uzac. (Uzac later pleaded guilty to fraud and money-laundering charges involving Newsweek’s parent company at the time, IBT Media, and at the end of 2018, Newsweek broke off from IBT Media to form an independent company.) More recently, Newsweek has been criticized for platforming far-right figures on its opinion vertical and podcasts.
Against this backdrop, Newsweek’s CEO Dev Pragad has continued to grow the publication’s digital presence and shift its revenue streams away from its marquee print magazine. So far in 2024, Newsweek.com has seen more than 130 million total sessions every month and over 50 million monthly unique visitors, according to the publication’s advertiser disclosure figures and media kit. The site has a clear focus on quick-turnaround news, aggregation, and op-eds. In one 2019 editorial staff email, editor-in-chief Nancy Cooper summarized the current Newsweek story strategy succinctly: “We don’t want fewer stories or slower stories, just to make every story we do better.”
In service of more stories, and faster stories, Newsweek’s AI policy addendum gives staffers the green light to use generative AI in “writing, research, editing, and other core journalism functions,” as long as journalists are involved in each step of the process. The original version of Newsweek’s policy stated that using AI for “core journalism functions” required touch points with “three or more journalists” during production. A review of the page on the Wayback Machine shows that line was removed sometime after March 2.
There are fewer restrictions on reporters who use AI for other tasks, according to the standards page, including “note taking, transcription, video script writing, writing social copy, A/B testing headlines, adding metadata or selecting images.”
“We would really like reporters to be able to devote their workday to journalistic output and not the parts of journalism that are a little bit more process-oriented, kind of ticking the boxes, if you will,” said Cunningham.
Newsweek provided one example of a story published under this new AI policy. The short news piece by Taiwan-based staff writer Aadil Brar details Sri Lanka’s decision to ban a Chinese research vessel from docking in its port, and was researched with the help of ChatGPT Plus. Cunningham also pointed to legal reporting as a specific beat primed for generative AI support at Newsweek. She said she’d encourage a reporter to use a tool like ChatGPT to summarize “verbose or complex or lengthy court decisions” to help turn out news coverage.
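Newsweek hasn’t described how such a workflow is actually wired up. As a rough illustration of the court-decision summarization Cunningham describes, here is a minimal sketch against OpenAI’s chat completions REST endpoint; the prompt wording, model name, and word limit are all assumptions, not Newsweek’s configuration:

```python
# Hypothetical sketch of an LLM-assisted court-decision summary.
# The prompt, model name, and word limit are illustrative guesses,
# not Newsweek's actual setup.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_summary_messages(decision_text: str, max_words: int = 200) -> list:
    """Build the chat messages asking the model to condense a court decision."""
    return [
        {"role": "system",
         "content": "You are a legal-news assistant. Summarize court decisions "
                    "plainly, noting the holding, the parties, and the vote."},
        {"role": "user",
         "content": f"Summarize this decision in under {max_words} words:\n\n"
                    f"{decision_text}"},
    ]

def summarize_decision(decision_text: str) -> str:
    """Send the decision to the model; a reporter still verifies the output."""
    payload = {"model": "gpt-4-turbo",
               "messages": build_summary_messages(decision_text)}
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Under Newsweek’s stated policy, anything the model returned would still pass through journalists at each step before publication.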
Despite “writing” being among the approved use cases, Cunningham claimed in our interview that no published Newsweek stories have been written with AI tools. On a follow-up call, Newsweek spokesperson Ben Billingsley clarified that text generators have been used as an assistive tool to draft stories in some cases. Billingsley said 5% of the stories on Newsweek.com use AI tools for drafting, and that the most accepted use case internally is for staffers “who speak English as a second language.”
Alongside standard professional subscriptions to the chatbots Microsoft Copilot and ChatGPT, and the image generator Dall-E, Newsweek is also building proprietary AI tools in-house, according to Newsweek’s new chief product officer Bharat Krish.
Krish — who previously served as chief technology officer at Time and co-founded the facial recognition startup Refine AI — said there is currently no permanent AI product strategy at Newsweek. Instead, he describes the approach as a set of experiments.
“If I tell you something that’s going to happen tomorrow, a week later, that technology could have evolved,” Krish said. “It’s moving so fast. We’re trying to keep up with it.”
One of those experiments is already live. Over the last several months, Newsweek’s engineering team launched a custom tool that produces short-form video summaries of existing news articles. The internal tool first creates a video script summarizing the news story, then finds and pulls relevant stock images and video clips, stitching them together into one cohesive video. Krish says the proprietary technology was built using a combination of open-source tools and GPT-4 Turbo, the large multimodal model OpenAI announced in November 2023.
Although much of the process is automated, the tool still requires a human video editor to review the transcript, manually select additional images, and tweak the final edit.
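Newsweek hasn’t released the tool, so its internals are unknown. As a guess at its shape, the pipeline described above — script, stock-asset search, stitching, human review — could be orchestrated like this, with the LLM call, asset search, and rendering stubbed out as pluggable functions; every name here is invented for illustration:

```python
# Speculative outline of an article-to-video pipeline like the one Newsweek
# describes. All function and field names are invented; the real tool's
# internals have not been published.
from dataclasses import dataclass

@dataclass
class Scene:
    line: str            # one sentence of narration
    keywords: list       # search terms for stock footage
    asset_url: str = ""  # filled in by the stock-asset search step

def script_to_scenes(script: str) -> list:
    """Split a narration script into scenes, one per sentence, with
    crude keyword extraction (longest words as stand-in search terms)."""
    scenes = []
    for sentence in filter(None, (s.strip() for s in script.split("."))):
        words = sorted(sentence.split(), key=len, reverse=True)
        scenes.append(Scene(line=sentence,
                            keywords=[w.lower() for w in words[:3]]))
    return scenes

def attach_stock_assets(scenes: list, search) -> list:
    """Pull a stock image or clip for each scene via a search function."""
    for scene in scenes:
        scene.asset_url = search(scene.keywords)
    return scenes

def build_video(article_text, summarize, search, render, human_review):
    """End-to-end: summarize -> scenes -> assets -> render -> human edit."""
    script = summarize(article_text)  # e.g., an LLM call
    scenes = attach_stock_assets(script_to_scenes(script), search)
    draft = render(scenes)            # e.g., stitch clips into one video
    return human_review(draft)        # editor reviews transcript and edit
```

The human-review step is modeled as an explicit final stage rather than an afterthought, matching Newsweek’s account that an editor checks the transcript and final cut before anything is published.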
The clips, often about 30 to 90 seconds long, are featured at the top of many news stories on the site. As an example, a Newsweek spokesperson shared the video from this recent story on April 9, which summarizes the findings of a research study on globalization out of the University of Chicago. The summary text is overlaid onto stock videos of crowds from different parts of the world.
Krish describes the tool as a way to get further mileage with stories, repackaging reporting that already exists on the site. “We have a small multimedia team. They cannot handle the volume of requests from all this content that needs video, and so this tool helps augment,” he said.
Building an AI-powered breaking news desk
Newsweek’s jobs board currently includes a host of open positions across editorial, audience, and sales. Some job listings are for reporters and editors who will be mandated to integrate AI into their workflow as part of a new Live News desk that prioritizes speed and efficiency.
“This team will leverage the opportunities presented by advancements in artificial intelligence and other tools to produce journalism to a high standard,” a job description for Reporter (Live News) reads.
Reporters on the new Live News team will be responsible for “producing multiple stories a day across several beats and topics” — which may include U.S. politics, crime, celebrities, reality TV, and international current affairs — using AI tools, according to the listing. In an interview, Cunningham emphasized that AI adoption on the Live News team, whether for research or editing, will be in service of “quick turn types of reporting.”
Job postings for Live News writers and editors are based in Newsweek’s New York and London offices and make explicit that a “working knowledge” of large language models (LLMs) and image generators is among the criteria for hiring. Alongside more run-of-the-mill editorial and management responsibilities, the lead editor for the Live News desk is also expected to “have a comprehensive understanding of media law.”
In any workplace, a wave of AI adoption often goes hand-in-hand with anxieties about job displacement. Cunningham claimed, however, that existing Newsweek roles will not be affected by these AI initiatives.
“Listen, I totally understand and commiserate with reporters, possibly feeling anxious, especially because of the headwinds generally — that have nothing to do with AI — that have impacted newsrooms around the country. In the last six months, a lot of shops have closed, there have been layoffs,” she said. Cunningham noted that unlike many news publications, Newsweek is expanding. “At a time when many other news entities are shuttering, or are freezing hiring — we’re hiring.”
When AI tools go wrong
The Live News editor, in addition to staying up-to-date on media law, will be expected to ensure all stories published from the desk are “accurate and fair.” It’s not a straightforward task. Many AI chatbots and text generators — including ChatGPT, Microsoft Copilot, and Google’s Gemini — have been widely criticized for factual inaccuracies.
Cunningham says the solution to these shortcomings is transparency. “We want to make sure that the reader that we’re serving is very clear about the use of this technology in the content he or she is viewing or reading. It’s really important. I think also ethically, it’s important,” she said.
That said, readers likely won’t be able to discern whether an error in a Newsweek story originated from a chatbot or another AI tool. Currently, Newsweek only plans to include the general disclaimer of its AI editorial policies on article pages. The publication is not marking whether individual stories were produced with assistive AI tools or disclosing in corrections whether an AI tool contributed to the error.
Newsweek’s corrections page lists every story on the website that required a correction of a factual error or another significant update, organized by month. In September, when Newsweek first changed its generative AI policy, corrections across the site totaled 21. The monthly total has increased by a handful each month since; in March, for example, Newsweek issued 54 corrections across its site. Under the current policy, it’s not possible for readers to know whether AI tools contributed to any of the errors in these stories.
When asked how Newsweek has handled corrections due to AI-assisted stories, Cunningham maintained there have been no errors introduced into a published Newsweek story by generative AI tools.
“We will continue to be transparent, and take accountability for any errors that occur, whether they’re human error or AI error, but fortunately, that hasn’t been something that we’ve had to deal with yet,” she said. “I think it’s clear to the reader that we’re utilizing AI and that we’re being open and honest about our use of AI.”
For now, Newsweek is describing its AI strategy as flexible. “We’re in the infancy of this,” said Krish, who shared that his engineering teams are working towards releasing other AI-backed editorial projects. “Where the future goes, we don’t know, but we’re at least building up the skill set in-house.”