Generative AI

Newsrooms Are Already Using AI, But Ethical Considerations Are Uneven, AP Finds


Almost three-quarters of people working in the news industry have used generative artificial intelligence (AI) in some professional capacity, according to a study by the Associated Press (AP). The most common use of AI was for the production of content, with nearly 70% of respondents saying they had used it for this purpose. The next most popular uses were information gathering, multimedia content production and business tasks. Despite its popularity, the AP found that people working in the news industry have significant concerns about generative AI and that their employers have implemented precautions unevenly.

The AP’s Generative AI in Journalism study surveyed professionals in the news industry about how they are using AI in their work and what the practical and ethical implications are for these technologies. The AP received 292 responses to its survey, which was conducted at the end of 2023, representing a range of roles in the industry.

The study found that news industry use of AI is common today, but many people aspire to use it more “if it were capable of producing quality results.” That phrase, taken from the survey itself, captures the question at the heart of the news industry’s mixed feelings about the technology.

Generative AI works by analyzing patterns in large data sets and learning to generate new data with the patterns it observes in its source materials. In simpler terms, it learns from content (like text, images or videos) that humans created and then it makes new content that looks almost human-made. The fundamental problem with this process is that it is opaque and its inputs unknowable.
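Modern models are vastly more complex, but the core idea described above — learn statistical patterns from human-written text, then produce new text from those patterns — can be illustrated with a toy example. The sketch below uses a simple Markov chain (not the neural networks behind products like ChatGPT; the corpus and function names are invented for illustration):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Learn which word tends to follow each pair of words in the source text."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=0):
    """Walk the learned word transitions to produce new, human-looking text."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break  # no pattern learned for this sequence; stop
        out.append(rng.choice(followers))
    return " ".join(out)

# A tiny "training set" of human-written text (invented for this example).
corpus = ("the reporter filed the story and the editor read the story "
          "and the editor cut the story before the reporter read the notes")
model = train(corpus)
print(generate(model))
```

Every word the generator emits comes from its human-written inputs, recombined according to observed patterns — which is also why the opacity problem arises at scale: once the source material is billions of documents, tracing any output back to its inputs becomes practically impossible.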

Some of the tasks that AI can perform for newsrooms offer extraordinary benefits with low risks to information accuracy, such as transcribing long conversations, summarizing data and writing headlines. Other tasks that newsrooms are delegating to AI raise serious concerns, however. Information gathering, for instance, is a known weakness of chatbots, which commonly “hallucinate” falsehoods and present them as fact. Ultimately, only humans can go to the source and observe real-world events. There is no digital replacement for boots-on-the-ground reporting.

In a white paper for the Tow Center for Digital Journalism at Columbia, Felix Simon suggested that this is a pivotal moment for journalism, but that AI need not be an existential threat if we are intentional about its implementation.

Simon wrote that AI “mostly constitutes a retooling of the news rather than a fundamental change in the needs and motives of news organizations. It does not impact the fundamental need to access and gather information, to process it into ‘news,’ to reach existing and new audiences, and to make money.”

The artificial intelligence we use today has been around since roughly 2018, but it didn’t become broadly known and accessible to the public until late 2022 with the release of ChatGPT, an interface from OpenAI that responds to prompts from users. Since then, Google, Meta, Microsoft and others have released their own generative AI products, and many more companies have begun finding ways to integrate the technology into existing products. From the beginning, generative AI has demonstrated unreliable accuracy and has posed challenges to sourcing and attribution.

The AP found that people working in the news industry were aware of these concerns, but their organizations took uneven approaches to addressing them.

According to Poynter’s Kelly McBride, that’s a problem.

“Every single newsroom needs to adopt an ethics policy to guide the use of generative artificial intelligence,” McBride wrote in Poynter. “Because the only way to create ethical standards in an unlicensed profession is to do it shop by shop.”

Most large news organizations have now established their own set of principles for using AI, but that only scratches the surface of news sources on the internet. Poynter now provides a template for news outlets that want to get started. Another resource is The Paris Charter, which was initiated by Reporters Without Borders and reflects the contributions of 32 media specialists from 20 different countries.

The pitfalls of moving too quickly and recklessly with AI aren’t hard to spot. Microsoft was one of the earliest adopters to hand the reins of news production from humans to this technology and, as CNN has reported, the results have been disastrous. With the help of AI, Microsoft has published false stories, posted offensive headlines and stoked election misinformation conspiracies.

As conflict roils several regions of the world today and dozens of consequential elections are scheduled to take place in the months ahead, the stakes for newsrooms will remain high.

“The disruptive power of artificial intelligence (AI) will sweep through the information space this year at a time of intense political and economic volatility around the world,” wrote Nic Newman for the Reuters Institute for the Study of Journalism annual trends report. “Embracing the best of AI while managing its risks will be the underlying narrative of the year ahead.”
