News stories created through artificial intelligence? We posted one and nobody noticed.
By: Mark Glennon*
For grins and a test, yesterday we posted an article initiated by artificial intelligence (AI) headlined, “New ordinance regulating vacant storefront across Chicago gets City Council approval.” Nothing is wrong with it. It’s just-the-facts and contains nothing judgmental. We checked it out ourselves, and it’s mostly a rewrite of the City of Chicago’s press release about the ordinance. The article is from an outfit we are not familiar with, Chicago Star, and it says it was edited by a human. Nobody appears to have noticed.
We call it out because it’s a small signpost of what’s likely to be AI’s impact on journalism, for better or worse.
I’ve been experimenting with a few AI tools, and their revolutionary potential is obvious. For sure, I would not today rely solely on an AI answer or on a news story written purely by AI. AI does, however, generate ideas and identify sources you may not see in a standard Google search. Search engines like Google are fundamentally limited by their nature as mere lists of links, and Google is known to rig its search results to promote political viewpoints it likes. AI adds context and interpretation, which is often interesting but not always right.
Humility, it appears, isn’t beyond AI. I asked Perplexity, one of many AI platforms, “Should news stories be written by artificial intelligence?” No, it said, in this answer, which is pretty darn good:
No, news stories should not be primarily written by artificial intelligence (AI) systems. While AI can assist journalists in certain tasks, relying on AI to autonomously generate news articles raises several significant concerns.

Accuracy and Fact-Checking: AI language models can hallucinate or generate factually incorrect information, especially on complex and rapidly evolving news topics. Without human oversight and rigorous fact-checking, AI-written news risks spreading misinformation and propaganda.

Lack of Journalistic Integrity: Journalism involves more than just compiling information – it requires ethical standards, investigative skills, critical thinking, and the ability to provide context and analysis. AI systems lack the judgment and principles that define quality journalism.

Source Protection: AI systems could inadvertently reveal confidential sources or sensitive information that human journalists carefully protect.

Bias and Manipulation: AI models can inherit and amplify biases present in their training data. There are also risks of AI systems being manipulated to generate misleading or one-sided narratives.

Copyright Issues: Most news organizations require human authorship to claim copyright over content, limiting the legal use of fully AI-generated articles.

Public Trust: There are already concerns about declining public trust in media. Widespread use of AI-written news could further erode credibility and public faith in journalism.

While AI can be a useful tool for research, data analysis, and potentially augmenting human journalists, the core tasks of reporting, writing, and editorial judgment should remain firmly in human hands to uphold journalistic ethics, accuracy, and public trust.
As an example of its limitations, I also asked it, “What does Chicago’s new ordinance say about empty storefronts?” Its answer was very accurate, but among its sources it cited that AI-generated article by Chicago Star. AI relying on other AI sources is obviously a problem.
Still, it’s easy to see how powerful AI could become. Struggling, short-staffed media outlets might well start publishing stories primarily written by AI, contrary to Perplexity’s advice. On the other hand, with AI seemingly improving each month, heavier reliance on it may become more defensible. It’s also easy to see how AI could replace traditional search engines, which undoubtedly are the primary means by which reporters find background on stories they cover.
Most importantly, AI will probably always be at risk of producing biased results, whether the bias is built in deliberately or not. Users of most AI tools can also elicit biased results through the questions they ask or the directions they give. Countless articles on those and other risks are easy to find, so I won’t try to summarize them here.
We won’t be linking to anything we know to be generated by AI unless we check it for accuracy ourselves, but brace for a big jump into a new era in journalism.
–Mark Glennon is founder of Wirepoints.