How generative AI can help local newspapers survive
Some people see AI as a threat to quality journalism. In reality, the opposite is true: if implemented in the right way, AI can help newspapers survive at a time when newsrooms are shrinking and resources are scarce, especially at smaller publications.
Some editors are starting to experiment with GPT, while others want little to do with the new technology; for them, it’s a threat rather than an opportunity. Many media companies have set up AI working groups, and some have already embraced AI and are integrating it into their business. A few are even developing their own AI programs.
It’s clear, then, that newspapers have different attitudes towards AI, and a lot of questions surrounding the technology have yet to be addressed. There is clearly a need for action, but what is the best way forward?
Given the speed at which AI technology is evolving, it is no wonder that editors feel under pressure to react to the new reality. The faster the technology advances and the slower publishers respond, the more vulnerable the industry becomes, not only to disruption but also to misinformation and deepfakes.
Not surprisingly, bigger media companies and publishers have generally led the way when it comes to devising AI strategies and, in some cases, already implementing them. That has made small and medium-sized publishers more vulnerable to the damage that AI could inflict on journalism. That’s a pity, given that AI has the potential to actually help media organisations.
Why AI could help local newspapers thrive – if they act now
That potential is greatest for smaller publishers and newspapers with limited resources and shrinking newsrooms. In fact, it is no exaggeration to say that, implemented in the right way, generative AI could end up saving regional and local journalism.
It is essential that media organisations focus on providing high-quality journalism while retaining full control over the data they use. This will be crucial in the fight against misinformation and against bias in the data AI models are trained on.
Technological development is moving at such a pace that hesitation on the publisher’s side becomes ever more costly: the longer publishers wait, the more room they leave for misinformation and deepfakes. Rigorous quality control of any AI-generated content, and full control over the data used, will only grow in importance.
One way to get there: publishers should implement their own customised large language models (LLMs) rather than relying on general-purpose models without any fine-tuning. Building a strategy solely on OpenAI and GPT will not be enough in the future.
With customised LLMs, especially ones built on an open-source foundation model, publishers can fine-tune the model to their specific needs, securing the high-quality output that journalism demands, while keeping full control and sovereignty over the data they use and their AI tools are trained on.
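To make that concrete, here is a minimal sketch of what fine-tuning an open-source foundation model on a publisher’s own archive could look like, using the Hugging Face transformers/peft/trl stack. The base model, the archive.jsonl file and the training settings are illustrative assumptions, not a recommendation of any particular setup.

```python
# Minimal sketch: fine-tuning an open-source base model on a newspaper's own
# archive with low-rank adapters (LoRA), so training stays on infrastructure
# the publisher controls. The file name and base model are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical archive: one JSON object per line with a "text" field
# containing a past article from the paper's own archive.
dataset = load_dataset("json", data_files="archive.jsonl", split="train")

# LoRA keeps compute and storage costs modest for a small newsroom.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # any open-source foundation model
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="newsroom-llm",
        dataset_text_field="text",
        num_train_epochs=1,
    ),
)
trainer.train()
```

The point of the open-source route is that both the model weights and the training data stay under the publisher’s roof; nothing from the archive has to be sent to a third-party API.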
Publishers should also be doing a few other things right now. AI has already had a significant impact on the news industry, and there is much more to come. Publishers need to stop merely discussing AI in working groups and experimenting with the technology; they need to start adapting to this radical transformation immediately.
There is no need for smaller publishers to build their own tech teams and invest heavily in AI, but they should at least begin learning how to incorporate AI into their own workflows. If they fail to react quickly enough, the technological adoption curve will only get steeper.
Publishers don’t just need to identify the practical benefits of AI; they must also learn first-hand what its limits are. In an era of unprecedented amounts of content, all publishers will at some point need to answer a fundamental question: what types of content should be produced by AI, and what should remain in human hands?
Another key challenge will be persuading editorial teams to embrace the new technology. One big misunderstanding needs clearing up: used correctly, AI can help editors rather than replace them. For example, it can take care of tedious, repetitive tasks, allowing journalists to spend more time generating stories.
In shrinking newsrooms, many editors are having to take on extra duties such as creating social media posts. This leaves less time to research, write, and even break stories, let alone embark on lengthy investigative projects.
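As an illustration of the kind of repetitive task that can be delegated, the sketch below drafts a social media post from a finished article using a locally hosted open-source model. The model choice, file path and prompt are assumptions for illustration, and a journalist would still review and approve the output before anything is published.

```python
# Illustrative sketch: drafting a social media post from a finished article
# with a locally hosted open-source model. The editor reviews the draft;
# nothing is published automatically.
from transformers import pipeline

# Hypothetical model choice; any locally hosted instruct model would do.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

with open("todays_story.txt") as f:  # hypothetical path to the finished article
    article = f.read()

prompt = (
    "Write a single social media post of under 280 characters "
    "summarising this local news story:\n\n" + article
)

draft = generator(prompt, max_new_tokens=80)[0]["generated_text"]
print(draft)  # handed to the journalist for editing and sign-off
```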
In short, generative AI, implemented in the right way, can free journalists up to focus again on what they do best and on what they became journalists to do, rather than getting bogged down in other tasks.