Generative AI

“We have to educate ourselves about AI and then report the hell out of it!”


  • We talk to Jane Barrett, Global Editor, Media News Strategy, Reuters, as part of our series of interviews with contributors to our forthcoming news report – Trusted Journalism in the Age of Generative AI 
  • Lead author and interviewer is Dr Alexandra Borchardt 
  • The EBU News Report 2024 will be available to download from June  

In what ways is generative AI a game-changer for journalism?

It’s a massive game-changer for all industries. It is an entirely new way for human beings to interact with data and information, and we are right at the beginning of the journey. Anyone who predicts today what it is going to look like in five years is probably a fool.

How could journalism benefit in the short term?

I see things in three buckets: reduce, augment, transform. First, how can we use gen AI to reduce our journalists’ workloads? What repetitive jobs can AI help us do? We have started experimenting with these quickly. We have prompted GPT-4 to help us do a first edit on a story, extract facts from statements, brainstorm headlines, and translate stories better. Replacing routine tasks at scale may take more development work, but already AI can speed us up and help us do more with the resources we have. The second opportunity is augmentation. For instance, we can take the reporting we have now and make it available to more people in the way that they want it. AI skills might help us re-version a story into social posts, a video script, a quick summary for busy readers, a translation. Or AI could augment our work by helping find stories in data dumps or write explainers from our archive.

Number three, transformation, sounds like digital transformation all over again. 

It really is. As an industry, we can learn lessons from the past waves of digital transformation and be ready to move more nimbly this time round. Internally, how might we re-think the value of each part of our workflow because of what AI can do – for us, our clients, and audiences? Externally, how will the whole information ecosystem change? Will audiences’ expectations and behaviour change again because of how AI shapes the rest of their lives? What does that mean for our business models?

At the same time, we have to tread very carefully, because as journalists we deal in facts and generative AI models are prone to hallucination. I liken today’s gen AI models to a Formula One car. However well you drive, you need to train to get behind the wheel of an F1 Ferrari and not crash. And you need a team of excellent technologists, and in the case of AI, data scientists, around you to get where you want to go safely. It’s not a silver bullet or a quick solution to our problems.

What is Reuters using generative AI for already?

I mentioned some of our experiments earlier. We are now building out some of those, testing and integrating them into our editorial tools. We are also training our staff on prompts and have built a prompt builder to help them do that. That has yielded some good successes, for instance in doing a first copy edit or summarising stories into background paragraphs.

Also, we have a tool in Reuters Connect which provides video transcription, translation, shot-listing and facial recognition. It carries a clear disclaimer that the work has been done by AI and makes our content easier for clients to use. 

Are you delighted or worried about generative AI for your company and in general? 

I’m generally excited but I’d be lying if I didn’t admit some concern. This is another huge disruption for the news industry after the explosion of the internet and mobile, search and social. How does it affect our business this time? I also worry about society, given the lack of trust in journalism and even facts. Just as AI can improve efficiency at Reuters, it can also allow bad actors to create convincing misinformation at scale, either misleading people or just confusing everybody as to what is true. 

What is the media’s role in this?

We have to educate ourselves about AI, and then report the hell out of it! That is the one tool we have that nobody else does – the power of reporting. Gen AI is going to be one of the seminal changes of our lives and we need to turn all our investigative and analytical power on to it to tell the story, hold these new AI powers to account, and inform people about how the tools work.

What kind of mindset and behaviour do you encourage in the newsroom and your company?

My big word is play. We have a private version of ChatGPT so it’s a safe playground. Come in and have a go. Do some training, see what is possible. Keep your mind open, share what you find. We have a great cohort of early adopters and others who are keen to get going. Of course, there is always fear about what change will mean for our jobs, but again, we just don’t know yet. The important thing is to get involved. As our CEO says: generative AI won’t take your job, but someone who knows how to use it will.

What is the biggest challenge in managing AI in your organization? 

The biggest challenge for me right now is prioritization. What do we take from the experimental phase into production? Our newsroom has come up with so many great ideas. But it takes a lot of work to take something from a basic prompt, test it, and integrate it into the workflow. Even more if you are fine-tuning a model or building more complex systems.

Have you made mistakes with AI strategy?

In any change you have to communicate, communicate, communicate. Particularly with something so new and powerful, we can’t speak to and listen to our teams too much. Getting that right is critical.

Do you have AI guidelines – and what’s special about them?

We have four basic guidelines: First, it’s a great opportunity for our journalists and journalism. Second, Reuters is always responsible for our output, whether or not gen AI was used in its production. Third, we will be transparent about where we’ve used gen AI. Finally, we will be increasingly sceptical given the rise of synthetic media. We said we will tweak the guidelines as new insights emerge. For instance, we are now fleshing out what human oversight of AI means in practice. 

Do you think journalism will develop from being a push activity, with news directed to the audience, to a pull activity, where people demand customized news that fits their needs?

I suspect so, for two reasons. First, it has always been a pull activity. Nobody reads newspapers from cover to cover. You choose what to read. Second, it has already changed with search. Search answers now are a long list of links, and the generative search experience feels like a natural next step. From how we watch TV to how we use audio, the whole world has become much more of a pull world.

Some of the dynamics are beyond the influence of the media industry. In which ways do you think AI should be regulated?

I find it useful to have cross-industry conversations. Gen AI is impacting every business: medicine, law, logistics, finance. Journalism is not exceptional. There are already regulations around data and privacy, copyright and the like, so it will be interesting to see how those develop in the world of AI to start with, alongside some of the newer conversations about responsible tech.

There is a lot of AI hype in the media industry right now. What is missing from current conversations?

I suspect we’re not looking hard enough at the transform bucket. The natural tendency is to want to solve today’s problems, and it is hard to imagine tomorrow. We need to get out of our bubbles and see what is possible. How do high school students or first-year university students use these tools and interact with information? How is that going to change things? We need to strike a good balance between solving today’s problems and preparing for tomorrow’s world.
 




