
One-third of bosses shelve generative AI plans after Glasgow Wonka scandal


One-third of UK business leaders have abandoned plans to use generative AI following the Glasgow Willy Wonka Experience scandal.

The catastrophic chocolate-themed event made headlines around the world after organisers used AI tools to create elaborate and misleading images to encourage ticket sales, as well as garbled AI scripts for actors appearing at the show.

The findings were revealed in a detailed survey commissioned by the Parliament Street think tank into the impact AI is having on consumer trust.

The study, conducted by independent polling agency Censuswide, quizzed 500 UK CEOs about their use of AI, including security concerns and plans for its implementation across their organisations.

Just over half of those polled (51%) said they were planning to introduce a formal code of conduct for AI use to ensure that staff use the technology responsibly. A further 50% planned to send staff on an AI awareness course covering the technology's risks.

A quarter of bosses said they had disciplined a member of staff for AI misuse this year, and the same proportion said they had formally banned AI use in the workplace.

When asked about their concerns, one-third said they feared the technology could be used to mislead customers, while a further third said the lack of AI regulation left them worried about data privacy threats.

Simon Ward, CEO of digital specialists Inspired Thinking Group, said: “The Wonka incident should serve as a wake-up call about the risks of using AI recklessly, leaving customers with a bitter aftertaste. That’s why it’s vital that organisations offer high-quality training for staff as well as putting safeguards in place to ensure the technology is being used to give an accurate representation instead of creating a world of pure imagination.”

Derek Mackenzie, CEO of tech recruitment provider Investigo, said: “AI is a very powerful technology, but bosses shouldn’t be afraid of using it. In many cases, problems occur because organisations struggle to understand how best to use the technology, as well as lacking the skillsets to deploy it correctly. That’s why it’s crucial to have a clear code of conduct in place as well as equipping workers with the skills they need to operate AI tools responsibly.”
