
How AI became generative



WHAT MADE AI models generative? In 2022, it seemed as though the much-anticipated AI revolution had finally arrived. Large language models swept the globe, and deepfakes were becoming ever more pervasive. Underneath it all were old algorithms that had been taught some new tricks. Suddenly, artificial intelligence seemed capable of creativity. Generative AI had arrived, promising to transform…everything.

This is the final episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that led from the very first, clunky artificial neurons to the astonishingly powerful large language models that power apps such as ChatGPT?

Host: Alok Jha, The Economist’s science and technology editor. Contributors: Lindsay Bartholomew of the MIT Museum; Yoshua Bengio of the University of Montréal; Fei-Fei Li of Stanford University; Robert Ajemian and Greta Tuckute of MIT; Kyle Mahowald of the University of Texas at Austin; Daniel Glaser of London’s Institute of Philosophy; Abby Bertics, The Economist’s science correspondent. Runtime: 49 mins

Listen on: Apple Podcasts | Spotify

On Thursday April 4th, we’re hosting a live event where we’ll answer as many of your questions on AI as possible, following this Babbage series. If you’re a subscriber, you can submit your question and find out more at economist.com/aievent.

Podcast transcripts are available upon request at [email protected]. We are committed to improving accessibility even further and are exploring new ways to expand our podcast-transcript offering.


