
Three Lessons Learned From The Second AI Winter


Looking back at the history of artificial intelligence over the last 50 years or so, one thing the record teaches us is that the field has gone through distinct boom and bust cycles of growth and focus.

Take the second ‘AI winter’ that happened roughly between 1987 and 1994.

These years saw slackened interest in the technology, and far less of the innovation that we now see with rapidly advancing artificial intelligence and machine learning programs.

But why did the AI winter happen?

In conferences and panels, in research groups and in our classes, as we look at that history, we see some reasons why things unfolded the way they did – and some ideas to guide us as we continue…

Growth Requires Buy-In

Essentially, in the second AI winter, people stopped funding projects because they feared the investment simply wouldn’t be there.

That’s kind of a vicious cycle – if enough people are saying that the money is not going to be there, the end result is, often, that the money doesn’t end up being there.

Some cite John McCarthy, an early proponent of AI, as having soured on the field’s market potential during those years.

“McCarthy criticized expert systems (AI designs of the day) because they lacked common sense and knowledge about their own limitations,” writes Sebastian Schuchmann at Towards Data Science.

In any case, there’s broad consensus that the AI winter happened. AI Newsletter describes it this way:

“The term itself, coined in the late ’80s, served as a cautionary tale about the cyclical nature of technological advancement and disappointment in the realm of artificial intelligence. The ‘Winter’ metaphorically froze the ambitions and progress of AI, leading to the extinction of AI companies and a considerable downturn in research and investment.”

Eventually, AI started to pick up again when the technology had matured to the point where people could see its results more clearly. We’ll talk about that a little more later.

Prove Utility

You can also see that during those years, from ‘87 to ‘94, there were a few smaller inventions that didn’t seem as groundbreaking as what came before. For instance, the backgammon player unveiled in 1992 by IBM didn’t seem to pack the same punch as Arthur Samuel’s checkers player (which we covered in a different post) that wowed the crowds in the ‘50s. Likewise, the Jabberwacky “amusing chatbot” from Rollo Carpenter has largely ended up in the dustbin of history, not entirely due to its limited scope, but also because of its timing.

On the flip side, you could say that during the second AI winter, another type of technology was showing its value everywhere.

The Internet was taking off: you have Tim Berners-Lee proposing a system of hyperlinks and hypertext as early as 1989, and the first World Wide Web browser created in 1990.

That starts to explain another part of why AI wasn’t on the front burner at the time – people were getting blown away by the communicative power of the Internet, and that’s where a lot of the money went.

You could tie this into the dot-com bubble that burst around the turn of the new millennium, but we also have to consider what was happening as the big data era took shape right around that time.

Neural Networks Are Different Animals

If there is a fundamental lesson we’ve learned about emerging AI technology, it’s that the big data era changed the game and ushered in the potential of new artificial intelligence systems based on radically different engines…

Most of us didn’t start hearing about neural networks until a few years after the world celebrated the year 2000.

Prior to that, we were hearing about how companies were harvesting massive amounts of data, combing through it with algorithms, and gleaning insights from the results.

But a few more years down the road, you started to hear about weighted inputs, activation functions, and the hidden layers of neural network models.
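To make those terms a little more concrete, here’s a minimal sketch in Python (using NumPy, with randomly initialized stand-in weights rather than anything trained on real data) of how a single hidden layer turns weighted inputs into an output through an activation function:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Activation function: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# One illustrative input example with three feature values
x = np.array([0.5, -1.2, 3.0])

# Hidden layer: four hidden units, each with its own weights and bias
W_hidden = rng.normal(scale=0.1, size=(4, 3))  # stand-in weights; training would tune these
b_hidden = np.zeros(4)

# Weighted inputs plus bias, passed through the activation -> hidden activations
hidden = sigmoid(W_hidden @ x + b_hidden)

# Output layer combines the hidden activations into a single prediction
w_out = rng.normal(scale=0.1, size=4)
prediction = sigmoid(w_out @ hidden)

print(prediction)  # a number between 0 and 1; learning means adjusting the weights above
```

At a toy scale, that is all “weighted inputs, activation functions, and hidden layers” really means; the leap comes from fitting millions of those weights to data.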

All of it started to be actively applied to all sorts of technologies, and then we got ChatGPT and DALL-E and everything else – in short order!

What we learned is that prior to the 21st century, we hadn’t yet matured the technology that would truly power intelligent computer systems in our world.

In other words, during the second AI winter, part of the criticism was that the technologies themselves just weren’t sophisticated enough. They could do extremely complicated rule-following, but they were still deterministic. They didn’t really train on data the way that neural network systems do.
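To illustrate the difference in spirit (a deliberately simplified sketch, not a reconstruction of any actual 1980s expert system), compare a hand-written, deterministic rule with a single parameter that gets fitted from data:

```python
# Expert-system style: the "knowledge" is a fixed rule written by a human.
def loan_rule(income, debt):
    # Deterministic and hand-coded (the thresholds here are purely hypothetical)
    return "approve" if income > 50_000 and debt < 10_000 else "deny"

# Neural-network style: the "knowledge" is a weight adjusted by looking at data.
def fit_weight(examples, steps=1000, lr=0.01):
    # examples is a list of (x, target) pairs; we learn w so that w * x is close to target
    w = 0.0
    for _ in range(steps):
        for x, target in examples:
            error = (w * x) - target
            w -= lr * error * x  # nudge the weight in the direction that reduces the error
    return w

print(loan_rule(60_000, 5_000))                         # "approve", because the rule says so
print(round(fit_weight([(1, 2), (2, 4), (3, 6)]), 2))   # ~2.0, learned from the examples
```

The rule does exactly what it was told and nothing more; the fitted weight comes from the data itself, which is the shift the big data era and modern neural networks made practical at scale.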

In fact, you could also argue that building public awareness of this technology has been a real task in its own right, because so many people still don’t understand what it’s supposed to do or how it does it!

But you could say that the takeaway here is that part of the blame for the second AI winter lies in the simple reality that we had not yet pioneered the systems that would make these new kinds of AI feasible. We were really just playing around the edges…

Stay tuned for more, including news around our big conference planned for late April.
