
Google’s new ‘AI Overview’ feature shows the tech’s weaknesses, BU expert says


Earlier this month, Google launched its “AI Overview” feature, which uses artificial intelligence to summarize search results into short paragraphs. Some of its early answers were not just wrong but bizarre, offering advice that sounds like a TikTok challenge, such as putting glue on pizza or eating rocks.

In a blog post Thursday, Head of Google Search Liz Reid said that users are more satisfied with their search results since the feature’s launch. But she also said the company would be scaling back how frequently “AI Overview” pops up to provide an answer.

GBH’s All Things Considered host Arun Rath discussed the rise of generative AI with Brian Kulis, a machine learning researcher and professor at Boston University. What follows is a lightly edited transcript.

Arun Rath: So first off, tell us about how quickly this technology has exploded. We’ve been hearing about it the last couple of years, and now I feel like I’m seeing these search engine options all over the place.

Brian Kulis: Well, machine learning has been around for a long time, but recently there’s been this exponential increase in the power of these models and the amount of data that people are using to train the models. So really within the last — I don’t know — five, ten years, we’ve seen a tremendous amount of progress in the area of generative AI.

“I suspect that in some ways, we’ll just have to get used to the fact that these models are sometimes wrong.”

Brian Kulis, Boston University machine learning researcher

Rath: And let’s talk about what’s happened with Google in particular. It’s not the first example of generative AI not being quite there yet, but it feels like a bigger deal with Google. Some people are pointing out that you expect a certain amount of reliability in what you get from Google, and having Google feed out bizarre or wrong results is not good for the brand.

Kulis: Right. I mean, you said exactly what I think a lot of people are thinking, which is that Google is this company that provides information to people. They expect it to be correct. Perhaps other companies, such as OpenAI, have a little bit more leeway in terms of getting things not quite right. I think — and this is somewhat speculative — that Google is perhaps rushing things to market a little bit here in the hopes of competing with OpenAI. So what we’re seeing is maybe a feature or a product that doesn’t quite have all the bugs worked out.

Rath: It seems like it is improving a bit. Just before we started talking, I was trying to get it to say bizarre things, and I wasn’t having any luck. How serious are the problems, and how quickly do you think they can be solved?

Kulis: Well, I think there’s a fundamental issue which is difficult to solve entirely, which is that these models, trained on data that humans have generated, are never going to be 100% accurate. So when you do things like putting safety filters in or trying to remove erroneous content, that’s never going to be a perfect system. You’re never going to be able to get all of those things out of there. So there’s going to be a lot of iterative refinement that goes on.

Over time, while these things are slowly improved, I don’t know if a system like this will ever be able to be 100% accurate. I suspect that in some ways, we’ll just have to get used to the fact that these models are sometimes wrong.

Rath: Is that problem baked into this system, or is there maybe a different approach to AI that could get around those problems?

Kulis: It’s baked into the current systems — because of the way the current systems are trained: they take data from the internet, and given, say, some piece of text from a webpage, they try to predict the next bit of text.

So if you’re giving it correct information and it’s predicting the correct next thing in the text, that’s fine. But if it’s getting information from sources that may not be correct, say Reddit or some sort of fake news or something that gets put in there, it’s going to learn that incorrect information.

So unless we can come up with a way to eliminate that kind of training data from these systems, it’s going to be hard to remove that entirely.
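To make that mechanism concrete, here is a toy sketch in Python (an editorial illustration, not Google’s actual system): a simple next-word predictor that counts which word follows which in its training text. Whatever appears in that text, accurate or not, is exactly what it learns to reproduce.

```python
# Toy sketch (not Google's system): a bigram next-word predictor that simply
# learns whatever word-to-word patterns appear in its training text.
from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, which words tend to follow it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` seen during training."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# If the training text contains a false claim (like a satirical post),
# the model reproduces it:
corpus = [
    "geologists recommend eating one small rock per day",   # bad source
    "doctors recommend eating vegetables with every meal",  # good source
]
model = train(corpus)
print(predict_next(model, "recommend"))  # -> "eating" (seen in both sources)
print(predict_next(model, "small"))      # -> "rock" (learned from the bad source)
```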

Rath: Do you have a sense of how likely it is we might see regulation of this technology — here or, say, in Europe, where they’re more aggressive about internet regulation?

Kulis: Right. I think you’re going to see regulation happening, and I’m not exactly sure how that’s going to look. But whenever you have these systems used for applications that are critical — say, in health care and things like that — you’re going to need to have some guarantees, or there could be some serious consequences.

It’s the same thing with self-driving cars, right? There has to be regulation of self-driving cars, or there are going to be potentially life-threatening issues down the line.

But I would say that, whatever we do, we should be consulting with the people who are building these systems — because they’re often the ones who understand best how they work, and they’re often the ones who know what kinds of errors creep into them. Whenever you design regulations, it’s good to understand exactly what these systems are actually doing.




