
5 things to know about artificial intelligence


We already live in a world full of artificial intelligence. It’s been a long time coming, but technological improvements in recent years have captured headlines and prompted many questions.

How will AI affect daily life?

Is this the 21st century’s Industrial Revolution?

In early 2023, the Washington Post claimed “the AI ‘gold rush’ is here.” Companies like Alphabet and Microsoft made job cuts so they could invest billions of dollars in generative artificial intelligence. This surge followed the release of ChatGPT, an AI-based chatbot system, in late 2022.

But it’s not all bad.

On March 6, Sam Ransbotham, a professor of analytics at Boston College’s Carroll School of Management and co-host of the podcast “Me, Myself, and AI,” gave a talk on the role of artificial intelligence at the Charles River Museum of Industry and Innovation, presented through GBH’s Forum Network.

Here are five takeaways.

Most people use AI already.

Ransbotham ran a study to assess the usage of AI in workplaces. Respondents initially denied using AI when asked, but changed their minds when they were presented with examples of AI-powered tools like GPS and spell check.

“That is part of the insidious nature of the technology revolution, is that it just gets pervasive pretty quickly,” Ransbotham said.

Using AI offers a competitive advantage.

“Other humans that are better at using artificial intelligence are going to take your job,” warned Ransbotham.

It’s not that “robots” are going to take your job, he said. But individuals who adapt to the rise in artificial intelligence technology will remain employed and perform better professionally than those who do not.

It’s a difference that scales up to the organizational level: companies that fail to use AI, Ransbotham said, risk widespread job loss and financial ruin.

Despite the hype, there could be downsides.

AI-powered tools could help improve skills based on real-time feedback (think chess), enhance classrooms with personalized learning and refine surgery techniques by providing practice simulations. But Ransbotham also stressed the possible drawbacks associated with the technology.

He used essays as an example: sometimes writing a bad essay, he argued, is an important step in learning to write a good one. Using AI-powered tools from the start eliminates that process.

“The point here is that we may be not just losing the ability to do a skill, we may be losing general skill abilities. I don’t think we know that yet, but I worry about that,” said Ransbotham.

There’s not much regulating these companies.

“We have very little control or knowledge about what’s happening behind the scenes in these tools. Who’s running them, whose objectives they’re after,” he said.

Ransbotham compared today’s generative AI companies to the unregulated meatpacking industry of the early 20th century, which spurred the creation of the Food and Drug Administration, and suggested that history could serve as a template for how the United States approaches regulating AI.

Tools are agnostic. Humans will decide how to use AI.

Ransbotham compared today’s workers who have just been given AI-powered tools to cavemen picking up rocks for the first time. Those rocks, he said, could be used as a weapon to attack other cavemen, or as a hammer to build with.

“We can use a tool in many different ways. People decide how we use tools,” said Ransbotham.

Though this era of widely available artificial intelligence is still young, the technology seems here to stay. What matters now is how governments and individuals decide to manage and leverage it going forward.

Watch Ransbotham’s full talk here:




