Madison artificial intelligence experts predict AI’s promise, threats


A new generation of artificial intelligence tools will bring tremendous promise, but humans must regulate the technology and realize its limitations, a panel of experts told Madison business leaders this week.

The discussion on Wednesday, June 12, organized by the Greater Madison Chamber of Commerce, featured local experts from business, government and academia who expressed both excitement and trepidation about the ways AI will continue to transform everything from the way we work to the way we vote.

Existing AI tools, “trained” on massive data sets, can use that data to respond to new situations or questions. Those tools are already being used, for example, to summarize lengthy documents, help nurses respond to patients’ medical questions, or help state workers figure out whose unemployment insurance claims should be approved.

Some people find that scary. Others find it promising. Whichever camp you’re in, one of the most important things to understand is that these tools aren’t actually thinking for themselves, said Jerry Zhu, computer science professor at the University of Wisconsin-Madison. 

“AI is not real intelligence, at the moment. It simulates that, and it’s very good at that. But … there are so many things online you can find that show how even the most powerful generative AI models seem to be crazy,” Zhu said, pointing to AI “hallucinations” and logical inconsistencies. 

The current tools don’t have common sense or common knowledge, and they can’t do logical reasoning, Zhu said. But he anticipates that within just five or 10 years, a “new generation” of “much more powerful” AI technology will.

“We will have to worry much less about hallucination and not knowing what it is saying,” Zhu said. “It’s going to be a much better tool … but it’s not going to replace us.”

Another thing many people misunderstand about AI is that the data used to build a tool can make or break it, said Taralinda Willis, general manager of public policy and issues management at FiscalNote, which uses AI to analyze proposed legislation. Willis also founded Curate, a company that uses AI to compile data on local government and has since been acquired by FiscalNote.

“(Just as) the calculator or computer is only as good as the person using it, AI is only as good as its training data set,” Willis said. “I think probably, in a lot of these conversations, what we’re not talking about is that training data set and how incredibly important that is.”

If that data set doesn’t represent the diversity of the population, or if it includes data shaped by human or institutional racism, the AI tool will produce biased results too. 

For example, if a user asks an AI image generator to create a picture of a “happy grocery shopper,” the person it shows is usually a woman, explained Christopher Mende, a Madison-based head of customer engineering at Google Public Sector. 

“Is that the kind of thing that we really need to be propagating out in the world?” Mende said. “So there’s unintentional or unconscious biases that can be really challenging to find, and we certainly want to find those, in addition to the conscious biases that can sometimes be found in tools.”

Mende said many of Google’s clients try to assess whether, in a given situation, a human would be harmed if something goes wrong with an AI tool, such as a person being improperly denied a loan or insurance because the AI miscalculated their risk.

“All of these solutions should be accountable to a human to make that final decision,” Mende said, explaining that this is why Google uses AI to help its human facilities managers decide how to adjust thermostats to cool its powerful data centers. The company doesn’t let AI run the system itself, he said, “because if you mess that one up and you end up baking a multibillion-dollar investment, that’s a pretty bad decision.”

Changing work, changing laws

Asked which workers would be most likely to be replaced by AI, Willis pointed to workers who do “common and repeatable tasks,” like copying information from invoices into bookkeeping software. 

Even fields like law, journalism, architecture and engineering could see major changes due to AI, Willis said. “All those things really need a person to look at it and sign off on it. But can AI pick up a big chunk of it? Probably.”

A common fear is that people will lose their jobs, said Rep. Samba Baldeh, a Democrat and former software engineer representing Madison in the Wisconsin Assembly. “The human mind is always afraid of change,” Baldeh said. “But change is inevitable. We have to deal with it all the time.”

But while AI will reduce or eliminate some jobs, it will create others, as more people will be needed to create and oversee these new technologies. 

“It’s just a question of really making sure that people who are impacted by AI are retooled to be productive in other areas,” Baldeh said, calling the process a sort of “human evolution.”

“It’s not going to be the end of the world,” Baldeh said.

As new tools emerge, lawmakers have sought to set new rules. Three hundred AI-related bills were introduced in state and federal legislatures across the U.S. in 2023 alone, Willis said. That doesn’t include legislation introduced by local governments. 

It’s easier to get policies passed in statehouses than in Congress, but that approach comes at a cost, Willis said. “The direction we’re heading is that each state makes their own rules and regulations on AI, and for large employers who operate in many states, such as Google, that patchwork of policy can be really challenging because (the rules are) all a little bit different.”

Willis predicts the election season leading up to the November presidential race will bombard voters with AI-created audio and video, potentially undermining Americans’ confidence in the voting system. Wisconsin is one of just 13 states that have enacted legislation restricting the use of deepfakes in political ads, she said. 

“In other areas of the country … people, for the first time, are going to be seeing deepfakes and think that they’re real,” Willis said. “I think that’s really a little bit scary.”

But AI technology is so novel it can be hard to even figure out how to regulate it, Zhu said. Laws are designed to hold people responsible if the tools they use or make cause harm, and with traditional software, that makes sense: you can examine the code a developer wrote, find what caused the problem, and fix it. With an AI tool controlled not by code but by its massive training data set, that’s nearly impossible.

“It really is very difficult to answer the question (of) why a certain decision is made, and it’s even harder to say how to fix that,” Zhu said. “So we’re in this awkward situation: I believe that we have a desire as a society to have protections through regulation, but … how to actually do it is not clear at all at the technical level. So that’s something we need to bridge.”
