Texas legislature explores rules for artificial intelligence – NBC 5 Dallas-Fort Worth


Social media forever changed politics after its emergence in the mid-2000s. Artificial intelligence may have similarly far-reaching impacts. Researchers and lawmakers are now studying the newest AI, worried that the technology could run wild online without safety or ethical regulations.

NBC 5 traveled to Austin to understand its future impact and spoke with the new chair of the Texas House Select Committee on Artificial Intelligence about the coming months.

In short, the future is here, ready or not. Fake videos and pictures of Tom Cruise, former President Obama, former President Trump, and current President Biden already proliferate across the internet. Manipulated video and audio of these leaders may confuse voters by showing them saying things they never said.

“Without doing very much work at all, almost anyone can produce a piece of political propaganda that may be convincing to voters,” said Zelly Martin, a PhD researcher at the University of Texas’s Propaganda Research Lab.

Martin is one of around thirty people working with the lab in Austin collecting, analyzing, and publicizing examples of AI impacting elections.

Texas House Speaker Dade Phelan just created a select committee on artificial intelligence to propose changes in law when the legislative session begins in January 2025. That work will come after the 2024 presidential election, when AI may play a key role in the outcome.

Speaker Phelan appointed Rep. Giovanni Capriglione, R – Southlake, to chair the committee. Capriglione is also the co-chair of a new state advisory board on artificial intelligence with a fellow North Texas lawmaker, Senator Tan Parker, R – Flower Mound.

Rep. Capriglione tells NBC 5 that making a fake video of him, a Republican, saying he hates former President Donald Trump and is voting for President Joe Biden, then spreading it across the internet through algorithms, should be against the law.

“Anytime there’s a major technological advance there’s a risk that comes with it. Obviously, with elections and other things we’re worried about, deep fakes, changing people’s audios, and simply creating new tweets and such. It’s a potential risk not just to the candidate but also to the voters themselves,” said Capriglione.

The chair says the new committee will pitch punitive fines, criminal penalties, and guidelines to the entire legislature next year. Their first interim report is scheduled to come out in May.

“Whether it’s the social media companies or email providers, they need to know that there are things that they should not be allowed to transmit or distribute,” said Capriglione.

The researchers at UT Austin say this issue should get bipartisan support.

“The bottom line is no one wants to be manipulated, right? So that’s where we can find common ground,” said Dr. Inga Trauthig, the lab’s director.

Trauthig and Dr. Sam Woolley’s team are trying to avoid what happened twenty years ago with the emergence of social media, which was vastly understudied until its world-changing impacts were already reality.

“There was so much excitement about its democratic potential and not as much thought about how it may be used by authoritarian regimes, by people who were working to manipulate public opinion or stifle free speech,” said Woolley.

In short, the downsides. “Which are big,” said Woolley.

Their team is monitoring how major companies like OpenAI, Microsoft, and Meta roll out their technology, hoping to publicize bad actors and bring accountability.

“I think it matters personally for holding accountable the actors that are involved in this,” said Trauthig. “Just by sitting down and explaining how the information manipulation is happening, on which platform, with which tools. Just providing that information can be really helpful.”

The beginnings of that idea are already in the works. Last month, the United States Department of Commerce released a report calling on companies and local, state, and national lawmakers to “expose problems and potential risks, and to hold responsible entities to account.”

Federal Commerce Department staff hope to issue guidance on best practices, require people to disclose when they’re using AI, and preserve legal liability so people can file lawsuits against bad actors.

“You can’t steal people’s identities in this country. You can’t defame people. Those laws exist. In some ways, people think you need to reinvent the wheel and create these new laws. My view is we have to hold people accountable to the same laws we’ve always had, but just do it online,” said Woolley.

One recent example that worries Woolley and Trauthig runs through North Texas. The New Hampshire Attorney General named an Arlington man and company behind the AI-generated robocalls of President Biden falsely urging voters to stay home during that state’s primary election.

In a statement to NBC 5, the New Hampshire Attorney General’s office declined to comment because the investigation is ongoing.

“Some of this innovation, for better or worse, is happening in our state. So we need to think about what that means for both Texas and American democracy,” said Woolley.

The technology may also drastically improve how government operates. The Texas advisory council on AI will focus on how AI can be used to improve services in the state. In the council’s first meeting, the Texas Department of Transportation told members about a pilot program using AI to monitor traffic cameras and automatically send emergency crews when an accident is detected. The department has also reduced the time to generate invoices from weeks to seconds.

“I think all of us want this technology to succeed. It’s incredibly innovative. We want this to happen in Texas but at the same time we want to mitigate those risks,” said Capriglione.

The overall goal of the council and committee is to encourage positive use and create laws to punish bad actors.

“We have an opportunity now to look at that and begin making those rules now,” he said.
