Generative AI is maturing in state government, officials say


Six months ago, many states were only beginning to organize task forces and craft early policies around new generative artificial intelligence tools, but officials this month told StateScoop that speculation about AI is more frequently turning into real projects.

Alaska Chief Information Officer Bill Smith said in a recent interview that his office is now spending more time thinking about the more immediate effects AI could have on state government.

“It’s starting to move into a much more tactical action plan phase where we’re actually putting live AI tools out there, whether it’s proof of concept or they’re moving into production,” he said. “It’s no longer just more ideation. It’s ideating, but also executing on some of those use cases.”

Connecticut CIO Mark Raymond, who at a recent IT conference advocated for an optimistic stance on generative AI in state government, said Connecticut is pushing ahead on new AI projects. He estimated that AI’s harmful outputs, such as bias, can likely be circumvented.

“Lots of times people are afraid of AI because they are afraid of decisions being done at scale which they don’t understand. Today, people make biased decisions all the time,” he said. “One individual, that’s hard for them to do that at scale. With AI, we have the ability to not only watch what it’s doing, but to correct it. That part of it is not getting the kind of attention that I think the technology deserves.”

Though government officials are more frequently moving beyond policy discussions and starting new projects that use AI, most remain cautious. Most officials StateScoop interviewed advocated initially pursuing only “low risk” uses of AI, such as implementations that don’t have direct contact with the public and don’t automate decisions, though each state varies in its willingness to automate tasks.

Josiah Raiche, Vermont’s chief AI and data officer, said his state is beginning to use AI for user “intent detection,” to better understand the extent to which Vermont websites and other digital services are organized sensibly.

He said two characteristics Vermont uses to weigh the ethics of any new AI use case are whether the implementation’s negative effects would be “transient” and “reversible.”

“If we make a mistake and somebody spends a hundred dollars because of that mistake, refunding them does not actually fully reverse that because that might have impacted their ability to pay rent or something like that,” Raiche said. “So we’re very careful around anything that could be financial. The other thing is we’re not doing any automated denials. At this point, we’re interested in exploring use cases for automatic approvals.”

Though technology officials in some states said they’re too busy with major IT modernization projects to concern themselves with generative AI, most have said they’re interested in aligning AI with the business challenges presented by other state agencies. And in states where AI is entering play, policy discussions are maturing — what states will and won’t do with generative AI is becoming clearer.

“One of the shifts we’ve seen in AI is more of it becoming real and also people having a sense of what things it’s good at and what things it’s not good at,” Raiche said. “There’s still a lot of hype, there’s still a lot of vaporware, but we’re seeing some more real use cases and also just gaining experience with it at the state level.”

Raiche said he thinks more people now understand that generative AI often struggles with context and relevance. That shortcoming, he said, must be resolved before Vermont directly exposes the public to generative AI’s outputs.

“You can’t look at the quality of the output and have that as a proxy for the quality of the content,” he said. “Like, the grammar, the spelling, the formatting of the output can be beautiful, and that doesn’t actually tell you anything about how good the content itself is.”

Washington CIO Bill Kehoe said he’s looking at generative AI for language translation and personalizing digital services.

“When it first came out, it was like this cool thing but we really didn’t know how to use it,” he said. “And now I think we’re getting much more educated about the practicality of, hey, let’s really look at some use cases, but let’s also have a good foundation around security, privacy, governance in place.”

In general, state government’s tendency is to create policies before cautiously trying low-risk uses of any new technology, including generative AI. At least one city is not being so careful: New York City Mayor Eric Adams last month faced criticism from journalists and civil rights groups for not deactivating a chatbot that was providing the public with faulty information about tenant and worker rights, among other topics.

Kehoe said he wouldn’t directly expose the public to generative AI output until the tools have been thoroughly tested.

“We would really want to rein in the chatbot initially,” he said, “before we just unleashed it and really understand the types of questions that you could answer and prevent the hallucinations or giving bad information.”

Written by Colin Wood

Colin Wood is the editor in chief of StateScoop and EdScoop. He’s reported on government information technology policy for more than a decade, on topics including cybersecurity, IT governance and public safety.
