Indiana’s New AI Policy Calls for Pre-Deployment Assessments

The state’s new risk assessments aim to strike a balance between harnessing the benefits of AI and managing data and ethical concerns. Meanwhile, Indiana’s first customer-facing AI tool is now in service.

A new artificial intelligence policy in Indiana will require most AI projects to pass a maturity assessment before they’re implemented.

The new policy, adopted Feb. 20, was created by the Office of the Indiana Chief Data Officer and applies to most AI initiatives; the one exception is employees’ as-needed use of web-based generative AI applications such as ChatGPT.

Indiana’s Chief Privacy Officer Ted Cotterill said the goal of the policy is to create a framework that encourages agencies to harness AI to improve their work in responsible and ethical ways.

“We don’t want to hide behind the risks and say, ‘We’re not going to innovate.’ We consider ourselves to be a pretty innovative state and innovative in administration,” Cotterill said.

Cotterill, who also serves as general counsel for Indiana’s Management Performance Hub, said AI holds plenty of data-driven potential, including automation, efficiency and personalized customer experiences. Creating the policy was a way to make sure agencies can explore the technology while mitigating data privacy, intellectual property, ethical and cybersecurity risks.

The pre-deployment assessment is based on the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework and includes 72 subcategories.

“Ideally our development teams — whether in-house, vendor-based, whoever it may be — are planning, designing and developing to the NIST framework, knowing that pre-deployment, we’re going to want to conduct this maturity assessment and then go from there. We hope the ultimate result is the adoption of this human-centric AI-enabled IT as opposed to the wild, wild West,” Cotterill said. “We want to do it the right way.”

The policy also requires, in accordance with the Indiana Fair Information Practices Act, that a “just-in-time” notice be sent to users interacting with AI systems, informing them how AI is using their data.

Well before the new policy was released, Cotterill worked with one state agency to create an AI-enhanced tool that provides a “just-in-time” notice, and that tool is already in service.

The Department of Workforce Development (DWD) uses Uplink, a self-service online filing system for unemployment claimants. A popup informs users about the new AI-enhanced component, which provides career recommendations based on their data, and lets them opt in or out.
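
In code, that opt-in requirement amounts to a consent gate that runs before any AI processing. Here is a minimal sketch in Python, assuming a per-claimant consent flag; the field names, notice text and error handling are invented for illustration and are not Uplink’s actual implementation:

```python
# Hypothetical sketch of a "just-in-time" notice gate: the AI feature
# touches a claimant's data only after an explicit opt-in. All names
# and the notice text are invented, not details of the Uplink system.
from dataclasses import dataclass
from typing import Callable, List, Optional

NOTICE = ("This tool uses AI to suggest careers based on your workforce "
          "and education data. Opt in to continue, or opt out at any time.")


@dataclass
class Claimant:
    job_history: str
    ai_opt_in: Optional[bool] = None  # None until the notice has been answered


def career_suggestions(claimant: Claimant,
                       recommend: Callable[[str], List[str]]) -> List[str]:
    if claimant.ai_opt_in is None:
        # Just-in-time notice: surface it before any AI processing occurs.
        raise PermissionError("Show notice and record a choice first: " + NOTICE)
    if not claimant.ai_opt_in:
        return []  # opted out: the claimant's data never reaches the model
    return recommend(claimant.job_history)


# Example: an opted-in user gets recommendations from whatever model backs the tool.
user = Claimant(job_history="forklift operator, warehouse associate", ai_opt_in=True)
print(career_suggestions(user, lambda history: ["logistics coordinator"]))
```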

“We worked with DWD and their vendor to train a model on workforce data and education data, and then separated the model from the underlying data and handed the model itself to the DWD for use in its app,” Cotterill said, attributing the tool’s success to a strong collaboration with DWD’s Chief of Staff Josh Richardson and CIO Chris Henderson. “We’ve been working on this for some time and shepherding it through to where it’s now live. They’re already talking about what the enhancements might be in the future.”
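
The approach Cotterill describes maps to a familiar pattern: fit a model inside the environment that holds the sensitive data, then hand over only the fitted artifact. The sketch below, in Python with scikit-learn, uses invented stand-in records; the library choice, features and labels are assumptions for illustration, not details of the DWD system:

```python
# Hypothetical sketch of "separate the model from the underlying data":
# train on sensitive records, then export only the serialized model.
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative stand-in records; the real workforce and education data
# would never leave the data hub.
histories = [
    "certified nursing assistant, home health aide",
    "forklift operator, warehouse associate",
    "help desk technician, associate degree in IT",
]
careers = ["registered nurse", "logistics coordinator", "network administrator"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(histories, careers)

# Hand off the fitted model only: learned parameters, not claimant records.
# (Note: the fitted vectorizer retains a vocabulary of training tokens, so a
# real deployment would still review the artifact for leakage.)
with open("career_model.pkl", "wb") as f:
    pickle.dump(model, f)

# The agency's app loads the artifact and serves recommendations.
with open("career_model.pkl", "rb") as f:
    served_model = pickle.load(f)
print(served_model.predict(["warehouse associate, forklift certification"]))
```

The design point is that the pickled file carries weights rather than the training records, so the agency’s app can serve recommendations without ever holding the underlying data.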

Cotterill made it clear that the new statewide policy is a work in progress and allows exceptions. He foresees updates and changes as the landscape of AI in government continues to evolve. In the meantime, a complementary AI policy will soon be released by the Indiana Office of Technology.

“Our policy is really intended to complement theirs, which is going to be broader to capture the entire universe of AI and state government,” Cotterill said.

Nikki Davidson is a data reporter for Government Technology. She’s covered government and technology news as a video, newspaper, magazine and digital journalist for media outlets across the country. She’s based in Monterey, Calif.