
Georgia Joins List of States Looking to Limit AI in Health Decisions

As artificial intelligence integrates across almost all sectors, lawmakers are working to safeguard their constituents against potential biases and to set ethical standards around the technology.

The interior of the Georgia state Capitol building in downtown Atlanta. (Shutterstock)
Georgia lawmakers are working to establish clear guidelines for AI use in the medical arena, specifically where the technology intersects with health insurance coverage decisions.

Last week, Rep. Mandisha Thomas, D-District 65, introduced legislation that would ban the sole use of AI in making insurance and health coverage decisions.

“No decision shall be made concerning any coverage determination based solely on results derived from the use or application of artificial intelligence or by utilizing automated decision tools,” HB 887 reads.

The bill goes on to specify that any coverage decision resulting from the use of artificial intelligence or automated decision tools must be reviewed by an individual with the authority to override those tools; in practice, that would be an individual associated with the Georgia Composite Medical Board.

This isn't Georgia's first rodeo regulating AI in health-care decisions. Back in 2023, legislators passed HB 203, which limited the use of AI in optometric care and coverage decisions. That legislation mandates practitioners refrain from relying exclusively on AI assessments when addressing eye conditions and conducting evaluations.

Other state lawmakers have moved to update older laws and, in some cases, create new ones that address the potential risks of AI use in the health and insurance space.

Colorado was the first state to adopt regulations focused on AI and insurance algorithms.

A Colorado law passed in 2021 prohibited insurers, and any large data systems they use, from unfairly discriminating based on “race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.” The state followed up in 2023 by adopting Regulation 10-1-1, more direct legislation that limits how algorithms and predictive models are used in the industry and acts as a governance and risk management framework for life insurers' use of external consumer data.

Last August, Pennsylvania introduced legislation aimed at greater transparency in how AI systems are used to review insurance claims, and in November, California's Privacy Protection Agency released draft rules governing how companies that use automated decision-making tools can use consumers' personal information.

The momentum continued into this year, when Adrienne A. Harris, superintendent of the New York State Department of Financial Services (DFS), issued a statement on AI use in insurance after sending a directive to all licensed insurers in New York.

“Technological advances that allow for greater efficiency in underwriting and pricing should never come at the expense of consumer protection. We have a responsibility to ensure that the use of AI in insurance is conducted in a way that does not replicate or expand existing systemic biases that have historically led to unlawful or unfair discrimination,” she said.

According to the press release, the letter outlined DFS’ expectations for how insurers develop and manage the integration of external consumer data and information sources, artificial intelligence systems, and other predictive models to mitigate potential harm to consumers. Connecticut and Washington, D.C., have also released similar directives.