AI bias evaluation efforts are uneven across U.S. hospitals

While around two-thirds of U.S. hospitals are using AI-assisted predictive models, just 44% of hospitals evaluate these models for bias, raising concerns about equity in patient care.

These were the results of a recent study conducted by the University of Minnesota School of Public Health and published in Health Affairs, which analyzed data from 2,425 hospitals across the country.

The study highlighted disparities in AI adoption, noting hospitals with greater financial resources and technical expertise are better equipped to develop and evaluate their AI tools compared to under-resourced facilities.

The report also found hospitals are primarily using AI tools to predict inpatient health trajectories, identify high-risk outpatients, and streamline scheduling.

UMN School of Public Health Assistant Professor Paige Nong explained one of the key questions currently driving her research is how hospitals without extensive financial resources or technical expertise can ensure the AI tools they adopt are tailored to the specific needs of their patient populations.

“We don’t want these hospitals stuck with two bad options, which are using AI without the necessary evaluation and oversight, or not using it even though it could help with some major organizational challenges,” she said.

She said one step organizations can take is using the information provided in the predictive model labels described by the Assistant Secretary for Technology Policy in the HTI-1 rule.

These labels provide some crucial information to hospitals so that, even if they can’t build bespoke models for their patient populations, they can be critical consumers of the tools that are available to them.

“Even if that information isn’t easily accessible, they can and should ask their vendors for that information,” Nong said.

She admitted there is a lot of room to improve when it comes to bias evaluations.

“First, conducting the local evaluation we discuss in the paper is a valuable step to ensure that AI tools are functioning well for all patients,” she said. “Second, looking at the predictors that are driving the output is helpful.”

If organizations can see that the predictors could introduce bias – factors like income or religious identity – they can avoid those tools.
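The "local evaluation" Nong describes amounts to checking whether a model performs comparably across patient subgroups. A minimal sketch of that idea is below; the subgroup labels, the audit records, and the 20-point disparity threshold are all hypothetical illustrations, not figures from the study.

```python
def sensitivity(preds, labels):
    """Fraction of true-positive cases the model correctly flags."""
    flagged = [p for p, y in zip(preds, labels) if y == 1]
    return sum(flagged) / len(flagged) if flagged else 0.0

def subgroup_sensitivity(records):
    """records: list of (subgroup, prediction, true_outcome) tuples.
    Returns sensitivity computed separately for each subgroup."""
    groups = {}
    for g, p, y in records:
        groups.setdefault(g, ([], []))
        groups[g][0].append(p)
        groups[g][1].append(y)
    return {g: sensitivity(p, y) for g, (p, y) in groups.items()}

# Hypothetical audit data: (subgroup, model prediction, true outcome)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = subgroup_sensitivity(records)
gap = max(rates.values()) - min(rates.values())
# Flag the tool for review if sensitivity differs by more than 20 points.
print(rates, "flag for review" if gap > 0.2 else "ok")
```

Even a hospital that cannot build its own models could run this kind of check on a vendor tool's predictions against its own patient records, which is the "critical consumer" posture the article describes.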

She added that it is important to think carefully about what a tool's output means for patients.

“If a model predicts missed appointments, for example, how can the human decision making around that tool be fair and ethical, rather than perpetuating bias?” she said.

Nong said she is excited to see how healthcare professionals can bridge the digital divide between well-funded hospitals and under-resourced ones when it comes to AI adoption and evaluation capacity.

“On the policy side, we describe various examples of valuable collaborations and partnerships in the paper like Regional Extension Centers, AHRQ’s Patient Safety Organizations, and others,” she said.

She noted the Health AI Partnership is one group trying to do this type of networked technical assistance.

“On the practice side, IT professionals can engage with their communities and professional associations or networks to identify the needs of under-resourced care delivery organizations and provide important insights and support,” said Nong.

Nathan Eddy is a healthcare and technology freelancer based in Berlin.
Email the writer: [email protected]
Twitter: @dropdeaded209


