UC Davis Health uses AI models to leave no patient behind

New algorithm can help identify patients in need of care management and advance health equity

(SACRAMENTO)

Artificial intelligence is helping UC Davis Health predict which patients may need immediate care, with the goal of keeping them from being hospitalized.

The population health AI predictive model, created by a multidisciplinary team of experts, is called BE-FAIR (Bias-reduction and Equity Framework for Assessing, Implementing, and Redesigning). Its algorithm identifies patients who may benefit from care management services, so health problems can be addressed before they lead to emergency department visits or hospitalization.

The team outlined their approach to creating the BE-FAIR model in an article published in the Journal of General Internal Medicine. The paper describes how BE-FAIR can advance health equity and explains how other health systems can develop their own custom AI predictive models for more effective patient care.

“Population health programs rely on AI predictive models to determine which patients are most in need of scarce resources, yet many generic AI models can overlook groups within patient populations, exacerbating health disparities among those communities,” explained Reshma Gupta, chief of population health and accountable care for UC Davis Health. “We set out to create a custom AI predictive model that could be evaluated, tracked, improved and implemented to pave the way for more inclusive and effective population health strategies.”

Creating the BE-FAIR model

To create the system-wide BE-FAIR model, UC Davis Health brought together a team of experts from the health system’s population health, information technology and equity teams.

Over a two-year period, the team created a nine-step framework that provided care managers with predicted probabilities of potential future hospitalizations or emergency department visits for individual patients.

Patients above a threshold percentile of risk were identified, and, with guidance from primary care clinicians, staff determined whether they could benefit from program enrollment. If appropriate, staff proactively contacted patients, performed needs assessments and began pre-defined care management workflows.
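The threshold step described above can be sketched in code. This is a minimal illustration, not the BE-FAIR implementation: the patient IDs, risk scores and the 80th-percentile cutoff are all hypothetical assumptions for the example.

```python
# Illustrative sketch: flag patients whose predicted risk of hospitalization
# or an ED visit falls at or above a chosen percentile threshold, so staff
# can review them for care-management enrollment. All data here is made up.

def percentile_cutoff(scores, pct):
    """Return the score at the given percentile (nearest-rank method)."""
    ranked = sorted(scores)
    rank = max(0, int(round(pct / 100 * len(ranked))) - 1)
    return ranked[rank]

def flag_high_risk(patients, pct=80):
    """patients: list of (patient_id, predicted_probability) pairs.
    Returns the IDs at or above the percentile cutoff, for clinician review."""
    cutoff = percentile_cutoff([p for _, p in patients], pct)
    return [pid for pid, p in patients if p >= cutoff]

# Hypothetical predicted probabilities for five patients:
patients = [("A", 0.05), ("B", 0.40), ("C", 0.12), ("D", 0.75), ("E", 0.33)]
print(flag_high_risk(patients))  # → ['B', 'D']
```

In a real deployment the flagged list would go to primary care clinicians for judgment, not directly to enrollment, as the article describes.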

Responsible use of AI

After a 12-month period, the team evaluated the model’s performance. They found the predictive model underpredicted the probability of hospitalizations and emergency department visits for African American and Hispanic groups. By evaluating the model’s calibration, the team identified the threshold percentile that reduced this underprediction.

“As healthcare providers we are responsible for ensuring our practices are most effective and help as many patients as possible,” said Gupta. “By analyzing our model and making small adjustments to improve our data collection, we were able to implement more effective population health strategies.”

Studies have shown that health systems must systematically evaluate AI models to determine their value for the patient populations they serve.

“AI models should not only help us to use our resources efficiently — they can also help us to be more just,” added Hendry Ton, associate vice chancellor for health equity, diversity, and inclusion. “The BE-FAIR framework ensures that equity is embedded at every stage to prevent predictive models from reinforcing health disparities.”

Sharing the framework

The use of AI systems has been adopted by health care organizations across the United States to optimize patient care.
About 65% of hospitals use AI predictive models created by electronic health record software developers or third-party vendors, according to data from the 2023 American Hospital Association Annual Survey Information Technology Supplement.

“It is well known that AI models perform as well as the data you put in it — if you are taking a model that was not built for your specific patient population, some people are going to be missed,” explained Jason Adams, director of data and analytics strategy. “Unfortunately, not all health systems have the personnel to create their own custom population health AI predictive model, so we created a framework healthcare leaders can use to walk through and develop their own.”

The nine-step framework of the BE-FAIR model is outlined here.