In December, UC Davis Health’s CEO David Lubarsky and Chief AI Advisor Dennis Chornenky met with Director of News and Media Relations Pamela Wu to discuss trends, possibilities and challenges with AI in health care, including the relationship between human intelligence and AI. Here are excerpts of the discussion, edited for length. The full 39-minute video is available at ucdavis.health/48v54mU.

Q. How is UC Davis Health approaching AI’s role in patient care?

Lubarsky: The first and most important thing is that doctors and nurses are in charge. Doctors and nurses will always be in charge not only of the decision-making, but of being the partner to the patient in that decision-making. And, you know, we call it artificial intelligence, but in health care it’s really augmented intelligence. It’s about giving your doctors and your nurses more tools to make better decisions for the patient.

Q. Patients want their care personalized to them. We hear this over and over, and we aim to deliver that. How big of a role can AI have in personalizing care?

Lubarsky: I think AI is actually the route to getting truly personalized recommendations... When Amazon sells you the reading lamp that goes with your book purchase, it knows what you want... It’s running algorithms in the background all the time. Those personalized recommendations are, foundationally, from AI. There’s no reason we can’t apply… that same thinking, if you will, so that all the past decisions and past diseases and past labs in a patient’s chart can help inform what the next step should be for that patient in their journey towards wellness…

And we’re used to “self-service” (through digital technology) now, especially the younger age group. A study said 44% of young adults 18 to 34 believe that by using the internet and ChatGPT, they can know as much as their doctor about a disease process… We know that’s not really true, but the point is, we are evolving to where people expect to quickly master a topic and become a true partner in their care. And I think that’s where this is going. Self-identification of a problem, self-diagnosis, self-triage and self-treatment — if guided correctly by health professionals — could truly extend our ability to serve what is an ever-burgeoning need, and (provide) personal health care.

Chornenky: There’s certainly a reason we have medical schools and licensing and residencies, and I think it’s very important that we build off the value infrastructure and responsibility and guardrails we do have in place. At the same time, at least personally, I feel like we haven’t always done a great job as a society of educating consumers and patients about how to really achieve well-being and wellness. There is a little bit of a mentality that if the tiniest thing is wrong with you, you go to your doctor and your doctor’s going to fix it. That your wellness is your doctor’s responsibility in some ways. And of course, it’s primarily our own responsibility as patients, as consumers. And so to the extent that AI, especially generative AI, can help direct consumers to live healthier lives, they’re going to need less care. And when they do need care, they will have better guidance about the kind of care that they might need, how to connect with the right professionals, how to stay on course with the right recommendations, and why it’s important to listen to medical professionals.

Q. When it comes to AI and health care, what are regulators keeping a close eye on?

Chornenky: That conversation has rapidly accelerated, especially (recently)… We had the AI executive order coming out of the White House… that builds on some previous executive actions, but really takes it further now, looking at more specific requirements for the private sector… to help ensure consumer safety and patient safety with the use of AI technology. So things like watermarking AI-generated content, for example, or other forms of disclosure so that folks know they’re speaking to an AI chatbot — rather than a chatbot that’s pretending to be a human in order to try to create a more human experience. I think it’s very important that we always help make people aware of what exactly they’re interacting with and in what ways.

Q. What is the relationship between artificial intelligence and human intelligence in terms of how they reinforce one another?

Lubarsky: So we’re working with a company that does remote patient monitoring, and it collects eight different vital signs every minute of the day. That’s 1,440 minutes, eight vital signs each minute — so 11,500 or so data points per patient per day. Applying AI that looks at patterns in these vital signs can detect very early on who might be deteriorating, allowing the doctor and the nurse to keep a closer eye on that patient, to intervene earlier, to be prepared for a deterioration. It’s not telling the doctor what to do. They’re going to eventually expand it to 16 variables — then there will be about 23,000 data points per day per patient. A human being can’t process that. And they can’t say, “oh, you know, this variable moved here, and then in relation to this one, it moved here.” It’s just too complicated for the human brain. But AI is built to analyze those patterns. So number one is pattern identification.

…Two-thirds of patients would like the doctor and their medical record to know all the information collected on their (smart watch/device). There are too many data points for a provider to review. But it could be incredibly valuable if an AI engine were running behind it and said, “I’ve looked at your sleep pattern and you’re not sleeping through the night anymore. What are the causes of that? Are you drinking alcohol? Are you anxious? Have you changed pillows? Are you having allergy attacks in the night?” It prompts your (provider) to ask the right question. They can’t possibly have time to parse through all that data. AI can make your care more personalized, and it doesn’t mean it’s making the decisions either for you or your doctor. It’s just packaging ideas and information in a way that prompts that personalized attention.
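To make the arithmetic above concrete, and to sketch the kind of minute-by-minute pattern watching Lubarsky describes, here is a brief, purely illustrative Python example. The rolling window, the z-score threshold and the sample heart-rate series are all invented for illustration; they are not the monitoring vendor’s actual method.

```python
import statistics

MINUTES_PER_DAY = 24 * 60  # 1,440 readings per vital sign per day

# Data volume: 8 vital signs today, 16 planned
print(MINUTES_PER_DAY * 8)   # 11,520 data points per patient per day
print(MINUTES_PER_DAY * 16)  # 23,040 with the planned 16 variables

def flag_deterioration(readings, window=60, threshold=3.0):
    """Flag minutes where a vital sign drifts far from its recent baseline.

    readings: per-minute values for one vital sign (e.g., heart rate).
    Returns the indices where a value sits more than `threshold`
    standard deviations from the trailing `window`-minute mean --
    a crude stand-in for the pattern detection described above.
    """
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        sd = statistics.pstdev(recent)
        if sd > 0 and abs(readings[i] - mean) > threshold * sd:
            flags.append(i)
    return flags

# Example: a stable heart rate that suddenly starts climbing
hr = [72.0] * 120 + [72.0 + 2 * t for t in range(30)]
print(flag_deterioration(hr))  # flags minutes once the climb emerges
```

The point mirrors Lubarsky’s: at roughly 11,500 readings per patient per day, no human can scan every value, but even a simple statistical rule can watch all of them continuously and surface only the minutes that need a clinician’s eye.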

Q. What is the role of generative AI — which generates new data, text or other media — in health care, and where do you see that headed?

Lubarsky: …More than 40%, and often more than 50%, of nurses’ time is spent writing notes and documenting what they’ve done. None of that is necessary. For physicians, the biggest complaint is entering information about patient visits into the electronic medical record. We have added very low-value interactive time with keyboards to the most expensive labor in the United States. We’ve turned our brightest and best and most compassionate health care providers into typists. And so what generative AI will do is free them. It doesn’t mean we will let AI write the notes. But that (AI) tool can erase the burden. It can eliminate the contribution of overzealous documentation to burnout. That’s the number-one initiative we’re pursuing at UC Davis Health, because we care about our providers. Because when we care about them, they’re able to care for their patients. At the (medical) office, there’s always a keyboard and a screen either between you and the doctor or the nurse, or off to the side. So they’re constantly talking to you and then turning around and typing. We’re going to eliminate that. We’re going to eliminate the electronic barrier that we’ve placed between patients and providers. And generative AI is going to do it.

Chornenky: … I think generative AI will have a more transformative impact on health care in, let’s say, the short-to-medium term than any other AI/machine learning methodology. Others will probably have their day in the next 10, 20, 30 years… but right now is really the time of generative AI. And to that point, thanks to Dr. Lubarsky’s vision and our CIO and Chief Digital Officer, Dr. Ashish Atreja, we just had a very successful launch of a new collaborative bringing together leading health systems and payers and academic medical centers, covering the entire country, to help advance responsible adoption of generative AI technologies. It’s focused on execution... discovery… validation of use cases… across member organizations to help build capacity together. Because these technologies are moving too quickly for any one organization to figure out in isolation. There are so many research papers coming out on generative AI right now. It was near zero per month in certain publication databases even a year and a half ago. But now it’s getting to hundreds per month and very quickly climbing.

Lubarsky: …If you go to Amazon and want to parse through 14,000 reviews… now there’s an AI-generated blurb. That doesn’t always mean it’s all the information you’re seeking, but it’s a pretty good summary and it’s very pertinent. And it’s the same thing we’ve done. If I’m a little worried about a patient’s hemoglobin, I can ask the record to provide all the hemoglobins for this patient for the last 10 years and have a table generated, where it would previously take a doctor a long time to parse through individual labs. That’s the capability of, again, personalizing care: extracting all the pertinent information with a simple query.

And then you could ask ChatGPT: What are all the causes of low hemoglobin? Maybe you’ve thought about 39 of the 40, but hadn’t thought about that 40th. It’s not saying what you should do. It’s doing a complete information search for you so that you don’t forget anything… ChatGPT can (currently) give some false information, but the next generation will provide references, if you want them, for each of its recommendations or statements. Once that happens, we can get the validation and verification that it was a correct interpretation…
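As a sketch of the kind of record query Lubarsky describes, here is a minimal, hypothetical example using pandas. The table layout, column names and values are invented for illustration; real electronic medical records expose lab data through their own interfaces, not a hand-built DataFrame.

```python
import pandas as pd

# Hypothetical extract of one patient's lab history; a real EHR
# exposes this through its own query tools, not a hard-coded table.
labs = pd.DataFrame({
    "date": pd.to_datetime(
        ["2015-03-01", "2019-07-15", "2021-02-09", "2023-11-20"]),
    "test": ["hemoglobin", "sodium", "hemoglobin", "hemoglobin"],
    "value": [14.1, 139.0, 12.8, 10.9],
    "units": ["g/dL", "mmol/L", "g/dL", "g/dL"],
})

# "All the hemoglobins for this patient for the last 10 years,"
# returned as a ready-made table instead of chart-by-chart digging.
cutoff = pd.Timestamp.today() - pd.DateOffset(years=10)
hgb = (labs[(labs["test"] == "hemoglobin") & (labs["date"] >= cutoff)]
       .sort_values("date"))
print(hgb.to_string(index=False))
```

The value is in the packaging: one line of filtering stands in for a doctor clicking through a decade of individual lab reports.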

Q. What do you think warrants skepticism as we see more AI in health care? What issues and challenges are you keeping an eye on?

Lubarsky: We’ve made it really clear that our health care providers cannot, should not, and will not ever seek judgment or courses of treatment through what’s suggested on the internet, and specifically with ChatGPT. We added an AI paragraph to our medical staff bylaws about what constitutes the responsibility of the physician to the patient. And we made it really clear that they are never to rely on that to drive their decision-making.

Chornenky: …There is this potential for hallucinations, these kinds of fabricated responses, and so this is one of the reasons it’s so important for human beings to double-check everything. We’re just not at the point where the large language models’ failure rate is one in a million or one in a billion. Failures can be a lot more frequent.

And it’s also a bit of a social choice, a choice for us about the technology and how we want to use it. Because in some ways, hallucinations can actually be a measure of creativity in a model. So if you want to completely eliminate the potential for hallucinations — and maybe we (do) want that in certain environments — you’re really restricting that model to very precisely, almost verbatim, spitting back things it’s gotten from its training data. But if we want to give it a little bit more flexibility for interpretation or suggestions or creative solutions to certain problems, we sort of have to set the parameters a little bit differently…

That’s a social conversation about how our interaction with this technology will evolve over time. But I think for environments like ours in health care, especially now in the earlier stages of these technologies, we really need to err on the side of caution.
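One concrete example of “setting the parameters differently,” as Chornenky puts it, is sampling temperature. The sketch below is generic, not any vendor’s actual API: low temperature pushes a language model toward its single most likely, near-verbatim answer, while higher temperature spreads probability to less likely tokens, which reads as creativity but also raises the odds of a confidently wrong output.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into token probabilities.

    Low temperature sharpens the distribution toward the single
    most likely token; high temperature flattens it, letting
    less likely (more 'creative') tokens through.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for four candidate words
logits = [4.0, 2.0, 1.0, 0.5]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
# At t=0.2 nearly all probability sits on the top token (near-verbatim);
# at t=2.0 the alternatives get real probability (more creative, riskier).
```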

Lubarsky: The part that worries us is down the road. It’s five years, 10 years before we’ll have the right level of insight into data to really let AI suggest treatments. But all the rest of it is already worked out, and we’re just not employing it… (Summarizing notes)… pattern recognition… facial recognition. We can do all that and not cede one ounce of responsibility or decision-making to computers. We can make doctors more efficient... When they added AI into the mix with breast-trained radiologists, they were able to cut in half the number of people required to do a day’s worth of readings. You may say, “someone’s going to lose their job.” No, no. Only half the women in America who should have mammograms get them read. Imagine if we, without adding one penny to the labor workforce, can now get to 100% of women and have their mammograms read by professionals.

We will never, ever be able to catch up with the demand right now because of the aging of the population, the expansion of possibilities and, hopefully, a continuing journey towards wellness over a much longer period of life. We need to change how we work. We will never be able to fill the gap just by training more people. AI allows us to change the work so we’re all working at the very top of our capabilities... It is going to make us better at treating people who need to be treated.

Q. How do we make sure we’re not perpetuating inequities by looking at old patterns to inform new ones?

Chornenky: What we really need to do is provide… better access to more diverse, more equitable data sets… Historically, health care data has been so siloed and difficult to access… One very interesting thing that I think is going to help with this: the federal government is really trying to promote the use of privacy-preserving technologies… (for) machine learning modeling on data that stays encrypted. The data never has to actually get exposed or unencrypted… We can kind of skip the risks (of reidentification) and still provide better access for folks that want to advance medical science using these more diverse data sets.
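As a toy illustration of “modeling on data that stays encrypted,” here is a deliberately tiny Paillier-style additively homomorphic scheme in Python: a server adds two encrypted lab values without ever decrypting them. The 17-bit primes and the scenario are purely illustrative and wildly insecure; the privacy-preserving technologies Chornenky refers to span hardened homomorphic-encryption libraries, secure multiparty computation and federated learning.

```python
import math
import random

# Toy Paillier keypair -- real deployments use ~2048-bit primes
p, q = 65537, 65539          # tiny primes, insecure, illustration only
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m):
    # Fresh randomness per ciphertext (toy code skips the gcd
    # check on r that a real library performs).
    r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two hypothetical patient values, encrypted at the source
c1, c2 = encrypt(120), encrypt(135)

# The server multiplies ciphertexts -- it never sees 120 or 135 --
# yet the product decrypts to their sum: the homomorphic property.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 255
```

The design point matches the quote: the party doing the computation never holds unencrypted data, so reidentification risk is sidestepped while the analysis still proceeds.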

Q. If there’s one takeaway you want our patients and employees to know, what would that be?

Lubarsky: AI is augmented intelligence. It’s for every employee, every nurse, every doctor to use on behalf of their patients — for whom they are solely responsible. And we will never cede control of our care for human beings to computers.