Every year, predictive AI saves 50 lives in two ERs at UC San Diego Health

Editor’s Note: This is part two of our two-part interview with Dr. Karandeep Singh. To read part one, click here.

Yesterday in our new series of articles, Chief AI Officers in Healthcare, we spoke with Dr. Karandeep Singh, Chief Health AI Officer and associate CMIO for inpatient care at UC San Diego Health. 

He described how accountability for all AI in a health system must lie with the Chief AI Officer, and how, to hold this hot new position, executives must have skills that encompass both clinical care and artificial intelligence – though the two skill sets need not be in balance.

Today we talk more with the physician AI chief about where and how UC San Diego Health is finding success with artificial intelligence. We dissect one AI project that has shown clinical ROI – and get some tips for executives seeking to become Chief AI Officers at their own organizations. 

Q. Please talk at a high level about where and how UC San Diego Health is using artificial intelligence today.

A. We’re using it today in two broad classes of use. One of those is predictive AI, and one is generative AI.

Predictive AI is where we use AI to estimate the risk of a bad outcome and then design and implement interventions to try to prevent that outcome. That’s something we currently have widely in use for sepsis in all of our emergency rooms across UC San Diego Health. It’s something we’re in the process of deploying across our inpatient and ICU beds, as well.

This is something we implemented as early as 2018, and we have rolled it out in a really careful way. It was designed by colleagues of mine at UC San Diego Health. One of the key things that differentiates this from some other work in this space is that, as part of the rollout, they actually designed a study to see whether the use of this model, linked to an intervention that largely alerts our nursing staff, is actually helping patients.

What the team found is that this model is saving about 50 lives across two ERs in our health system every year. It’s beneficial to people, and we’re keeping a really close eye on it and looking for further opportunities to improve. So that’s one example of where we’re using predictive AI.
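
To make the shape of this kind of deployment concrete, here is a minimal sketch of a threshold-based alerting loop. Everything in it (the toy risk score, the feature set, the threshold) is an illustrative assumption, not the validated model UC San Diego Health actually runs.

```python
# Hypothetical sketch of a threshold-based sepsis alerting loop.
# The toy risk score, features and threshold are assumptions for
# illustration only, not UC San Diego Health's actual model.

from dataclasses import dataclass

@dataclass
class PatientVitals:
    patient_id: str
    heart_rate: float        # beats per minute
    temperature_c: float     # degrees Celsius
    respiratory_rate: float  # breaths per minute
    wbc_count: float         # white blood cells, 10^9/L

def sepsis_risk(v: PatientVitals) -> float:
    """Stand-in for a trained model: returns a risk score in [0, 1]."""
    score = 0.0
    if v.heart_rate > 90:
        score += 0.25
    if v.temperature_c > 38.0 or v.temperature_c < 36.0:
        score += 0.25
    if v.respiratory_rate > 20:
        score += 0.25
    if v.wbc_count > 12 or v.wbc_count < 4:
        score += 0.25
    return score

ALERT_THRESHOLD = 0.5  # assumed; tuning this trades sensitivity against alert fatigue

def maybe_alert_nursing(v: PatientVitals) -> None:
    """If risk crosses the threshold, notify nursing (here, just print)."""
    risk = sepsis_risk(v)
    if risk >= ALERT_THRESHOLD:
        print(f"SEPSIS ALERT: patient {v.patient_id}, risk={risk:.2f}")

maybe_alert_nursing(PatientVitals("demo-001", 112, 38.6, 24, 13.5))
```

The interesting part of the real deployment is exactly what this sketch omits: the study layered on top of the rollout to measure whether alerts like these actually help patients.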

Another one is predictive AI for forecasting purposes. I already highlighted in yesterday’s interview one of the use cases in our Mission Control, where we’re using a model to forecast our emergency department boarding. That helps us figure out what we need to do when we anticipate a busy day tomorrow, in two days or in three days. We’re still designing some of the workflows around it, and some are already implemented or in progress.
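
As a rough illustration of what day-ahead boarding forecasting involves, here is a seasonal-naive baseline in Python: predict a future day from recent same-weekday history. The interview doesn’t describe the Mission Control model’s internals, so this is only a stand-in for the general technique, with made-up data.

```python
# A seasonal-naive baseline for day-ahead ED boarding forecasts
# (hypothetical; a stand-in for the far richer Mission Control model).

from statistics import mean

def forecast_boarding(daily_counts: list[int], days_ahead: int = 1,
                      weeks: int = 4) -> float:
    """Forecast boarding `days_ahead` days out by averaging the same
    weekday over the last `weeks` weeks of history."""
    n = len(daily_counts)  # daily_counts is chronological and ends today
    idxs = [n + days_ahead - 1 - 7 * k for k in range(1, weeks + 1)]
    history = [daily_counts[i] for i in idxs if 0 <= i < n]
    return mean(history)

# Four weeks of made-up daily boarding counts, Monday through Sunday.
history = [30, 28, 35, 40, 38, 25, 22] * 4
print(f"Tomorrow: {forecast_boarding(history, days_ahead=1):.1f} boarders expected")
print(f"In 3 days: {forecast_boarding(history, days_ahead=3):.1f} boarders expected")
```

A baseline like this is mainly useful as the bar a production model has to beat; the workflow question in the interview (what to do once you trust tomorrow’s number) is the harder part.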

So, the other broad category of use cases is generative AI. We’re using some of the generative AI capabilities built into our electronic health record. One example is when a patient sends a message to their primary care doctor: the doctor has the option to reply in the usual way, typing out the entire response, or they can preview an AI draft response, decide whether to use it as a starting point, and then edit that response and send it along.

If the clinician opts to do that, we append a message at the bottom that lets patients know this message was partially automatically generated, so they know the drafting process involved more than just the clinician. That’s an example of one where we found that, surprisingly, it actually increases the amount of time it takes to reply to messages.

But the feedback we’ve gotten is that it is less of a burden to reply to a message when you have a little bit of boilerplate text to start with than to start with just a blank slate. That’s one that we’re still refining, and that’s an example of one that’s integrated into our EHR.
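
A minimal sketch of that draft-and-disclose flow might look like the following. The `generate_draft` placeholder and the disclosure wording are assumptions for illustration, not the EHR vendor’s actual API or UC San Diego Health’s exact notice text.

```python
# Sketch of the AI-drafted patient-message reply flow (hypothetical;
# the placeholder model call and disclosure text are illustrative only).

from typing import Callable, Optional

DISCLOSURE = ("Note: this reply was partially automatically generated "
              "and reviewed by your care team.")

def generate_draft(patient_message: str) -> str:
    """Placeholder for the generative model that proposes a reply."""
    return "Thank you for reaching out. Regarding your question..."

def compose_reply(patient_message: str,
                  clinician_edit: Optional[Callable[[str], str]] = None) -> str:
    """Build a reply starting from an AI draft the clinician can edit.

    This path is only taken when the clinician opts to use the draft;
    otherwise they type a reply from scratch and no disclosure is added.
    """
    draft = generate_draft(patient_message)
    body = clinician_edit(draft) if clinician_edit else draft
    return f"{body}\n\n{DISCLOSURE}"

print(compose_reply("Can I take ibuprofen with my new prescription?",
                    clinician_edit=lambda d: d + " Please call us to discuss."))
```

The design choice worth noting is that disclosure is tied to the drafting path itself, so patients are told whenever an AI draft was the starting point.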

There are others we have built in-house. In some cases, it’s work that was done in my academic lab, but in a lot of cases, it was work done by colleagues of mine that we’re now looking to implement as part of the Jacobs Center for Health Innovation. One example: we have a generative AI tool that can read patient notes and abstract quality measures.

Quality measure abstraction is usually very time-consuming. The main implication is that it takes a lot of people to do. But more importantly, we’re only able to review a really small subset of people’s charts because it’s so time-consuming. So, we never get to most charts in the electronic health record.

What we’ve found so far is we can get more than 90% accuracy using generative AI to do some of these chart reviews and abstractions of quality measures, where we ask: did they meet this quality measure or not? There’s still some room for improvement there. But the other critical thing is we can review a lot more cases.

So, we’re not limited to a small number per month because we can run this on hundreds of patients, thousands of patients. It really gives us a more holistic view into our quality of care beyond what we could even achieve currently, despite throwing a lot of resources and a lot of time at trying to do this well.
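
In outline, that kind of abstraction-plus-validation pipeline could look like the sketch below. The `ask_llm` placeholder and the gold-label comparison are assumptions standing in for whatever model and manual review process the team actually uses.

```python
# Hypothetical sketch of generative-AI chart abstraction validated
# against manual review. `ask_llm` is a placeholder, not the team's model.

def ask_llm(note: str, measure: str) -> bool:
    """Placeholder for a yes/no quality-measure question to a generative model.

    A real call would send a prompt along the lines of:
      "Based on this note, did the patient meet the measure '{measure}'?
       Answer strictly YES or NO."
    """
    return measure.lower() in note.lower()  # toy stand-in for the model call

def abstraction_accuracy(notes: list[str], measure: str,
                         gold: list[bool]) -> float:
    """Compare model abstractions against manually abstracted gold labels."""
    predictions = [ask_llm(note, measure) for note in notes]
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

notes = [
    "... smoking cessation counseling provided at discharge ...",
    "... patient declined influenza vaccine this visit ...",
]
gold = [True, False]  # from manual chart review of the same notes
print(f"Abstraction accuracy: "
      f"{abstraction_accuracy(notes, 'smoking cessation', gold):.0%}")
```

The scaling argument in the interview falls out directly: once `ask_llm` is a model call, the same loop runs over thousands of charts instead of the handful a human team can review each month.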

Those are the two broad categories: predictive AI and generative AI. We’ve got a lot of other work, a lot of other use cases in progress or already implemented.

Q. This story is about what it’s like to be a Chief AI Officer in healthcare, and you’ve discussed a number of projects you’ve got going. For this next question, could you pick one project and talk about how you, as the Chief Health AI Officer, oversaw the project, what your role was?

A. I can talk about our Mission Control Forecasting Model. This was something already implemented in an initial version when I got here to UC San Diego Health. I’ve been here for 10 months now. Some of the things I’m working on are on the runway, and some are just starting to be implemented.

My role in this model came about because, while it was working somewhat well, there were clear days when the model would predict a not-so-busy day tomorrow. Tomorrow would roll around, and it was much busier than what the model said it was supposed to be.

Anytime you have a model doing forecasting, where it is predicting tomorrow’s information using today’s, and it’s really far off, the people using that tool start to lose faith in it – as I would, too. When this happened, I think once or twice, I said, “We can’t just tweak things now. We have to go back and look at what the model is assuming and what information it’s using, to figure out why tomorrow’s prediction is not accurate.”

What did we do here? I sat down with our data scientist, and we went through that model line by line, looking at code. That helped us figure out key things we thought were in the model but actually weren’t, because they had been removed previously after being found not to be helpful.

So, we said, “Well, why was it not helpful?” We did a bunch of digging into some of those predictors and found that some were not helpful because they were capturing the wrong information: based on its description, a predictor was supposed to capture one thing, but the code was actually doing something different.

Doing that over the course of about three to five months, we went from version 2 of our model, which was implemented when I first got here, to version 5.1 of the model, which went live last month. What’s happened as a result of that? Our predictions today are substantially better than our predictions were in January and February. And what that does is help us start to rely on the model to do workflows.

When the model is not accurate, there’s not a lot of appetite for building any workflow around it. But when the model gets more accurate, people start to realize the model actually says tomorrow is going to be a busy day, and it turns out it is a busy day, or it says it’s not going to be busy and it turns out not to be. That now lets us think about all kinds of things we could do to make our healthcare and access to care a bit more efficient.

What are my activities there? Figuring out, with the co-directors of our Center for Health Innovation, our data scientists and some of our PhD students, what is happening on the data side, what’s happening in our AI modeling code, and what’s happening in our processes for going live with new versions of models and our version control – and then making sure that as we roll out those new models, it gets communicated to our Mission Control staff so they’re in the loop on when to expect the model to change and what is actually changing.

So, we develop model cards we distribute, then we make sure that information is communicated out to a broader set of health leaders at our Health AI Committee, which is our AI governing committee for the health system. So really, it’s soup to nuts being involved in everything from how we’re pulling data all the way to how it’s being used clinically by the health system.
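
As one hedged illustration of that handoff, a model card record distributed with each go-live might carry fields like these. The specific fields and example values are assumptions, since the interview doesn’t enumerate the cards’ contents.

```python
# Hypothetical model card record; the fields and example values are
# assumptions, not the actual contents of UC San Diego Health's cards.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    inputs: list[str]
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="Mission Control boarding forecast",
    version="5.1",
    intended_use="Day-ahead forecasting of ED boarding to support capacity planning",
    inputs=["historical daily boarding counts", "scheduled admissions"],  # assumed
    known_limitations=["forecasts may degrade around holidays"],          # assumed
)
print(f"{card.name} v{card.version}: {card.intended_use}")
```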

None of that is stuff I can do alone. As you notice, each of those steps requires me to have some level of partnership, some level of someone who has domain knowledge and expertise. But what I have to do is make sure when a clinician notices a problem, we can think about and brainstorm what in the upstream processes might be creating that problem so we can fix it.

Q. Please offer a couple of tips for executives looking to become a chief AI officer for a hospital or health system.

A. One tip is you really need to understand two different worlds and understand how they connect. If you look online, there is a lot of chatter and discussion about AI. There’s a lot of excitement about AI. There are a lot of people just sharing their experience of AI, and all of that is good information to capture.

It’s also important to read papers in the space of AI and understand some real limitations. When someone says, “We need to make sure we monitor this model because it might cause problems,” you should know roughly what kinds of problems it could cause and what the key historical examples of problems caused by health AI are, because you’re essentially going to be the AI domain expert for the organization.

One of the key things is, it’s a bit difficult to pivot from being a healthcare administrative leader into a Chief Health AI Officer unless you already have a substantial amount of health AI knowledge, or are willing to engage in that world, get that knowledge and build that community.

Similarly, there are challenges for people who know the health AI side really well but don’t speak the language of healthcare or medicine, and can’t translate what they know into something digestible by the rest of healthcare leadership.

Depending on which of those two worlds you’re coming from, how you’ll need to develop to serve in that role is going to be a little bit different. If you’re coming from healthcare, then you’ve really got to make sure you have domain expertise in AI, so that when you say you’re accountable, you actually are accountable.

And on the AI side, you need to understand how the healthcare system works, so that as you’re working with health leaders, you’re not just translating and sharing your excitement about a specific method, but saying, “With this new method, here’s something you need to do today that you can’t currently do, and that we could do. Here’s how much we would need to invest, and here’s what the return on investment would be if we were to invest in this capability.”

There are really a number of different skill sets you have to have, but thankfully, I think, there are a lot of different ways in which you can have a strength in one area and not necessarily across the entire spectrum.

That’s where different health systems will take slightly different approaches to how they look at this role. Other companies, like payers, are going to look at this role a little bit differently. That’s okay. You shouldn’t hire this role simply because you feel like you’re missing out. You should hire this role because you already are using AI or you want to use it, and you want to make sure someone at the end of the day is going to be accountable to how you use it and how you don’t use it.

Click here to watch the interview in a video that contains BONUS CONTENT not found in this story.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication


UnitedHealthcare CEO Shooting Exposes Dark Reality for Industry Leadership

The shooting of UnitedHealthcare CEO Brian Thompson exposed a dark reality that many leaders in the healthcare industry have grown accustomed to — one involving threats, mental health struggles, and even physical violence.

Chris Van Gorder, the president and CEO of Scripps Health in San Diego, told MedPage Today that it is a concern that affects everyone in his organization.

“Frankly, we receive threats frequently — most are veiled threats, former employees or others. We had a series of issues with sovereign citizen-type individuals shortly after COVID started easing up,” said Van Gorder. “We have also received ‘manifestos’ written by individuals, some who were patients, and of course unhappy family members from time to time.”

While the motives of Thompson’s killer remain unknown, his wife, Paulette, told NBC News that her husband had received “some threats,” and suggested that those threats were related to his position as the CEO of a major health insurance company. “Basically, I don’t know, a lack of coverage? I don’t know details,” she said. “I just know that he said there were some people that had been threatening him.”

Hours after his killing, Thompson’s homes also received bomb threats, according to CBS News.

Since Thompson’s death, UnitedHealthcare, Blue Cross Blue Shield, and CVS Health have removed some or all of the information about their executive leaders from their websites. MedPage Today reached out to confirm whether safety was the primary reason for those changes.

Van Gorder, a retired police officer and deputy sheriff, explained that he is probably more alert than most to the real danger such threats represent, and he has put measures in place to ensure the safety of the people in his organization.

“I’m probably a little better equipped with knowledge than others,” he said. “I also hired a retired FBI supervising agent as our head of security at Scripps, and he has responsibility for corporate security as well. But the shooting today did get my team’s attention.”

While Thompson’s death has drawn new attention to these kinds of threats to healthcare industry leaders, the issue has existed for years.

One high-profile case involved the 2014 deaths of Cooper Health CEO John Sheridan and his wife, Joyce, in Skillman, New Jersey. The couple were found dead in their home with multiple stab wounds after local firefighters responded to a report of a fire. Initially, the county medical examiner ruled the incident a murder-suicide, but that determination was reversed after the couple’s sons — along with other local officials — challenged it.

Now, experts believe the couple was murdered and the house was set on fire to cover up the crime scene. The New Jersey attorney general’s office opened up a new investigation into the deaths in 2022.

Mental health challenges in the industry have also made news. Earlier this year, two executives of Retreat Behavioral Health — founder and CEO Peter Schorr and Chief Administrative Officer Scott Korogodsky — died by suicide days before the organization closed several branches in Connecticut without warning.

Violence has long plagued workers at every level of healthcare. In fact, data from the Bureau of Labor Statistics shows that healthcare and social assistance professionals experience the highest rates of workplace violence in the U.S.

Parsing those statistics reveals even worse outcomes for frontline healthcare workers. For example, two-thirds of emergency department physicians reported being assaulted in 2022, and nearly half of nurses reported experiencing physical violence at work, according to the American Hospital Association (AHA).

This has prompted several professional organizations, including the AHA and the American Medical Association, to increase their advocacy for further state and federal protections for healthcare workers.

In fact, the AHA has championed legislation in Congress focused on increasing protections for healthcare workers in part by establishing federal penalties for any acts of violence or intimidation targeted at healthcare workers. The legislation has versions in the House and Senate, but multiple sponsors of the bills will be leaving Congress at the end of the current term.

Despite these efforts, far fewer resources or even datasets appear available to quantify and understand the true scope of violence targeted at individuals in leadership positions in healthcare.

Cheryl Clark and Rachael Robertson contributed reporting for this story.

Michael DePeau-Wilson is a reporter on MedPage Today’s enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.



