The radiologist was dead.

Or at least that’s what artificial intelligence (AI) experts prophesied in 2016 when they said AI would outperform radiologists within the decade.

Today, AI isn’t replacing imaging specialists, but its use is leading health care providers to reimagine the field. That’s why UC San Francisco was among the first U.S. universities to combine AI and machine learning with medical imaging in research and education by opening its Center for Intelligent Imaging.

Take a look at how UCSF researchers are pioneering human-centered AI solutions to some of medicine’s biggest challenges.

Spot illnesses earlier

An AI-assisted chest X-ray showing a pneumothorax, with a red alert pointing to the potential collapsed lung.
AI developed at UCSF and licensed by GE Healthcare is now helping radiologists around the world spot cases of collapsed lung. Featured in GE products, the technology flags potential cases to physicians via alerts like the one pictured. Image courtesy of GE Healthcare

Each year, tens of thousands of Americans suffer pneumothoraces, a type of collapsed lung. The condition is caused by trauma or lung disease – and serious cases can be deadly if diagnosed late or left untreated.

The problem:

This type of collapsed lung is difficult to identify: The illness can mimic others both in symptoms and in X-rays, in which only subtle clues may indicate its presence. Meanwhile, radiologists must interpret hundreds of images daily, and some hospitals do not have radiologists available around the clock.

The solution:

UCSF researchers created the first AI bedside program to flag potential cases for radiologists. In 2019, the tool became the first AI innovation of its kind to be cleared by the U.S. Food and Drug Administration. Today, it’s used in thousands of GE Healthcare machines around the world.

How did they do it?

Researchers from the Department of Radiology and Biomedical Imaging created a database of thousands of anonymized chest X-rays. Some of these images showed collapsed lungs; others did not. Next, the researchers trained the AI tool on this database, then tested it on thousands of other images to ensure it could flag potential cases accurately.

The AI screener works with portable X-ray machines, so doctors can use it right at a patient’s bedside without making major infrastructure investments.
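For readers curious about the mechanics, the recipe resembles standard supervised image classification. The sketch below is purely illustrative and is not the UCSF/GE algorithm: it assumes a labeled set of chest X-rays, fine-tunes an off-the-shelf convolutional network (a ResNet from the PyTorch ecosystem), and flags any image whose predicted probability crosses an alert threshold. The data, model choice and threshold here are all stand-ins.

import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stand-in for the de-identified chest X-ray database: random tensors play
# the role of 224x224 films repeated to three channels. Label 1 = pneumothorax.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,)).float()

# Off-the-shelf image classifier with a single "pneumothorax" output logit
# (a real effort would start from pretrained weights and train far longer).
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):  # illustrative loop; real training uses many epochs
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# At the bedside: flag any image whose predicted probability exceeds an alert
# threshold tuned on held-out test images, so a radiologist reviews it sooner.
with torch.no_grad():
    probs = torch.sigmoid(model(images).squeeze(1))
    flagged = probs > 0.5  # threshold is illustrative, not the licensed product's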

“I think of this as an additional safety check that can deliver diagnoses and patient care sooner,” explained Associate Chair for Translational Informatics John Mongan, MD, PhD, who codeveloped the AI algorithm with Radiology Professor Andrew Taylor, MD, PhD. Mongan is also a director of the Center for Intelligent Imaging.

Boost image quality to diagnose traumatic brain injuries better

A comparison of two MRI scans of the same brain. The image on the left is an AI-enhanced scan, showing higher resolution.
The standard-quality MRI on the right has been boosted with AI to produce the image on the left, putting it on par with scans taken by some of the most expensive and rarest MRI machines.

Magnetic Resonance Imaging (MRI) is particularly useful for studying the soft tissues that make up our livers, hearts and brains. Unlike X-rays, MRIs can produce finely detailed images of these organs and, in the case of the brain, help physicians detect tumors, subtle signs of strokes and changes over time.

The problem:

Most MRIs in the U.S. are performed with lower-resolution 1.5-tesla (1.5T) or 3T MRI systems that may miss subtle signs of conditions like multiple sclerosis and traumatic brain injury. Stronger 7T machines produce higher-resolution images and could help, but they are so costly that only about 110 were in use globally as of 2022.

The solution:

UCSF Assistant Professor of Neurology Reza Abbasi-Asl, PhD, led a team that used a form of AI to enhance the resolution of standard MRIs featuring traumatic brain injuries. The technique dramatically improved 3T MRI images, putting them roughly on par with 7T images, while outperforming other types of AI-enhanced MRIs.

These results may, one day, help improve care for those with traumatic brain injuries and other neurological conditions.

How did they do it?

Abbasi-Asl and team constructed small, anonymized databases of paired traumatic brain injury MRIs. Each pair contained scans of the same injury: one low-resolution, 3T version and one high-resolution, 7T version. The team then built machine learning models that learn patterns in the data and use them to boost the low-resolution images, comparing each result to its high-resolution partner.

These models identified patterns and features in 3T MRIs that are difficult for the human eye to detect and used them to improve image quality – sharpening specific details while minimizing “noise” like grainy specks.
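In machine learning terms, this is paired-image super-resolution. The sketch below illustrates the general idea under stated assumptions and is not the team’s published model: a small convolutional network is trained to map each 3T-like slice to its co-registered 7T-like partner by minimizing the pixel-wise difference between its output and the 7T target. Random tensors stand in for real scan pairs.

import torch
import torch.nn as nn

# Stand-ins for co-registered slice pairs: (low-res "3T" input, high-res "7T" target)
low_res = torch.randn(16, 1, 128, 128)
high_res = torch.randn(16, 1, 128, 128)

# SRCNN-style network: extract features, transform them, reconstruct the slice
model = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # penalize pixel-wise differences from the 7T target

for step in range(100):  # illustrative; real training uses far more data and steps
    optimizer.zero_grad()
    enhanced = model(low_res)
    loss = loss_fn(enhanced, high_res)  # compare boosted image to its 7T partner
    loss.backward()
    optimizer.step()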

“Our findings highlight the promise of AI and machine learning to improve the quality of medical images captured by less advanced imaging systems,” Abbasi-Asl said.

Detect heart problems without invasive testing

Angiograms like this one could one day be used to diagnose serious cardiac issues without further risky testing.

Coronary artery disease is among the leading causes of adult deaths worldwide. Caused by a buildup of fatty deposits in the arteries, the disease is a common cause of heart attacks.

Physicians commonly use a test called a coronary angiogram to diagnose the condition. During an angiogram, physicians inject a special dye into the main vessels that feed the heart, then use X-rays to watch how the blood flows.

The problem:

The left ventricle is the heart’s main pumping chamber, and coronary artery disease can damage it. Patients with suspected severe coronary artery disease undergo angiograms but may also need additional testing that uses even more dye, which can harm the kidneys.

The solution:

New research by UCSF cardiologist Geoff Tison, MD, MPH, and team is among the first to successfully use machine learning to estimate how well the left ventricle is pumping by analyzing the standard videos already recorded during the coronary angiogram procedure. This provides information about the heart’s function without additional procedures or risk. The research could eventually give physicians and patients a quicker and less dangerous way to diagnose damage to the left ventricle.

How did they do it?

Tison and team trained a type of AI model called a deep neural network on anonymized angiogram videos recorded at UCSF. Deep neural networks can learn complex patterns in data such as images and videos, including patterns that are not readily apparent to humans.

The team’s model, dubbed CathEF, accurately predicted how well the left ventricle pumped when researchers compared its results to measurements of pump function taken with ultrasound. CathEF performed just as well when the team later tested it outside the lab, in a Canadian hospital.
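Framed as code, this is video regression: a network watches a clip and outputs a single number. The sketch below illustrates that setup under loose assumptions and does not reproduce the actual CathEF architecture: a generic 3D convolutional network (torchvision’s r3d_18) is trained so that its output matches the ejection fraction measured by ultrasound for each training clip. Random tensors stand in for real angiogram videos, and the ejection-fraction labels are made up.

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Stand-ins for angiogram clips: (batch, channels, frames, height, width)
clips = torch.randn(4, 3, 16, 112, 112)
ef_labels = torch.tensor([55.0, 35.0, 60.0, 45.0])  # hypothetical echo-measured EF (%)

model = r3d_18(weights=None)                   # pretrained weights in practice
model.fc = nn.Linear(model.fc.in_features, 1)  # one output: predicted ejection fraction

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(3):  # illustrative loop; real training runs much longer
    optimizer.zero_grad()
    predicted_ef = model(clips).squeeze(1)
    loss = loss_fn(predicted_ef, ef_labels)  # match the ultrasound measurements
    loss.backward()
    optimizer.step()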

“CathEF offers a novel approach that leverages data routinely collected during every angiogram to provide information that is not currently available to clinicians,” said Tison. “Our model effectively expands the utility of medical data with AI with real-time information to inform clinical decision-making.”

Monitor Parkinson’s disease progression – with your phone?

The videos below show digitized data of a patient’s hand movements (left) and walking (right) that could one day offer physicians a better way to track the progression of Parkinson’s disease.

As many as one million Americans live with Parkinson’s disease, a degenerative nervous system disorder that affects movement, causing symptoms like tremors, stiffness and poor balance.

The problem:

To make the best treatment decisions, physicians need to understand how a patient’s symptoms are progressing. Currently, doctors face a gap in this data, relying on patients’ accounts and on changes observed between spread-out appointments to detect subtle shifts in walking or in the ability to tap a finger.

The solution:

Associate Neurology Professor Simon Little, MBBS, PhD, and Assistant Neurology Professor Reza Abbasi-Asl, PhD, used machine learning to build a system that can capture changes in patients’ gait and hand movements from smartphone and digital camera recordings.

Although still in early development, the research could, in the future, allow physicians to monitor patients with a range of neurodegenerative diseases at home, providing more precise data for tailored treatments. It may also reveal new insights into how movement changes may predict the course of a disease.

How did they do it?

As part of their trial, the team recruited volunteers with Parkinson’s disease from the UCSF Movement Disorder and Neuromodulation Center. Using digital cameras, researchers filmed the participants walking and tapping their index fingers, common clinical exam techniques. Machine learning programs then processed the videos, identifying the most clinically relevant features, such as the speed of a finger tap, that might indicate a more severe disease stage.
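To make the idea concrete, the sketch below shows how one such feature could be computed once pose-estimation software (not shown) has turned a video into per-frame landmark coordinates. Everything here is a stand-in, not the team’s pipeline: a synthetic fingertip trajectory plays the role of the real tracking output, and simple peak detection extracts tap rate and amplitude, two features of the kind the text describes.

import numpy as np
from scipy.signal import find_peaks

fps = 30.0                    # camera frame rate (frames per second)
t = np.arange(0, 10, 1 / fps)  # 10 seconds of video

# Synthetic fingertip height: taps that slow down and shrink over time,
# loosely mimicking the decrementing finger tapping seen in Parkinson's exams.
amplitude = np.linspace(1.0, 0.5, t.size)  # taps get smaller
frequency = np.linspace(2.0, 1.2, t.size)  # taps get slower (taps/second)
y = amplitude * np.sin(2 * np.pi * np.cumsum(frequency) / fps)

peaks, _ = find_peaks(y, height=0.1)  # each peak = one completed tap
intervals = np.diff(peaks) / fps      # seconds between consecutive taps

print(f"taps detected:       {peaks.size}")
print(f"mean tap rate:       {1 / intervals.mean():.2f} taps/sec")
print(f"tap-rate slowing:    {intervals[-1] - intervals[0]:+.3f} sec/tap")
print(f"amplitude decrement: {y[peaks[-1]] / y[peaks[0]]:.2f}x")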

“We’ve kind of been conducting some areas of medicine, broadly, in the same way for the last 100 years: We see patients and talk to them. We do an examination in clinic and then we try and make an adjustment of some of their treatments,” Little explained. “We are at this transformational point, moving from old-fashioned, subjective views of patients to a digital transformation. I’m hopeful that, within five years, this type of approach will be more common in clinical practice.”