Modern-day medical imaging exams have become a critical diagnostic tool for conditions of all kinds – from detecting the earliest breast cancers, long before a tumor could grow large enough for a woman to feel a lump in her own body, to finding malformations in the hearts of tiny babies months before they’re ready to be born. The instruments developed to look inside the body and capture these images become more powerful by the day. “A patient can walk in and in just a few minutes, generate a gigabyte worth of data,” says James Gee, PhD, an associate professor of Radiologic Science and Computer and Information Science, who directs the Penn Image Computing and Science Laboratory (PICSL).
But the increasingly detailed pictures typically still require a human being – a radiologist, often with specialized additional training in areas like neuroradiology, musculoskeletal or cardiothoracic imaging – to examine the images, tease out the answers they hold, and use the information to arrive at a diagnosis, which will ultimately be used to shape the treatment plan.
This process can be painstaking: Mapping out the locations of all the different parts of a patient’s brain captured via magnetic resonance imaging, for instance, might take even an experienced clinician nearly a whole day to complete. Using a new computer algorithm developed by medical imaging researchers at Penn Medicine, however, that labeling process happens automatically, taking “zero time.” And perhaps more importantly, the computer’s answers are extraordinarily accurate, Gee says. If he showed experts unmarked images of a brain that was labeled manually versus one that was segmented automatically by the new algorithm – as shown in the image above – they would be “hard-pressed to pick which was done by the human.”
The new algorithm, which automatically finds and labels anatomical structures in MRI scans, won first place in a Grand Challenge competition held during the recent International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). Authors of the submission include Paul Yushkevich, PhD, Hongzhi Wang, PhD, and Brian Avants, PhD. In addition to the team’s win, more than a third of the entries from teams across the world were built using open-source image registration software developed in the PICSL, which won first place in a previous MICCAI competition.
The technology the algorithm is based on – homegrown at Penn beginning in the late 1980s – is already widely in use at Penn Medicine in clinical research. Trials comparing changes in the brains of patients with Alzheimer’s disease to those of normal control subjects are one example. In the future, Gee says he hopes the time saved on manual labeling of radiological images of all kinds – across all types of imaging technologies, from those generated during obstetrical ultrasounds to cardiac CT scans – will lead to expedited diagnoses and quicker treatment for patients.
In time, he envisions that actionable diagnostic information will be obtained much more quickly, making some of the delays associated with making diagnoses a thing of the past. “Imagine taking someone’s blood pressure or temperature and it being some complicated process that took a long time, instead of getting just a number instantly,” Gee says. “Technology like ours is the wave of the future in medical imaging.”