Research

Deep learning (DL) models are rapidly achieving state-of-the-art performance across a variety of medical imaging applications. A major weakness of these models, however, is that they struggle to generalize to data that differ from their training distribution. This problem is exacerbated in medicine by small datasets and highly heterogeneous data. It became apparent to our lab when a DL-based liver segmentation model achieved excellent testing performance but failed completely on patients with unusual attributes, such as metal artifacts and rare diseases. The main goal of my research is to build safeguards that detect when scans with such “out-of-distribution” attributes are passed to deployed DL models. To build these safeguards, I am investigating the use of a generative adversarial network (GAN) to model the distribution of normal, in-distribution scans, so that scans the network cannot represent well can be flagged as out-of-distribution.
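To give a flavor of the idea, here is a minimal, illustrative sketch of a reconstruction-based safeguard in the style of AnoGAN. A toy linear "generator" stands in for a trained GAN generator, and all names and numbers are hypothetical, not taken from our lab's actual models: we search the latent space for the code whose generation best reconstructs an input, and use the residual reconstruction error as an out-of-distribution score.

```python
import numpy as np

# Toy stand-in for a trained GAN generator: it maps a 1-D latent code z
# onto a line in 2-D "image feature" space. In-distribution inputs lie
# near this learned manifold; out-of-distribution inputs do not.
W = np.array([1.0, 0.5])   # direction of the in-distribution manifold
b = np.array([0.0, 1.0])   # offset of the manifold

def generate(z):
    """Generator G(z): latent scalar -> 2-D feature vector."""
    return z * W + b

def ood_score(x, steps=200, lr=0.1):
    """AnoGAN-style score: gradient-search the latent space for the z
    whose generation best matches x; the leftover reconstruction error
    is the out-of-distribution score (low = looks normal)."""
    z = 0.0
    for _ in range(steps):
        # gradient of ||G(z) - x||^2 with respect to z
        grad = 2.0 * np.dot(generate(z) - x, W)
        z -= lr * grad
    return float(np.linalg.norm(generate(z) - x))

in_dist = generate(2.0) + 0.05        # a point near the learned manifold
ood = np.array([4.0, -3.0])           # a point the generator cannot reach

print(ood_score(in_dist) < ood_score(ood))  # prints True
```

In practice the generator is a deep network trained only on normal scans, the latent search is done with an optimizer (or an encoder that amortizes it), and the score is typically combined with a discriminator-feature term; the sketch above keeps only the core reconstruction-error logic.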

Please see the “Current Research” tab for my publications on this topic.