An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization.

Title: An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization.
Publication Type: Journal Article
Year of Publication: 2021
Authors: Shen Y, Wu N, Phang J, Park J, Liu K, Tyagi S, Heacock L, Kim SG, Moy L, Cho K, Geras KJ
Journal: Med Image Anal
Volume: 68
Pagination: 101908
Date Published: 2021 Feb
ISSN: 1361-8423
Keywords: Breast, Breast Neoplasms, Early Detection of Cancer, Female, Humans, Mammography, Neural Networks, Computer
Abstract

Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.
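
The abstract describes a three-stage design: a low-capacity global network that scores the whole high-resolution image, a higher-capacity local network applied only to the most salient crops, and a fusion module that combines global and local evidence into the final prediction. The sketch below is a minimal illustration of that global-then-local pipeline, assuming PyTorch; the module sizes, number of regions, crop size, and ROI-selection rule are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the global-then-local pipeline described in the abstract.
# Layer sizes and the ROI-selection rule are illustrative, not the paper's model.
import torch
import torch.nn as nn


class GlobalLocalClassifier(nn.Module):
    def __init__(self, num_classes=2, num_rois=3, roi_size=128):
        super().__init__()
        self.num_rois = num_rois
        self.roi_size = roi_size  # assumes inputs are at least roi_size x roi_size
        # Low-capacity global network: coarse per-pixel evidence (saliency) over the whole image.
        self.global_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )
        # Higher-capacity local network applied only to the selected crops.
        self.local_net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion of global and local evidence into the final prediction.
        self.fusion = nn.Linear(num_classes + 128, num_classes)

    def forward(self, x):
        # x: (B, 1, H, W) high-resolution grayscale image
        saliency = self.global_net(x)               # (B, C, h, w) coarse saliency maps
        global_feat = saliency.flatten(2).mean(-1)  # (B, C) pooled global evidence

        # Select the most salient locations and crop fixed-size patches around them.
        score_map = saliency.sum(1)                 # (B, h, w)
        B, h, w = score_map.shape
        idx = score_map.flatten(1).topk(self.num_rois, dim=1).indices  # (B, num_rois)
        scale_y, scale_x = x.shape[2] / h, x.shape[3] / w

        local_feats = []
        for b in range(B):
            feats = []
            for i in idx[b]:
                cy = int((i // w).item() * scale_y)
                cx = int((i % w).item() * scale_x)
                y0 = max(0, min(cy - self.roi_size // 2, x.shape[2] - self.roi_size))
                x0 = max(0, min(cx - self.roi_size // 2, x.shape[3] - self.roi_size))
                patch = x[b:b + 1, :, y0:y0 + self.roi_size, x0:x0 + self.roi_size]
                feats.append(self.local_net(patch))  # (1, 128) per-crop features
            local_feats.append(torch.stack(feats, 0).mean(0))
        local_feat = torch.cat(local_feats, 0)        # (B, 128) aggregated local evidence

        logits = self.fusion(torch.cat([global_feat, local_feat], dim=1))
        return logits, saliency

Consistent with the weakly supervised setup in the abstract, training such a model would supervise only the fused logits (and, optionally, the pooled saliency) with an image-level classification loss, while the saliency maps serve as pixel-level localization at inference time.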

DOI: 10.1016/j.media.2020.101908
Alternate Journal: Med Image Anal
PubMed ID: 33383334
PubMed Central ID: PMC7828643
Grant List: P41 EB017183 / EB / NIBIB NIH HHS / United States
R21 CA225175 / CA / NCI NIH HHS / United States
Related Institute: MRI Research Institute (MRIRI)
