Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound.

Title: Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound.
Publication Type: Journal Article
Year of Publication: 2023
Authors: Zhou G, Mosadegh B
Journal: Acad Radiol
Date Published: 2023 Sep 02
ISSN: 1878-4046
Abstract

RATIONALE AND OBJECTIVES: To develop a deep learning model for the automated classification of breast ultrasound images as benign or malignant. More specifically, the application of vision transformers, ensemble learning, and knowledge distillation is explored for breast ultrasound classification.

MATERIALS AND METHODS: Single-view, B-mode ultrasound images were curated from the publicly available Breast Ultrasound Image (BUSI) dataset, which has categorical ground-truth labels (benign vs malignant) assigned by radiologists, with malignant cases confirmed by biopsy. The performance of vision transformers (ViT) is compared to that of convolutional neural networks (CNN), followed by a comparison between supervised, self-supervised, and randomly initialized ViTs. Subsequently, an ensemble of 10 independently trained ViTs, where the ensemble prediction is the unweighted average of each individual model's output, is compared to the performance of each ViT alone. Finally, we train a single ViT to emulate the ensembled ViTs using knowledge distillation.
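The two core operations described above, unweighted output averaging and distillation against the ensemble's soft labels, can be sketched in plain Python. This is a minimal illustration, not the authors' code; the function names and the use of binary cross-entropy as the distillation objective are assumptions for clarity.

```python
import math

def ensemble_average(per_model_probs):
    """Unweighted average of each model's predicted malignancy probability,
    mirroring the 10-ViT ensemble described in the methods (hypothetical helper)."""
    n_models = len(per_model_probs)
    n_images = len(per_model_probs[0])
    return [sum(p[i] for p in per_model_probs) / n_models for i in range(n_images)]

def distillation_loss(student_probs, teacher_probs, eps=1e-7):
    """Binary cross-entropy of the student's predictions against the ensemble's
    soft labels -- one common form of knowledge distillation (assumed objective)."""
    total = 0.0
    for s, t in zip(student_probs, teacher_probs):
        s = min(max(s, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(s) + (1.0 - t) * math.log(1.0 - s))
    return total / len(student_probs)

# Three hypothetical models' malignancy probabilities for four images:
probs = [
    [0.9, 0.2, 0.6, 0.1],
    [0.8, 0.3, 0.5, 0.2],
    [0.7, 0.1, 0.7, 0.3],
]
soft_labels = ensemble_average(probs)  # per-image soft targets for the student
```

The student ViT would then be trained to minimize `distillation_loss` between its own outputs and `soft_labels`, so a single model approximates the ensemble's averaged decision function.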

RESULTS: Using five-fold cross-validation on this dataset, ViTs outperform CNNs, and self-supervised ViTs outperform supervised and randomly initialized ViTs. The ensemble model achieves an area under the receiver operating characteristic curve (AuROC) and area under the precision-recall curve (AuPRC) of 0.977 and 0.965 on the test set, outperforming the average AuROC and AuPRC of the independently trained ViTs (0.958 ± 0.05 and 0.931 ± 0.016). The distilled ViT achieves an AuROC and AuPRC of 0.972 and 0.960.

CONCLUSION: Transfer learning and ensemble learning each offer increased performance independently and can be combined sequentially to further improve the final model. Furthermore, a single vision transformer can be trained to match the performance of an ensemble of vision transformers using knowledge distillation.

DOI: 10.1016/j.acra.2023.08.006
Alternate Journal: Acad Radiol
PubMed ID: 37666747
Related Institute:
Dalio Institute of Cardiovascular Imaging (Dalio ICI)

Weill Cornell Medicine
Department of Radiology
525 East 68th Street New York, NY 10065