TY - GEN
T1 - A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis
AU - Hosseinzadeh Taher, Mohammad Reza
AU - Haghighi, Fatemeh
AU - Feng, Ruibin
AU - Gotway, Michael B.
AU - Liang, Jianming
N1 - Funding Information:
Acknowledgments. This research has been supported partially by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and partially by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work has utilized the GPUs provided partially by the ASU Research Computing and partially by the Extreme Science and Engineering Discovery Environment (XSEDE) funded by the National Science Foundation (NSF) under grant number ACI-1548562. We thank Nahid Islam for evaluating the self-supervised methods on the PE detection target task. The content of this paper is covered by patents pending.
Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
AB - Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly-developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As the first step in this direction, we conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and 14 top self-supervised ImageNet models on 7 diverse medical tasks in comparison with the supervised ImageNet model. Furthermore, we present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images. Our comprehensive evaluation yields new insights: (1) pre-trained models on fine-grained data yield distinctive local representations that are more suitable for medical segmentation tasks, (2) self-supervised ImageNet models learn holistic features more effectively than supervised ImageNet models, and (3) continual pre-training can bridge the domain gap between natural and medical images. We hope that this large-scale open evaluation of transfer learning can direct the future research of deep learning for medical imaging. As open science, all codes and pre-trained models are available on our GitHub page https://github.com/JLiangLab/BenchmarkTransferLearning.
KW - ImageNet pre-training
KW - Self-supervised learning
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85116411321&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85116411321&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-87722-4_1
DO - 10.1007/978-3-030-87722-4_1
M3 - Conference contribution
AN - SCOPUS:85116411321
SN - 9783030877217
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 13
BT - Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health - 3rd MICCAI Workshop, DART 2021, and 1st MICCAI Workshop, FAIR 2021, Held in Conjunction with MICCAI 2021, Proceedings
A2 - Albarqouni, Shadi
A2 - Cardoso, M. Jorge
A2 - Dou, Qi
A2 - Kamnitsas, Konstantinos
A2 - Khanal, Bishesh
A2 - Rekik, Islem
A2 - Rieke, Nicola
A2 - Sheet, Debdoot
A2 - Tsaftaris, Sotirios
A2 - Xu, Daguang
A2 - Xu, Ziyue
PB - Springer Science and Business Media Deutschland GmbH
T2 - 3rd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2021, and the 1st MICCAI Workshop on Affordable Healthcare and AI for Resource Diverse Global Health, FAIR 2021, held in conjunction with 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021
Y2 - 27 September 2021 through 1 October 2021
ER -