TY - GEN
T1 - Learning semantics-enriched representation via self-discovery, self-classification, and self-restoration
AU - Haghighi, Fatemeh
AU - Hosseinzadeh Taher, Mohammad Reza
AU - Zhou, Zongwei
AU - Gotway, Michael B.
AU - Liang, Jianming
N1 - Funding Information:
Acknowledgments. This research was supported in part by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and in part by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work utilized GPUs provided in part by ASU Research Computing and in part by the Extreme Science and Engineering Discovery Environment (XSEDE), funded by the National Science Foundation (NSF) under grant number ACI-1548562. We thank Zuwei Guo for implementing Rubik’s cube and evaluating MedicalNet, M. M. Rahman Siddiquee for examining NiftyNet, and Jiaxuan Pang for evaluating I3D. The content of this paper is covered by patents pending.
Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We examine our Semantic Genesis against all publicly available pre-trained models, obtained by either self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation in various medical modalities (i.e., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, encouraging deep models to learn compelling semantic representation from abundant anatomical patterns resulting from consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis.
AB - Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We examine our Semantic Genesis against all publicly available pre-trained models, obtained by either self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation in various medical modalities (i.e., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, encouraging deep models to learn compelling semantic representation from abundant anatomical patterns resulting from consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis.
KW - 3D model pre-training
KW - Self-supervised learning
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85093081368&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093081368&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-59710-8_14
DO - 10.1007/978-3-030-59710-8_14
M3 - Conference contribution
AN - SCOPUS:85093081368
SN - 9783030597092
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 137
EP - 147
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings
A2 - Martel, Anne L.
A2 - Abolmaesumi, Purang
A2 - Stoyanov, Danail
A2 - Mateus, Diana
A2 - Zuluaga, Maria A.
A2 - Zhou, S. Kevin
A2 - Racoceanu, Daniel
A2 - Joskowicz, Leo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020
Y2 - 4 October 2020 through 8 October 2020
ER -