TY - GEN
T1 - Discriminative, Restorative, and Adversarial Learning
T2 - 4th MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
AU - Guo, Zuwei
AU - Islam, Nahid Ul
AU - Gotway, Michael B.
AU - Liang, Jianming
N1 - Funding Information:
Acknowledgments. We thank F. Haghighi, M. R. Hosseinzadeh Taher, and Z. Zhou for their discussions, debates, and support in implementing the earlier ideas behind “United & Unified” and in drafting earlier versions. This research has been supported in part by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and in part by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work has utilized the GPUs provided in part by the ASU Research Computing and in part by the Extreme Science and Engineering Discovery Environment (XSEDE) funded by the National Science Foundation (NSF) under grant numbers: ACI-1548562, ACI-1928147, and ACI-2005632. The content of this paper is covered by patents pending.
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this advantage, we have redesigned five prominent SSL methods, including Rotation, Jigsaw, Rubik’s Cube, Deep Clustering, and TransVW, and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy, in which a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversarial encoder for final full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that the stepwise incremental pretraining stabilizes the training of United models, resulting in significant performance gains and annotation cost reduction via transfer learning for five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. This performance is attributed to the synergy of the three SSL ingredients in our United framework unleashed via stepwise incremental pretraining. All code and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
AB - Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this advantage, we have redesigned five prominent SSL methods, including Rotation, Jigsaw, Rubik’s Cube, Deep Clustering, and TransVW, and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy, in which a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversarial encoder for final full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that the stepwise incremental pretraining stabilizes the training of United models, resulting in significant performance gains and annotation cost reduction via transfer learning for five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. This performance is attributed to the synergy of the three SSL ingredients in our United framework unleashed via stepwise incremental pretraining. All code and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
KW - Adversarial learning
KW - Discriminative learning
KW - Restorative learning
KW - Self-supervised learning
KW - Stepwise pretraining
KW - United framework
UR - http://www.scopus.com/inward/record.url?scp=85140440132&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140440132&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16852-9_7
DO - 10.1007/978-3-031-16852-9_7
M3 - Conference contribution
AN - SCOPUS:85140440132
SN - 9783031168512
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 66
EP - 76
BT - Domain Adaptation and Representation Transfer - 4th MICCAI Workshop, DART 2022, Held in Conjunction with MICCAI 2022, Proceedings
A2 - Kamnitsas, Konstantinos
A2 - Koch, Lisa
A2 - Islam, Mobarakol
A2 - Xu, Ziyue
A2 - Cardoso, Jorge
A2 - Dou, Qi
A2 - Rieke, Nicola
A2 - Tsaftaris, Sotirios
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 22 September 2022 through 22 September 2022
ER -