A Geometric Framework for Feature Mappings in Multimodal Fusion of Brain Image Data

Wen Zhang, Liang Mi, Paul M. Thompson, Yalin Wang

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

Fusing multimodal brain image features to strengthen statistical analysis has attracted considerable research interest. Generally, a feature mapping is learned during fusion so that cross-modality relationships in the multimodal data can be extracted more effectively in a common feature space. Most prior work achieves this goal with data-driven approaches that ignore the geometric properties of the feature spaces in which the data are embedded, leaving much of the available information untapped. Here, we propose to fuse multimodal brain images through a novel geometric approach. The key idea is to encode brain image features as local metric changes on brain shapes, so that the feature mapping can be solved efficiently by geometric mapping functions, i.e., quasiconformal and harmonic mappings. Our multimodal fusion framework (MFRM) proceeds in two steps: surface feature mapping and volumetric feature mapping. For each step, we design an informative Riemannian metric based on distinct brain anatomical features and achieve image fusion via diffeomorphic maps. We evaluate the proposed method on two brain image cohorts. The experimental results demonstrate the effectiveness of the framework, which yields better statistical performance than state-of-the-art data-driven methods.
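
For readers unfamiliar with the two mapping families named in the abstract, the following is a minimal mathematical sketch of the standard definitions they rest on. It is a reminder of textbook facts, not the paper's specific formulation; the surface M and metric g below are placeholders introduced here for illustration.

% A map f between surfaces is quasiconformal when its Beltrami coefficient
% mu_f is uniformly bounded below 1; mu_f = 0 recovers a conformal
% (angle-preserving) map.
\[
  \mu_f(z) \;=\; \frac{\partial f / \partial \bar{z}}{\partial f / \partial z},
  \qquad \lVert \mu_f \rVert_\infty < 1 .
\]

% A harmonic map minimizes the Dirichlet (harmonic) energy over a surface M
% equipped with a Riemannian metric g; changing g changes the minimizer.
\[
  E_g(f) \;=\; \int_M \lVert \nabla_g f \rVert^2 \, dA_g .
\]

Under this reading, encoding anatomical features as local metric changes (as the abstract describes) alters the metric g and hence the energy-minimizing diffeomorphic map, which is presumably how the feature mapping is solved geometrically; the exact construction is given in the paper itself.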

Original language: English (US)
Title of host publication: Information Processing in Medical Imaging - 26th International Conference, IPMI 2019, Proceedings
Editors: Siqi Bao, Albert C.S. Chung, James C. Gee, Paul A. Yushkevich
Publisher: Springer Verlag
Pages: 617-630
Number of pages: 14
ISBN (Print): 9783030203504
DOI: 10.1007/978-3-030-20351-1_48
State: Published - Jan 1, 2019
Event: 26th International Conference on Information Processing in Medical Imaging, IPMI 2019 - Hong Kong, China
Duration: Jun 2, 2019 - Jun 7, 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11492 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 26th International Conference on Information Processing in Medical Imaging, IPMI 2019
Country: China
City: Hong Kong
Period: 6/2/19 - 6/7/19

Keywords

  • diffusion MRI
  • Harmonic mapping
  • Multimodal fusion
  • Quasiconformal mapping
  • Riemannian metric
  • structural MRI

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

Zhang, W., Mi, L., Thompson, P. M., & Wang, Y. (2019). A Geometric Framework for Feature Mappings in Multimodal Fusion of Brain Image Data. In S. Bao, A. C. S. Chung, J. C. Gee, & P. A. Yushkevich (Eds.), Information Processing in Medical Imaging - 26th International Conference, IPMI 2019, Proceedings (pp. 617-630). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11492 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-20351-1_48
