Deep multimodal fusion for semantic image segmentation: A survey

Title: Deep multimodal fusion for semantic image segmentation: A survey
Type of Publication: Journal Article
Year of Publication: 2021
Authors: Zhang Y, Sidibe D, Morel O, Meriaudeau F
Journal: Image and Vision Computing
Volume: 105
Pagination: 104042
Date Published: January 2021
Type of Article: Review
ISSN: 0262-8856
Keywords: Deep learning, Image Fusion, Multi-modal, Semantic segmentation
Abstract:

Recent advances in deep learning have shown excellent performance in various scene understanding tasks. However, in some complex environments or under challenging conditions, it is necessary to employ multiple modalities that provide complementary information on the same scene. A variety of studies have demonstrated that deep multimodal fusion for semantic image segmentation achieves significant performance gains. These fusion approaches exploit the benefits of multiple information sources to automatically generate an optimal joint prediction. This paper describes the essential background concepts of deep multimodal fusion and the relevant applications in computer vision. In particular, we provide a systematic survey of multimodal fusion methodologies, multimodal segmentation datasets, and quantitative evaluations on the benchmark datasets. Existing fusion methods are summarized according to a common taxonomy: early fusion, late fusion, and hybrid fusion. Based on their performance, we analyze the strengths and weaknesses of the different fusion strategies. Current challenges and design choices are discussed, aiming to provide the reader with a comprehensive and heuristic view of deep multimodal image segmentation. © 2020 Elsevier B.V. All rights reserved.
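For readers unfamiliar with the taxonomy, the sketch below contrasts the two simplest strategies, early fusion (modalities stacked at the input of a single network) and late fusion (one stream per modality, merged at the prediction level), in a toy RGB-D segmentation setting. It is a minimal illustration only: the module names, channel sizes, and score-averaging fusion rule are illustrative assumptions, not architectures taken from the surveyed papers.

    # Minimal sketch (assumptions, not the paper's models): early vs. late
    # fusion for RGB-D semantic segmentation in PyTorch.
    import torch
    import torch.nn as nn

    class EarlyFusionSeg(nn.Module):
        """Early fusion: concatenate modalities at the raw input, then run
        a single shared network."""
        def __init__(self, num_classes: int):
            super().__init__()
            # 4 input channels: RGB (3) + depth (1), stacked before learning.
            self.backbone = nn.Sequential(
                nn.Conv2d(4, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

        def forward(self, rgb, depth):
            x = torch.cat([rgb, depth], dim=1)  # fuse at the input level
            return self.classifier(self.backbone(x))

    class LateFusionSeg(nn.Module):
        """Late fusion: each modality gets its own stream; class scores are
        merged at the end (here by simple averaging)."""
        def __init__(self, num_classes: int):
            super().__init__()
            def stream(in_ch):
                return nn.Sequential(
                    nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, num_classes, kernel_size=1),
                )
            self.rgb_stream = stream(3)
            self.depth_stream = stream(1)

        def forward(self, rgb, depth):
            # Independent per-modality predictions, fused by averaging.
            return 0.5 * (self.rgb_stream(rgb) + self.depth_stream(depth))

    if __name__ == "__main__":
        rgb = torch.randn(1, 3, 64, 64)
        depth = torch.randn(1, 1, 64, 64)
        for model in (EarlyFusionSeg(13), LateFusionSeg(13)):
            print(type(model).__name__, model(rgb, depth).shape)  # (1, 13, 64, 64)

Hybrid fusion, the third category in the taxonomy, combines both ideas by exchanging features between the streams at intermediate layers rather than only at the input or output.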

DOI: 10.1016/j.imavis.2020.104042