mDALU: Multi-Source Domain Adaptation and Label Unification with Partial Datasets

Abstract

One challenge of object recognition is to generalize to new domains, to more classes, and/or to new modalities. This necessitates methods to combine and reuse existing datasets that may belong to different domains, have partial annotations, and/or have different data modalities. This paper formulates this as a multi-source domain adaptation and label unification problem, and proposes a novel method for it. Our method consists of a partially-supervised adaptation stage and a fully-supervised adaptation stage. In the former, partial knowledge is transferred from multiple source domains to the target domain and fused therein. Negative transfer between mismatched label spaces is mitigated via three new modules: domain attention, uncertainty maximization, and attention-guided adversarial alignment. In the latter, knowledge is transferred in the unified label space after a label completion process with pseudo-labels. Extensive experiments on three different tasks – image classification, 2D semantic image segmentation, and joint 2D-3D semantic segmentation – show that our method significantly outperforms all competing methods.
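To make the label-unification idea concrete: each source classifier only outputs probabilities over its own partial label space, and these must be fused into a unified label space before pseudo-labels can complete the missing annotations. Below is a minimal sketch of one plausible fusion scheme; the function name, the max-fusion rule, and the confidence threshold are illustrative assumptions for this page, not the authors' actual implementation (see the paper for the full method with domain attention and adversarial alignment):

```python
import numpy as np

def unify_pseudo_labels(probs_a, classes_a, probs_b, classes_b,
                        num_unified, thresh=0.9):
    """Fuse softmax outputs from two sources with partial label spaces
    into pseudo-labels over a unified label space.

    probs_a/probs_b: (n, |classes_a|) / (n, |classes_b|) softmax scores.
    classes_a/classes_b: indices of each source's classes in the
    unified space (they may overlap).  Samples whose best fused score
    falls below `thresh` are left unlabeled (-1).
    """
    n = probs_a.shape[0]
    unified = np.zeros((n, num_unified))
    # Scatter each source's scores into its slots of the unified space;
    # where the sources overlap, keep the more confident score (max-fusion).
    unified[:, classes_a] = np.maximum(unified[:, classes_a], probs_a)
    unified[:, classes_b] = np.maximum(unified[:, classes_b], probs_b)
    conf = unified.max(axis=1)
    labels = unified.argmax(axis=1)
    labels[conf < thresh] = -1  # low-confidence samples stay unlabeled
    return labels

# Toy usage: source A covers unified classes {0, 1}, source B covers {1, 2}.
probs_a = np.array([[0.95, 0.05], [0.2, 0.8]])
probs_b = np.array([[0.50, 0.50], [0.1, 0.9]])
labels = unify_pseudo_labels(probs_a, [0, 1], probs_b, [1, 2], num_unified=3)
```

In this toy example the first sample is confidently claimed by source A's class 0, while the second is labeled as class 2 from source B; the thresholding step is what keeps ambiguous target samples out of the subsequent fully-supervised stage.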

Publication

“mDALU: Multi-Source Domain Adaptation and Label Unification with Partial Datasets”
Rui Gong, Dengxin Dai, Yuhua Chen, Wen Li, Luc Van Gool
International Conference on Computer Vision, 2021

[Paper] [BibTex]

@InProceedings{Gong_2021_ICCV,
  author    = {Gong, Rui and Dai, Dengxin and Chen, Yuhua and Li, Wen and Van Gool, Luc},
  title     = {mDALU: Multi-Source Domain Adaptation and Label Unification With Partial Datasets},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {8876-8885}
}