Adversarial Continuous Learning in Unsupervised Domain Adaptation

Youshan Zhang and Brian D. Davison

Full Paper (15 pages)
Author's version: PDF (1.3MB)

Abstract
Domain adaptation has emerged as a crucial technique to address the problem of domain shift, which arises when a model trained on one population of data is applied to another. Adversarial learning has made impressive progress in learning a domain-invariant representation by building bridges between two domains. However, existing adversarial learning methods tend only to employ a domain discriminator or to generate adversarial examples that affect the original domain distribution. Moreover, little work has considered confident continuous learning using an existing source classifier for domain adaptation. In this paper, we develop adversarial continuous learning in a unified deep architecture. We also propose a novel correlated loss to minimize the discrepancy between the source and target domains. Our model increases robustness by incorporating high-confidence samples from the target domain. The transfer loss jointly considers the original source images and transfer examples in the target domain. Extensive experiments demonstrate significant improvements in classification accuracy over the state of the art.
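The abstract's idea of "incorporating high-confidence samples from the target domain" is commonly realized by confidence-thresholded pseudo-labeling. The sketch below illustrates that general idea only; the function name, threshold value, and selection rule are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def select_confident_targets(probs, threshold=0.9):
    """Select target-domain samples whose top predicted class probability
    exceeds a confidence threshold, returning their indices and
    pseudo-labels.

    Illustrative sketch of confidence-based sample selection; the
    paper's exact selection criterion may differ.
    """
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)        # top class probability per sample
    pseudo_labels = probs.argmax(axis=1)  # predicted class per sample
    keep = confidence >= threshold        # mask of confident samples
    return np.flatnonzero(keep), pseudo_labels[keep]

# Example: classifier outputs for 4 target samples over 3 classes
probs = [[0.95, 0.03, 0.02],
         [0.40, 0.35, 0.25],
         [0.05, 0.92, 0.03],
         [0.33, 0.33, 0.34]]
idx, labels = select_confident_targets(probs, threshold=0.9)
print(idx.tolist(), labels.tolist())  # → [0, 2] [0, 1]
```

Only samples 0 and 2 clear the 0.9 threshold, so only they (with pseudo-labels 0 and 1) would be folded into further training rounds; the ambiguous samples are held back.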

In Pattern Recognition. Proceedings of ICPR International Workshops and Challenges, Part II, LNCS 12662, pages 672-687, Springer. Presented at The Third International Workshop on Deep Learning for Pattern Recognition (DLPR20), held in conjunction with ICPR 2020, January 2021.



Last modified: 5 March 2021
Brian D. Davison