• Author(s): Yonghao Xu, Pedram Ghamisi, Yannis Avrithis

Multi-target unsupervised domain adaptation (UDA) for semantic segmentation aims to develop a single, unified model that handles the domain shift toward multiple target domains. This setting was recently introduced in cross-domain semantic segmentation because annotations for dense prediction are costly to obtain. Existing solutions typically require labeled data from the source domain and unlabeled data from multiple target domains during training, collectively referred to as “external” data.

However, when faced with new unlabeled data from an unseen target domain, these solutions either generalize poorly or must be retrained from scratch on all data. To address these limitations, a new strategy called “multi-target UDA without external data” is proposed for semantic segmentation: the segmentation model is first trained on external data and then adapted to a new, unseen target domain without accessing any external data.
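As a rough illustration of this two-stage workflow, the sketch below separates training on external data from the later adaptation step; all function, loader, and loss names are hypothetical placeholders under assumed PyTorch conventions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def train_on_external(model, source_loader, target_loaders, optimizer, align_loss):
    """Stage 1 (with external data): labeled source batches plus unlabeled batches
    from the known target domains drive a standard multi-target UDA objective
    (supervised loss plus some cross-domain alignment term)."""
    model.train()
    for (x_src, y_src), x_targets in zip(source_loader, zip(*target_loaders)):
        loss = F.cross_entropy(model(x_src), y_src)
        for x_tgt in x_targets:                       # unlabeled target batches
            loss = loss + align_loss(model, x_src, x_tgt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def adapt_without_external(model, new_target_loader, optimizer, adapt_loss):
    """Stage 2 (no external data): only unlabeled images from the new, unseen
    target domain are available; the stage-1 model is adapted with an
    unsupervised objective (e.g. self-distillation)."""
    model.train()
    for x_new in new_target_loader:
        loss = adapt_loss(model, x_new)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```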

This strategy is more scalable than existing solutions and remains applicable when the external data becomes inaccessible. The method combines self-distillation with adversarial learning, preserving the knowledge acquired from the external data during adaptation through “one-way” adversarial learning. Extensive experiments on four benchmark urban driving datasets show that the strategy significantly outperforms current state-of-the-art solutions even without access to external data.
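A minimal sketch of how such an adaptation objective could be wired up is given below, assuming a frozen teacher copy of the stage-1 model for self-distillation and a discriminator that separates teacher outputs from student outputs, with only the student receiving the adversarial gradient (a “one-way” signal). All module names, the loss weight `lam_adv`, and the use of output-level discrimination are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def adaptation_step(student, teacher, discriminator, x_new,
                    opt_student, opt_disc, lam_adv=0.001):
    """One update on a batch of unlabeled images from the unseen target domain.

    Self-distillation: the student matches the frozen teacher's soft predictions,
    preserving knowledge learned from the external data.
    One-way adversarial learning: the discriminator learns to tell student outputs
    from teacher outputs; only the student is pushed toward the teacher's output
    distribution, while the teacher stays fixed.
    """
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x_new)                     # teacher is never updated
    s_logits = student(x_new)

    # Self-distillation: KL divergence between soft segmentation maps
    distill = F.kl_div(F.log_softmax(s_logits, dim=1),
                       F.softmax(t_logits, dim=1), reduction="batchmean")

    # Adversarial term: student tries to make its outputs look like the teacher's (label 1)
    d_out = discriminator(F.softmax(s_logits, dim=1))
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    opt_student.zero_grad()
    (distill + lam_adv * adv).backward()
    opt_student.step()

    # Discriminator update: teacher outputs -> 1, student outputs -> 0
    d_teacher = discriminator(F.softmax(t_logits, dim=1))
    d_student = discriminator(F.softmax(s_logits.detach(), dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_teacher, torch.ones_like(d_teacher))
              + F.binary_cross_entropy_with_logits(d_student, torch.zeros_like(d_student)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()
```

In this sketch, freezing the teacher is what makes the adversarial signal one-way: the discriminator's feedback only ever moves the student toward the teacher's output distribution, so knowledge from the external-data stage is retained rather than overwritten.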