Continually Learning Self-Supervised Representations with Projected Functional Regularization / Gomez-Villa A.; Twardowski B.; Yu L.; Bagdanov A.D.; Van De Weijer J. - 2022-June (2022), pp. 3866-3876. (Paper presented at the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)) [DOI: 10.1109/CVPRW56347.2022.00432].

Continually Learning Self-Supervised Representations with Projected Functional Regularization

Bagdanov A. D.;
2022

Abstract

Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally – they are, in fact, mostly used only as a pre-training phase over IID data. In this work we investigate self-supervised methods in continual learning regimes without any replay mechanism. We show that naive functional regularization, also known as feature distillation, leads to lower plasticity and limits continual learning performance. Instead, we propose Projected Functional Regularization in which a separate temporal projection network ensures that the newly learned feature space preserves information of the previous one, while at the same time allowing for the learning of new features. This prevents forgetting while maintaining the plasticity of the learner. Comparison with other incremental learning approaches applied to self-supervision demonstrates that our method obtains competitive performance in different scenarios and on multiple datasets.
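The abstract describes the mechanism only at a high level: a frozen copy of the encoder from the previous task provides targets, while a learnable temporal projector maps the current features onto the previous feature space, so the regularizer constrains what information is preserved rather than pinning the features themselves. Below is a minimal sketch of how such a projected functional regularization term could be wired up, assuming a PyTorch-style setup, a small MLP as the temporal projector, and a negative-cosine distillation loss; all names and hyperparameters (TemporalProjector, lambda_reg, hidden_dim) are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalProjector(nn.Module):
    """Small MLP mapping the current feature space onto the previous one.
    Hidden/output sizes are illustrative assumptions, not the paper's values."""
    def __init__(self, dim=2048, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        return self.net(x)

def projected_regularization_loss(f_new, f_old, projector):
    """Distill through the projector: the projected new features should match
    the frozen old features (negative cosine similarity), leaving the new
    encoder free to learn additional directions."""
    p = projector(f_new)
    return -F.cosine_similarity(p, f_old.detach(), dim=-1).mean()

def training_step(x, new_encoder, old_encoder, projector, ssl_loss_fn, lambda_reg=1.0):
    """One training step on the current task; lambda_reg is an assumed weight."""
    f_new = new_encoder(x)            # trainable encoder on the current task
    with torch.no_grad():
        f_old = old_encoder(x)        # frozen encoder from the previous task
    loss_ssl = ssl_loss_fn(f_new)     # placeholder for any self-supervised objective
    loss_reg = projected_regularization_loss(f_new, f_old, projector)
    return loss_ssl + lambda_reg * loss_reg

One plausible design, consistent with the abstract, is to train the projector jointly with the new encoder on each task and re-initialize it at every task boundary; the self-supervised objective itself (which in practice would operate on augmented view pairs) is left abstract here.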
2022
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Gomez-Villa A.; Twardowski B.; Yu L.; Bagdanov A.D.; Van De Weijer J.
Files in this item:
There are no files associated with this item.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1288106
Citations
  • PubMed Central: ND
  • Scopus: 12
  • Web of Science: 0