CD-IMM: The Benefits of Domain-based Mixture Models in Bayesian Continual Learning / Antonio Carta; Daniele Castellana. - ELECTRONIC. - 249:(2024), pp. 25-36. (Paper presented at the Continual Artificial Intelligence Unconference).
CD-IMM: The Benefits of Domain-based Mixture Models in Bayesian Continual Learning
Daniele Castellana
2024
Abstract
Real-world data streams are characterised by the continual arrival of new and old classes, possibly on novel domains. Bayesian non-parametric mixture models provide a natural solution for continual learning thanks to their ability to create new components on the fly when new data are observed. However, popular class-based and time-based mixtures are often tested on simplified streams (e.g., class-incremental), where shortcuts can be exploited to infer drifts. We hypothesise that \emph{domain-based mixtures are more effective on natural streams}. Our proposed method, CD-IMM, exemplifies this approach by learning an infinite mixture of domains for each class. We validate our hypothesis on a natural scenario featuring a mix of class repetitions and novel domains.
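To illustrate the core idea of a per-class infinite mixture of domains, the following is a minimal sketch of a Chinese Restaurant Process (CRP) style assignment rule, where each class maintains its own set of domain components and a new component can be opened on the fly when a sample looks novel. This is an illustrative assumption, not the actual CD-IMM algorithm: the class name `PerClassCRP`, the 1-D Gaussian likelihood with fixed variance, and all hyperparameters are hypothetical simplifications.

```python
import math
from collections import defaultdict


class PerClassCRP:
    """Hypothetical sketch: one CRP mixture of 'domain' components per class.

    Each component is summarised by (count, running mean); the likelihood is
    a 1-D Gaussian with fixed variance. These are illustrative choices only.
    """

    def __init__(self, alpha=1.0, var=0.1, tau=10.0):
        self.alpha = alpha  # CRP concentration: propensity to open new components
        self.var = var      # fixed within-component variance (simplification)
        self.tau = tau      # broad prior variance for a brand-new component
        # class label -> list of (count, mean) per domain component
        self.components = defaultdict(list)

    @staticmethod
    def _loglik(x, mean, v):
        # log density of N(mean, v) at x
        return -0.5 * math.log(2 * math.pi * v) - 0.5 * (x - mean) ** 2 / v

    def assign(self, x, y):
        """MAP-assign sample x of class y to an existing domain component,
        or open a new one; returns the chosen component index."""
        comps = self.components[y]
        n = sum(c for c, _ in comps)
        # existing components: CRP prior mass times Gaussian likelihood
        scores = [math.log(c / (n + self.alpha)) + self._loglik(x, m, self.var)
                  for c, m in comps]
        # new component: CRP mass for a new table, broad predictive around 0
        scores.append(math.log(self.alpha / (n + self.alpha))
                      + self._loglik(x, 0.0, self.var + self.tau))
        k = max(range(len(scores)), key=scores.__getitem__)
        if k == len(comps):
            comps.append((1, x))  # open a new domain component at x
        else:
            c, m = comps[k]
            comps[k] = (c + 1, m + (x - m) / (c + 1))  # running-mean update
        return k
```

Because each class owns an independent mixture, a repeated class with familiar inputs reuses its existing domain components, while an input far from all of them (a novel domain) spawns a new component; other classes are unaffected.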