
Luca Franceschi; Michele Donini; Paolo Frasconi; Massimiliano Pontil. Forward and Reverse Gradient-Based Hyperparameter Optimization. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 2017, pp. 1165-1173.

Forward and Reverse Gradient-Based Hyperparameter Optimization

Paolo Frasconi; Massimiliano Pontil
2017

Abstract

We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm, such as stochastic gradient descent. These procedures mirror the two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al. (2015) but does not require reversible dynamics. Additionally, we explore the use of constraints on the hyperparameters. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up hyperparameter optimization on large datasets. We present a series of experiments on image and phone classification tasks; in the second task, previous gradient-based approaches are prohibitively expensive. We show that our real-time algorithm yields state-of-the-art results in affordable time.
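To make the forward-mode idea concrete, the following is a minimal sketch, not the authors' implementation: forward-mode hypergradient computation for a single hyperparameter, the learning rate eta of plain gradient descent on a toy quadratic training loss. The data (A, b, w_val) and all function names are invented for illustration. The tangent vector Z_t = dw_t/d(eta) is propagated alongside the weights, so memory does not grow with the number of iterations.

import numpy as np

# Toy setup (illustrative, not from the paper): quadratic training loss
# L(w) = 0.5*w'Aw - b'w and validation error E(w) = 0.5*||w - w_val||^2.
rng = np.random.default_rng(0)
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
A = A.T @ A                       # symmetric positive definite Hessian
b = rng.standard_normal(d)
w_val = rng.standard_normal(d)

def grad_train(w):
    return A @ w - b              # gradient of the training loss

def train_and_hypergrad(eta, T=200):
    # Forward mode: propagate Z_t = dw_t/d(eta) alongside w_t, so the
    # memory cost is independent of the number of iterations T.
    w = np.zeros(d)
    Z = np.zeros(d)               # dw_0/d(eta) = 0
    for _ in range(T):
        g = grad_train(w)
        # Differentiating w_{t+1} = w_t - eta * g(w_t) with respect to eta:
        # Z_{t+1} = Z_t - g(w_t) - eta * H(w_t) @ Z_t, with H = A here.
        Z = Z - g - eta * (A @ Z)
        w = w - eta * g
    hypergrad = (w - w_val) @ Z   # dE(w_T)/d(eta) = grad E(w_T) . Z_T
    return w, hypergrad

w_T, hg = train_and_hypergrad(eta=0.05)

# Sanity check against central finite differences.
eps = 1e-6
E = lambda eta: 0.5 * np.sum((train_and_hypergrad(eta)[0] - w_val) ** 2)
print(hg, (E(0.05 + eps) - E(0.05 - eps)) / (2 * eps))

A reverse-mode variant would instead store the trajectory w_0, ..., w_T and run a backward pass, which is cheaper per step when there are many hyperparameters but needs memory proportional to T; per the abstract, the paper's formulation of this pass avoids the reversible-dynamics requirement of Maclaurin et al. (2015).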
Year: 2017
Published in: Proceedings of the 34th International Conference on Machine Learning
Conference: 34th International Conference on Machine Learning, Sydney, Australia, 2017
Authors: Luca Franceschi; Michele Donini; Paolo Frasconi; Massimiliano Pontil
Files in this record:

File: franceschi17a.pdf
Access: Open access
Description: Main article
Type: Publisher's PDF (Version of record)
License: All rights reserved
Size: 404.86 kB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1105577
Citations
  • Scopus 89