Franceschi L.; Donini M.; Frasconi P.; Pontil M. On hyperparameter optimization in learning systems. Electronic publication (2019). Presented at the 5th International Conference on Learning Representations (ICLR 2017), 2017.
On hyperparameter optimization in learning systems
Frasconi P.; Pontil M.
2019
Abstract
We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. The reverse-mode procedure extends previous work by Maclaurin et al. (2015) and offers the opportunity to insert constraints on the hyperparameters in a natural way. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up the overall hyperparameter optimization process.
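
To make the forward-mode idea mentioned in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it tunes only the learning rate of plain gradient descent on a toy quadratic training loss, with a quadratic validation error. The problem setup, variable names, and update rule for the hyperparameter are illustrative assumptions; the general method applies to any iterative learning algorithm and hyperparameter vector.

```python
import numpy as np

# Forward-mode hypergradient sketch (illustrative only): propagate
# Z_t = dw_t / d(eta) alongside the training iterates w_t, then combine
# Z_T with the validation gradient to get dE(w_T)/d(eta).

rng = np.random.default_rng(0)
d = 5
A = np.diag(rng.uniform(0.5, 2.0, size=d))   # curvature of the toy training loss
b = rng.normal(size=d)                       # linear term of the toy training loss
w_val = rng.normal(size=d)                   # target defining the toy validation error

def train_grad(w):
    # Gradient of the training loss 0.5 * w'Aw - b'w
    return A @ w - b

def val_grad(w):
    # Gradient of the validation error 0.5 * ||w - w_val||^2
    return w - w_val

def forward_hypergradient(eta, T=100):
    """Return dE(w_T)/d(eta) and w_T for gradient descent with step size eta."""
    w = np.zeros(d)
    Z = np.zeros(d)                          # dw_0/d(eta) = 0
    for _ in range(T):
        g = train_grad(w)
        # w_{t+1} = w_t - eta * g(w_t)
        # Z_{t+1} = Z_t - g(w_t) - eta * H(w_t) @ Z_t, with Hessian H = A here
        Z = Z - g - eta * (A @ Z)
        w = w - eta * g
    return val_grad(w) @ Z, w

# "Real-time" style hyperparameter update: one gradient step on eta
# using the hypergradient computed in the forward pass.
eta = 0.05
hypergrad, w_T = forward_hypergradient(eta)
eta -= 0.01 * hypergrad
print(f"hypergradient = {hypergrad:.4f}, updated eta = {eta:.4f}")
```

Because the derivative Z_t is carried forward with the training iterates, the hypergradient is available as soon as training finishes (or even partway through), without storing the whole trajectory; this is the space/time trade-off relative to reverse-mode noted in the abstract.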



