
Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques / Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, Philippe L. Toint. - In: EURO JOURNAL ON COMPUTATIONAL OPTIMIZATION. - ISSN 2192-4406. - Print. - 10 (2022), article 100043. [10.1016/j.ejco.2022.100043]

Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques

Stefania Bellavia; Gianmarco Gurioli; Benedetta Morini; Philippe L. Toint
2022

Abstract

A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in expectation) an ϵ-approximate minimizer of arbitrary order q ≥ 1 in at most O(ϵ^{−(q+1)}) inexact evaluations of the function and its derivatives, providing the first such result for general optimality orders. The impact of intrinsic noise limiting the validity of the assumptions is also discussed, and it is shown that difficulties are unlikely to occur in the first-order version of the algorithm for sufficiently large gradients. Conversely, should these assumptions fail for specific realizations, then "degraded" optimality guarantees are shown to hold when failure occurs. These conclusions are then discussed and illustrated in the context of subsampling methods for finite-sum optimization.
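The subsampling setting for finite-sum optimization mentioned above admits a compact illustration. Below is a minimal, hypothetical sketch (not the paper's algorithm, whose accuracy conditions are probabilistic and more refined) of a first-order trust-region iteration in which the function and gradient estimates are computed on a random subsample of the terms of a least-squares finite sum; all function names, parameter values, and the synthetic problem are illustrative assumptions.

```python
import numpy as np

def make_finite_sum(n_samples=200, dim=5, seed=0):
    """Synthetic finite sum f(x) = (1/N) sum_i 0.5*(a_i . x - b_i)^2
    with a known minimizer, so the noise vanishes at the solution."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n_samples, dim))
    x_star = rng.normal(size=dim)
    return A, A @ x_star

def sub_f(A, b, x, idx):
    """Subsampled (inexact) function value over the batch idx."""
    r = A[idx] @ x - b[idx]
    return 0.5 * np.mean(r * r)

def sub_grad(A, b, x, idx):
    """Subsampled (inexact) gradient over the batch idx."""
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx)

def noisy_trust_region(A, b, x0, batch=32, delta=1.0, eta=0.1,
                       gamma=2.0, eps=1e-3, max_iter=2000, seed=1):
    """First-order trust-region loop on subsampled estimates:
    a linear model is minimized on the ball of radius delta, and the
    radius grows or shrinks according to the achieved-vs-predicted
    decrease, both measured on the same subsample."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(b)
    for _ in range(max_iter):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        g = sub_grad(A, b, x, idx)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:          # approximate first-order point (in estimate)
            break
        s = -(delta / gnorm) * g  # minimizer of the linear model on the ball
        predicted = delta * gnorm # model decrease m(0) - m(s)
        actual = sub_f(A, b, x, idx) - sub_f(A, b, x + s, idx)
        if actual / predicted >= eta:   # successful: accept and expand
            x = x + s
            delta *= gamma
        else:                           # unsuccessful: shrink the region
            delta /= gamma
    return x
```

Because the accuracy of the sampled estimates here depends on the residuals, the "intrinsic noise" shrinks near the solution of this particular problem; in general, as the abstract notes, the probabilistic accuracy assumptions can fail and only degraded guarantees remain.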
Files in this product:

EURO_BGMT5.pdf
  Open access
  Type: Publisher's PDF (version of record)
  License: Open Access
  Size: 616.8 kB
  Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1286420
Citations
  • PMC: ND
  • Scopus: 4
  • Web of Science: 3