
A Study on Truncated Newton Methods for Linear Classification / Leonardo Galli; Chih-Jen Lin. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS. - ISSN 2162-2388. - Electronic. - (2021), pp. 1-14. [10.1109/TNNLS.2020.3045836]

A Study on Truncated Newton Methods for Linear Classification

Leonardo Galli
Writing – Original Draft Preparation
2021

Abstract

Truncated Newton (TN) methods have been a useful technique for large-scale optimization. Instead of obtaining the full Newton direction, a truncated method approximately solves the Newton equation with an inner conjugate gradient (CG) procedure (TNCG for the whole method). These methods have been employed to efficiently solve linear classification problems. However, even in this deeply studied field, various theoretical and numerical aspects were not completely explored. The first contribution of this work is to comprehensively study the global and local convergence when TNCG is applied to linear classification. Because of the lack of twice differentiability under some losses, many past works cannot be applied here. We prove various missing pieces of theory from scratch and clarify many proper references. The second contribution is to study the termination of the CG method. For the first time when TNCG is applied to linear classification, we show that the inner stopping condition strongly affects the convergence speed. We propose using a quadratic stopping criterion to achieve both robustness and efficiency. The third contribution is that of combining the study on inner stopping criteria with that of preconditioning. We discuss how convergence theory is affected by preconditioning and finally propose an effective preconditioned TNCG.
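The ingredients named in the abstract, an outer Newton iteration, an inner CG solve of the Newton equation, and a quadratic-model inner stopping rule, can be sketched as follows. This is an illustrative sketch for L2-regularized logistic regression, not the paper's code: the dataset, the regularization parameter `lam`, and the stopping tolerance `tau` are assumptions chosen for the demo.

```python
import numpy as np

def make_problem(X, y, lam=1.0):
    """L2-regularized logistic regression: f(w) = 0.5*lam*||w||^2 + sum_i log(1 + exp(-y_i x_i.w))."""
    def f(w):
        z = y * (X @ w)
        return 0.5 * lam * (w @ w) + np.sum(np.logaddexp(0.0, -z))

    def grad(w):
        z = y * (X @ w)
        sig = 1.0 / (1.0 + np.exp(z))            # sigma(-z_i)
        return lam * w - X.T @ (y * sig)

    def hess_vec(w, v):
        z = y * (X @ w)
        s = 1.0 / (1.0 + np.exp(-z))
        D = s * (1.0 - s)                        # diagonal factor of the Hessian
        return lam * v + X.T @ (D * (X @ v))

    return f, grad, hess_vec

def cg_quadratic_stop(hv, g, tau=0.01, max_iter=250):
    """Approximately solve H d = -g by CG; stop when the last step's decrease of the
    quadratic model q(d) = g.d + 0.5 d.Hd falls below tau times the average decrease."""
    d = np.zeros_like(g)
    r = -g.copy()                                # residual of H d = -g at d = 0
    p = r.copy()
    rr = r @ r
    gg = g @ g
    q = 0.0                                      # q(d_0) = 0
    for i in range(1, max_iter + 1):
        Hp = hv(p)
        alpha = rr / (p @ Hp)
        d += alpha * p
        q_new = q - 0.5 * alpha * rr             # exact model decrease of one CG step
        r -= alpha * Hp
        rr_new = r @ r
        # quadratic stopping rule (q_new < 0): last decrease <= tau * average decrease
        if i * (q_new - q) >= tau * q_new or rr_new <= 1e-20 * gg:
            break
        p = r + (rr_new / rr) * p
        rr, q = rr_new, q_new
    return d

def tncg(f, grad, hess_vec, w0, tol=1e-6, max_outer=100):
    """Truncated Newton: inner CG direction plus an Armijo backtracking line search."""
    w = w0.copy()
    for _ in range(max_outer):
        g = grad(w)
        if np.linalg.norm(g) <= tol:
            break
        d = cg_quadratic_stop(lambda v: hess_vec(w, v), g)
        step, fw, gTd = 1.0, f(w), g @ d
        while f(w + step * d) > fw + 1e-4 * step * gTd and step > 1e-10:
            step *= 0.5
        w = w + step * d
    return w
```

A usage sketch: on a tiny separable dataset, `tncg` drives the gradient norm below the tolerance in a handful of outer iterations, since each inner CG solve delivers an inexact but good Newton direction.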
2021, pp. 1-14
Goal 9: Industry, Innovation, and Infrastructure
Leonardo Galli; Chih-Jen Lin
Files in this record:

galli21a.pdf
Access: Closed access
Description: Main article
Type: Final refereed version (Postprint, Accepted manuscript)
License: All rights reserved
Size: 1.52 MB
Format: Adobe PDF
(Request a copy)

tncg_supplement.pdf
Access: Open access
Description: Supplementary Materials
Type: Other
License: Open Access
Size: 4.77 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1221395
Citations
  • Scopus: 7
  • Web of Science: 12