SEQUENTIAL PENALTY DERIVATIVE-FREE METHODS FOR NONLINEAR CONSTRAINED OPTIMIZATION / G. Liuzzi; S. Lucidi; M. Sciandrone. - In: SIAM JOURNAL ON OPTIMIZATION. - ISSN 1052-6234. - Print. - 20:(2010), pp. 2614-2635.
SEQUENTIAL PENALTY DERIVATIVE-FREE METHODS FOR NONLINEAR CONSTRAINED OPTIMIZATION
SCIANDRONE, MARCO
2010
Abstract
We consider the problem of minimizing a continuously differentiable function of several variables subject to smooth nonlinear constraints. We assume that the first-order derivatives of the objective function and of the constraints can be neither calculated nor explicitly approximated. Hence, every minimization procedure must rely solely on a suitable sampling of the problem functions. Such problems arise in many industrial and scientific applications, which motivates the increasing interest in derivative-free methods for their solution. The aim of the paper is to extend a sequential penalty approach for nonlinear programming to the derivative-free context. This approach solves the original problem through a sequence of approximate minimizations of a merit function in which the penalization of constraint violation is progressively increased. In particular, under standard assumptions, we establish a general theoretical result on the connections between the sampling technique and the updating of the penalization that guarantee convergence to stationary points of the constrained problem. On the basis of this general result, we propose a new method and prove its convergence to stationary points of the constrained problem. The computational behavior of the method has been evaluated on both a set of test problems and a real application. The results obtained, together with a comparison with other well-known derivative-free software, show the viability of the proposed sequential penalty approach.
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| absSIAM10.pdf | Other | All rights reserved | 26.93 kB | Adobe PDF | Closed access (request a copy) |
| PENSEQ_SIOPT_PUBL.pdf | Final refereed version (Postprint, Accepted manuscript) | Open Access | 262.07 kB | Adobe PDF | Open access |
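The sequential penalty idea summarized in the abstract can be sketched in a few lines. The following is a hedged illustration, not the authors' algorithm: the penalty form, the simple coordinate search used as the derivative-free inner solver, the update factors for the penalty parameter and stepsize, and all function names are assumptions made for the example.

```python
# Illustrative sketch of a sequential penalty derivative-free loop:
# minimize f(x) subject to g_i(x) <= 0 by repeatedly applying a
# coordinate (pattern) search to a merit function whose penalization
# of constraint violation grows as eps shrinks. The coupling between
# the sampling stepsize and the penalty update is the point the
# paper's theory makes precise; here both are updated naively.

def penalty(x, f, gs, eps):
    """Merit function: objective plus (1/eps)-weighted constraint violation."""
    viol = sum(max(0.0, g(x)) for g in gs)
    return f(x) + viol / eps

def coordinate_search(phi, x, step, tol):
    """Crude derivative-free local search: sample phi along +/- coordinate
    directions, halving the stepsize when no direction improves, until
    the stepsize falls below tol."""
    n = len(x)
    while step >= tol:
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                if phi(y) < phi(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
    return x

def sequential_penalty(f, gs, x0, eps=1.0, step=1.0, outer_iters=8):
    """Outer loop: approximately minimize the merit function, then
    tighten the penalization and refine the sampling."""
    x = list(x0)
    for _ in range(outer_iters):
        x = coordinate_search(lambda y: penalty(y, f, gs, eps), x, step, tol=1e-6)
        eps *= 0.1   # increase penalization of constraint violation
        step *= 0.5  # refine the sampling as the penalty tightens
    return x

# Toy usage (illustrative choices): minimize x0 + x1 subject to
# x0^2 + x1^2 <= 2, whose solution is (-1, -1).
f = lambda x: x[0] + x[1]
g = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0
x_star = sequential_penalty(f, [g], [0.0, 0.0])
```

On the toy problem the iterates settle near the constrained minimizer (-1, -1); a serious implementation would, as the paper does, tie the stepsize reduction and penalty update together to obtain the stated convergence guarantees.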
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.