Publication
Computational and Applied Mathematics
Paper
A derivative-free exact penalty algorithm: basic ideas, convergence theory and computational studies
Abstract
We present a trust-region method for solving constrained optimization problems in which the gradients of the objective and constraint functions are unavailable. Because this is a penalty method, infeasible iterates may be accepted, and the constraints need not be known explicitly, allowing constraint models to be built during the optimization procedure. Through the use of an exact penalty, under certain conditions for feasible problems, the method converges directly to a constrained optimum once a sufficiently large penalty parameter is selected. To address the fact that the ℓ1 penalty is in general not differentiable near the active constraints, steps are computed in directions of guaranteed descent. A numerical comparison with other algorithms on a set of analytical test problems shows that the proposed algorithm achieves good performance. Computational results are also presented for a problem in which the objective and constraint functions are outputs of a simulation. The method is particularly appropriate for modestly sized problems where computing the search direction is not too expensive.
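The ℓ1 exact-penalty merit function underlying such methods replaces the constrained problem with an unconstrained one by adding a weighted sum of constraint violations to the objective. The sketch below is illustrative only and is not the paper's algorithm; the function and parameter names (`l1_penalty`, `mu`, `eq_cons`, `ineq_cons`) are assumptions made for this example.

```python
def l1_penalty(f, eq_cons, ineq_cons, mu):
    """Return the l1 exact-penalty merit function
        phi(x) = f(x) + mu * (sum_i |c_i(x)| + sum_j max(0, g_j(x))),
    for equality constraints c_i(x) = 0 and inequality constraints
    g_j(x) <= 0.  (Hypothetical helper, not from the paper.)"""
    def phi(x):
        # Total constraint violation: absolute value for equalities,
        # positive part for inequalities.
        viol = sum(abs(c(x)) for c in eq_cons)
        viol += sum(max(0.0, g(x)) for g in ineq_cons)
        return f(x) + mu * viol
    return phi

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: x[0] + x[1] - 1.0
phi = l1_penalty(f, [c], [], mu=10.0)
```

For `mu` above a finite threshold (related to the magnitudes of the Lagrange multipliers), a minimizer of `phi` coincides with a constrained optimum, which is the "exactness" property the abstract relies on; the kink introduced by the absolute values is why descent directions must be chosen carefully near active constraints.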