A derivative-free exact penalty algorithm: basic ideas, convergence theory and computational studies
We present a trust-region method for the solution of constrained optimization problems in which the gradients of the objective and constraints are not available. Because this is a penalty method, infeasible iterates may be accepted, and the constraints need not be known explicitly, allowing constraint models to be built during the optimization procedure. By the use of an exact penalty, under certain conditions for feasible problems, the method converges directly to a constrained optimum once a sufficiently large penalty parameter is selected. To address the fact that the ℓ1 penalty used is not in general differentiable near the active constraints, steps are computed in directions with guaranteed descent. A numerical comparison with other algorithms is presented for a set of analytical test problems, showing that the proposed algorithm performs well. Computational results are also presented for a problem in which the objective and constraint functions are the output of a simulation. The method is particularly appropriate for modestly sized problems where the computation of the search direction is not too expensive.
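To make the central construction concrete, the following is a minimal sketch of an ℓ1 exact penalty merit function of the standard form P(x; μ) = f(x) + μ·(total constraint violation), evaluated on a hypothetical toy problem; the function names `f`, `g`, and `l1_penalty` are illustrative assumptions, not the paper's implementation.

```python
def l1_penalty(f, ineq, eq, x, mu):
    """Standard l1 exact penalty merit function:
        P(x; mu) = f(x) + mu * (sum_i max(0, g_i(x)) + sum_j |h_j(x)|),
    where `ineq` holds constraints g_i(x) <= 0 and `eq` holds h_j(x) = 0.
    The max/abs terms make P nonsmooth near active constraints, which is
    why descent directions must be chosen carefully in such a method.
    """
    violation = sum(max(0.0, g(x)) for g in ineq)
    violation += sum(abs(h(x)) for h in eq)
    return f(x) + mu * violation

# Hypothetical toy problem: minimize f(x) = x^2 subject to x >= 1,
# written as the inequality g(x) = 1 - x <= 0.
f = lambda x: x ** 2
g = lambda x: 1.0 - x

# At the infeasible point x = 0 the penalty term is active:
print(l1_penalty(f, [g], [], 0.0, mu=10.0))  # 0 + 10*1 = 10.0
# At the constrained optimum x = 1 the penalty vanishes:
print(l1_penalty(f, [g], [], 1.0, mu=10.0))  # 1.0 + 10*0 = 1.0
```

For μ large enough, the unconstrained minimizer of P coincides with the constrained optimum x = 1, which is the exactness property the abstract refers to.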