Approximate KKT points and a proximity measure for termination

Karush–Kuhn–Tucker (KKT) optimality conditions are often checked to investigate whether a solution obtained by an optimization algorithm is a likely candidate for the optimum. In this study, we report that although the KKT conditions must all be satisfied at the optimal point, the extent of violation of the KKT conditions at points arbitrarily close to the KKT point is not smooth, thereby making the KKT conditions difficult to use directly to evaluate the performance of an optimization algorithm. This occurs due to the complementary slackness condition associated with the KKT optimality conditions. To overcome this difficulty, we define modified $\epsilon$-KKT points by relaxing the complementary slackness and equilibrium equations of the KKT conditions, and suggest a KKT-proximity measure that is shown to reduce sequentially to zero as the iterates approach the KKT point. Besides the theoretical development defining the modified $\epsilon$-KKT point, we present extensive computer simulations of the proposed methodology on a set of iterates obtained through an evolutionary optimization algorithm, illustrating the working of our proposed procedure on smooth and non-smooth problems. The results indicate that the proposed KKT-proximity measure can be used as a termination condition for optimization algorithms. As a by-product, the method helps to find Lagrange multipliers corresponding to near-optimal solutions, which can be of importance to practitioners. We also provide a comparison of our KKT-proximity measure with the stopping criteria used in popular commercial software.
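The core idea can be illustrated with a minimal sketch. This is not the paper's exact formulation but a hypothetical single-constraint version of the same principle: instead of demanding exact complementary slackness $u\,g(x) = 0$, we penalize its square along with the equilibrium residual $\|\nabla f + u\,\nabla g\|^2$, and minimize the total violation over the multiplier $u \ge 0$. The toy problem (minimize $x_1^2 + x_2^2$ subject to $1 - x_1 - x_2 \le 0$, with KKT point at $(0.5, 0.5)$ and multiplier $u = 1$) is an assumption chosen for illustration:

```python
import numpy as np

def kkt_proximity(x):
    """Sketch of a relaxed KKT-proximity measure for the toy problem
    min x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0.
    Complementary slackness u*g = 0 is relaxed into a squared penalty,
    and the combined violation is minimized over the multiplier u >= 0.
    """
    grad_f = 2.0 * x                       # gradient of the objective
    grad_g = np.array([-1.0, -1.0])        # gradient of the constraint
    g = 1.0 - x[0] - x[1]                  # constraint value (feasible if <= 0)
    # eps(u) = ||grad_f + u*grad_g||^2 + (u*g)^2 is a convex quadratic in u;
    # take its unconstrained minimizer and project it onto u >= 0.
    u = max(0.0, -(grad_f @ grad_g) / (grad_g @ grad_g + g * g))
    eps = np.sum((grad_f + u * grad_g) ** 2) + (u * g) ** 2
    return eps, u

# The relaxed measure shrinks smoothly as iterates approach (0.5, 0.5),
# where it vanishes and the recovered multiplier is u = 1.
for point in [np.array([0.9, 0.9]), np.array([0.6, 0.6]), np.array([0.5, 0.5])]:
    eps, u = kkt_proximity(point)
    print(point, round(eps, 6), round(u, 4))
```

Note how the same minimization that yields the proximity value also produces an estimate of the Lagrange multiplier, mirroring the by-product mentioned in the abstract.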
