Published in

Taylor & Francis, Optimization Methods and Software, 30(3), pp. 424–460

DOI: 10.1080/10556788.2014.924514

Convex and concave relaxations of implicit functions

Journal article published in 2014 by Matthew D. Stuber, Joseph K. Scott, and Paul I. Barton
This paper is available in a repository.


Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

A deterministic algorithm for solving nonconvex nonlinear programs (NLPs) globally using a reduced-space approach is presented. These problems are encountered when real-world models are embedded as nonlinear equality constraints and the decision variables include the state variables of the system. By solving the model equations for the dependent (state) variables as implicit functions of the independent (decision) variables, a significant reduction in dimensionality can be obtained. As a result, the inequality constraints and objective function are implicit functions of the independent variables, which can be estimated via a fixed-point iteration. Relying on the recently developed ideas of generalized McCormick relaxations, McCormick-based relaxations of algorithms, and subgradient propagation, the development of McCormick relaxations of implicit functions is presented. Using these ideas, the reduced-space, implicit optimization formulation can be relaxed. When applied within a branch-and-bound framework, finite convergence to ε-optimal global solutions is guaranteed.
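
To make the reduced-space idea concrete, here is a minimal Python sketch (not from the paper) of how an implicit function can be evaluated by fixed-point iteration. The model equation h(x, p) = x - p*exp(-x) = 0 and the objective below are hypothetical stand-ins; the paper's contribution is constructing rigorous convex and concave relaxations of such implicit functions, which this evaluation-only sketch does not attempt.

```python
import math

def solve_state(p, x0=0.0, tol=1e-12, max_iter=500):
    """Evaluate the implicit function x(p) defined by the (hypothetical)
    model equation h(x, p) = x - p*exp(-x) = 0, using the fixed-point
    map x <- p*exp(-x)."""
    x = x0
    for _ in range(max_iter):
        x_new = p * math.exp(-x)  # successive-substitution step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

def objective(p):
    # After eliminating the state variable, the objective is an implicit
    # function of the decision variable p alone: f(x(p), p).
    x = solve_state(p)
    return (x - 0.5) ** 2 + 0.1 * p ** 2  # hypothetical objective

if __name__ == "__main__":
    # Only p remains as an optimization variable; x(p) is computed on demand.
    for p in (0.5, 1.0, 2.0):
        print(f"p = {p:.1f}  x(p) = {solve_state(p):.6f}  f = {objective(p):.6f}")
```

In a branch-and-bound search over p, each node would additionally bound x(p) on the node's interval and relax f(x(p), p) via generalized McCormick relaxations; the sketch only shows why the dimensionality drops from (x, p) to p.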