Optimization Glossary
25 essential terms — because precise language is the foundation of clear thinking in Optimization.
Branch and Bound: An exact algorithm for integer programming that partitions the problem into subproblems and uses bounds to prune the search tree.
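As an illustration, here is a minimal branch-and-bound sketch for the 0/1 knapsack problem. The function name and the fractional-relaxation bound are illustrative choices; production integer-programming solvers typically derive bounds from LP relaxations instead.

```python
def knapsack_bb(values, weights, capacity):
    """Exact 0/1 knapsack by branch and bound (illustrative sketch)."""
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, val):
        # Optimistic upper bound: fill remaining capacity fractionally.
        while i < n and w[i] <= cap:
            cap -= w[i]
            val += v[i]
            i += 1
        if i < n:
            val += v[i] * cap / w[i]
        return val

    def branch(i, cap, val):
        nonlocal best
        if val > best:
            best = val
        if i == n or bound(i, cap, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], val + v[i])  # take item i
        branch(i + 1, cap, val)                    # skip item i

    branch(0, capacity, 0)
    return best
```

Each node splits on one take-or-skip decision, and the fractional bound lets whole subtrees be discarded without enumeration.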
Combinatorial Optimization: Optimization over a finite or countably infinite set of discrete alternatives, such as paths, matchings, or schedules.
Constraint: A condition expressed as an equality or inequality that restricts the feasible values of the decision variables.
Convergence: The property of an iterative algorithm whereby its sequence of iterates approaches an optimal solution as the iteration count increases.
Convexity: A property of a function or set ensuring that any local minimum is a global minimum and that the line segment between any two points of the set stays within the set.
Decision Variable: A variable whose value is chosen by the optimizer to minimize or maximize the objective function.
Duality: The principle that every optimization problem (the primal) has a corresponding dual problem whose solution provides bounds and sensitivity information.
Dynamic Programming: A technique for solving problems with overlapping subproblems by storing intermediate results to avoid redundant computation.
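A compact way to see the effect of storing intermediate results is memoized Fibonacci, a standard toy example (not tied to any particular entry above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # fib(n-1) and fib(n-2) share subproblems; caching the results
    # turns the exponential recursion into a linear-time computation.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(30)` would recompute `fib(28)` hundreds of thousands of times; with it, each value is computed exactly once.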
Feasible Region: The set of all variable values that satisfy every constraint of the problem simultaneously.
Genetic Algorithm: An evolutionary metaheuristic that uses selection, crossover, and mutation to evolve a population of candidate solutions.
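A minimal sketch of the three operators on the toy "OneMax" problem (maximize the number of ones in a bitstring); all parameter values here are arbitrary illustrative choices:

```python
import random

def onemax_ga(bits=20, pop_size=30, generations=60, seed=0):
    """Evolve bitstrings toward all-ones; returns the best fitness found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of ones in the bitstring
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Selection: tournament of 3 picks each parent.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # Crossover: one-point recombination of the parents.
            cut = rng.randrange(1, bits)
            child = p1[:cut] + p2[cut:]
            # Mutation: possibly flip one random bit.
            i = rng.randrange(bits)
            child[i] ^= rng.randint(0, 1)
            nxt.append(child)
        pop = nxt
    return max(map(fitness, pop))
```

Selection pressure pulls the population toward high-fitness strings, while mutation preserves the diversity needed to keep exploring.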
Gradient: The vector of first-order partial derivatives of a function, pointing in the direction of steepest ascent.
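When derivatives are not available analytically, the gradient can be estimated numerically. A simple central-difference sketch (the function and test point are illustrative):

```python
def numerical_gradient(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at point x."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h  # step forward in coordinate i
        xm[i] -= h  # step backward in coordinate i
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

# f(x, y) = x^2 + 3y has gradient (2x, 3); at (2, 5) that is (4, 3).
g = numerical_gradient(lambda p: p[0] ** 2 + 3 * p[1], [2.0, 5.0])
```

Central differences have O(h^2) truncation error, which is why they are usually preferred over one-sided differences for checking analytic gradients.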
Gradient Descent: An iterative optimization algorithm that repeatedly moves in the direction of the negative gradient to find a minimum.
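The update rule is a one-liner; this minimal sketch minimizes a simple quadratic (the learning rate and step count are arbitrary illustrative choices):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of the objective."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move opposite the direction of steepest ascent
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each step shrinks the distance to the minimizer by a constant factor here, so the iterates converge geometrically for a suitably small learning rate.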
Hessian Matrix: The square matrix of second-order partial derivatives of a function, used to characterize curvature and in Newton's method.
Integer Programming: Optimization in which some or all decision variables are required to take integer values.
KKT Conditions: The Karush-Kuhn-Tucker conditions, first-order necessary conditions for optimality in nonlinear programming with inequality constraints (under suitable constraint qualifications).
Lagrange Multiplier: An auxiliary variable that incorporates a constraint into the objective via the Lagrangian, representing the shadow price of that constraint.
Linear Programming: Optimization of a linear objective function subject to linear constraints.
Metaheuristic: A high-level strategy that guides subordinate heuristics to explore the solution space for approximately optimal solutions.
Newton's Method: A second-order optimization method that uses the gradient and Hessian to achieve fast (quadratic) convergence near the optimum.
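In one dimension the Hessian reduces to the second derivative, and the Newton step is x ← x − f′(x)/f″(x). A minimal sketch on an illustrative test function:

```python
def newton_minimize(fprime, fsecond, x0, steps=10):
    """1-D Newton's method for minimization: x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(steps):
        x -= fprime(x) / fsecond(x)  # curvature-scaled gradient step
    return x

# Minimize f(x) = x^2 - ln(x): f'(x) = 2x - 1/x, f''(x) = 2 + 1/x^2.
# Setting f'(x) = 0 gives the minimizer x* = 1/sqrt(2).
x_star = newton_minimize(lambda x: 2 * x - 1 / x,
                         lambda x: 2 + 1 / x ** 2,
                         x0=1.0)
```

Near the optimum the error roughly squares at every step, so a handful of iterations reaches machine precision where gradient descent would need hundreds.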
Objective Function: The function to be maximized or minimized in an optimization problem.
Pareto Front: The set of all Pareto-optimal solutions of a multi-objective optimization problem, representing the trade-off surface.
Shadow Price: The change in the optimal objective value per unit increase in a constraint's right-hand side, given by the corresponding dual variable.
Simplex Method: An algorithm that solves linear programs by traversing vertices of the feasible polytope.
Simulated Annealing: A probabilistic metaheuristic that accepts worse solutions with a decreasing probability in order to escape local optima.
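A minimal sketch of the acceptance rule and cooling schedule; the Gaussian neighbor move, temperature settings, and test function are all arbitrary illustrative choices:

```python
import math
import random

def anneal(f, x0, temp=10.0, cooling=0.995, steps=2000, seed=1):
    """Minimize f over the reals by simulated annealing (sketch)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0, 1)  # random neighbor of the current point
        fc = f(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / temp), which shrinks as temp falls.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        temp *= cooling  # geometric cooling schedule
    return best, fbest

best_x, best_f = anneal(lambda x: (x - 2) ** 2, x0=10.0)
```

Early on, the high temperature lets the walk climb out of poor regions; as the temperature decays, the search settles into a local basin.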
Stochastic Gradient Descent: A variant of gradient descent that estimates the gradient from a random mini-batch of data, widely used in machine learning.
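A minimal sketch fitting a line by mini-batch SGD on mean squared error; the dataset, learning rate, and batch size are illustrative choices:

```python
import random

def sgd_linear(data, lr=0.05, epochs=200, batch=4, seed=0):
    """Fit y ~ w*x + b by mini-batch stochastic gradient descent."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # fresh random mini-batches each epoch
        for i in range(0, len(data), batch):
            chunk = data[i:i + batch]
            # Gradient of the squared error, estimated from this
            # mini-batch only rather than the full dataset.
            gw = sum(2 * (w * x + b - y) * x for x, y in chunk) / len(chunk)
            gb = sum(2 * (w * x + b - y) for x, y in chunk) / len(chunk)
            w -= lr * gw
            b -= lr * gb
    return w, b

# Noiseless data on the line y = 2x + 1, for x in [-2, 2].
points = [(x / 10, 2 * (x / 10) + 1) for x in range(-20, 21)]
w, b = sgd_linear(points)
```

Each update uses only a few points, so iterations are cheap; the noise in the gradient estimate averages out over many updates, and here vanishes entirely as (w, b) approaches the true line.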