Optimization Cheat Sheet
The core ideas of optimization, distilled into a single, scannable reference — perfect for review or quick lookup.
Quick Reference
Objective Function
A mathematical function that quantifies the goal of an optimization problem, expressing the quantity to be maximized (such as profit or efficiency) or minimized (such as cost or error). The optimizer searches for the values of decision variables that yield the best value of this function.
Constraints
Mathematical inequalities or equalities that define the set of feasible solutions in an optimization problem. Constraints represent physical, financial, or logical limitations that the decision variables must satisfy.
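Together, the objective function and constraints fully specify a problem. A minimal sketch (the cost function and demand constraint here are made-up for illustration):

```python
# Hypothetical problem: minimize cost 3x + 2y subject to
# x + y >= 10 (a demand constraint) and x, y >= 0 (non-negativity).

def objective(x, y):
    """The quantity to be minimized (cost)."""
    return 3 * x + 2 * y

def is_feasible(x, y):
    """True if (x, y) satisfies every constraint."""
    return x + y >= 10 and x >= 0 and y >= 0

print(is_feasible(4, 6))   # True: lies on the boundary x + y = 10
print(is_feasible(4, 5))   # False: violates x + y >= 10
print(objective(0, 10))    # 20, the minimum over the feasible region
```

Since both cost coefficients are positive, the optimum sits on the constraint boundary x + y = 10, at the corner where the cheaper variable y does all the work.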
Linear Programming
A method for optimizing a linear objective function subject to linear equality and inequality constraints. It is one of the most widely used optimization techniques due to its computational efficiency and broad applicability.
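A small LP solved with SciPy's `linprog` (the numbers are a textbook-style example, not from this sheet; `linprog` minimizes, so the profit objective is negated):

```python
from scipy.optimize import linprog

# Maximize profit 3x + 5y subject to:
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0
c = [-3, -5]                       # negate: linprog minimizes
A_ub = [[1, 0], [0, 2], [3, 2]]    # left-hand sides of "<=" constraints
b_ub = [4, 12, 18]                 # right-hand sides

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # optimal point, approximately (2, 6)
print(-res.fun)   # maximum profit, 36
```

The optimum lands on a vertex of the feasible polytope, which is exactly what the simplex-style solvers underlying `linprog` exploit.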
Gradient Descent
An iterative first-order optimization algorithm that finds a local minimum of a differentiable function by repeatedly moving in the direction of the negative gradient (steepest descent). The step size is controlled by a learning rate parameter.
Convex Optimization
A subfield of optimization dealing with problems where the objective function is convex and the feasible set is a convex set. The key property is that any local minimum is guaranteed to be a global minimum, making these problems efficiently solvable.
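Convexity of a one-dimensional function can be spot-checked numerically with Jensen's inequality, f(t·a + (1−t)·b) ≤ t·f(a) + (1−t)·f(b); this sampled check along one segment is a heuristic sketch, not a proof:

```python
def jensen_check(f, a, b, n=100):
    """Spot-check convexity of f on [a, b] via Jensen's inequality."""
    for i in range(n + 1):
        t = i / n
        if f(t * a + (1 - t) * b) > t * f(a) + (1 - t) * f(b) + 1e-12:
            return False  # chord lies below the function: not convex
    return True

print(jensen_check(lambda x: x * x, -5, 5))    # True: x^2 is convex
print(jensen_check(lambda x: x ** 3, -5, 5))   # False: x^3 is not
```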
Feasible Region
The set of all points (variable values) that satisfy every constraint in an optimization problem simultaneously. The optimal solution must lie within this region. In linear programming, the feasible region is a convex polytope.
Global vs. Local Optima
A local optimum is a solution that is better than all nearby solutions, while a global optimum is the best solution over the entire feasible region. Nonconvex problems may have multiple local optima, making the search for the global optimum challenging.
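This sensitivity to the starting point is easy to see on a small nonconvex function; f(x) = x⁴ − 3x² + x below is chosen for illustration, and gradient descent lands in whichever basin it starts in:

```python
def grad(x):
    """Gradient of the nonconvex f(x) = x^4 - 3x^2 + x."""
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(-2.0))  # near x ~ -1.30, the global minimum
print(descend(+2.0))  # near x ~ +1.13, a worse local minimum
```

Restarting from multiple initial points is a common (though not guaranteed) remedy for getting trapped in a poor local optimum.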
Lagrange Multipliers
A mathematical method for finding the extrema of a function subject to equality constraints. It introduces auxiliary variables (multipliers) that measure the sensitivity of the optimal value to changes in the constraint, providing both the optimal solution and shadow prices.
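A hypothetical worked example: maximize f(x, y) = xy subject to x + y = 8. Setting the gradient of the Lagrangian L = xy − λ(x + y − 8) to zero gives x = y = λ, so x = y = 4 and λ = 4. The sketch below verifies numerically that λ is the shadow price — the sensitivity of the optimal value to the constraint's right-hand side:

```python
def optimum(c):
    """Maximum of x*y subject to x + y = c, solved analytically: (c/2)^2."""
    return (c / 2) ** 2

lam = 4.0  # multiplier at c = 8

# The multiplier equals d(optimum)/dc, estimated by central differences.
eps = 1e-6
sensitivity = (optimum(8 + eps) - optimum(8 - eps)) / (2 * eps)
print(sensitivity)  # 4.0, matching the multiplier lam
```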
Integer Programming
An optimization framework where some or all decision variables are restricted to integer values. This makes problems significantly harder than their continuous counterparts (NP-hard in general) but is essential for modeling discrete decisions.
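A tiny 0/1 knapsack illustrates why integrality matters: each item is either taken or not, so the decision variables live in {0, 1}. Brute force works at this toy size (item values and weights are illustrative); real solvers rely on branch-and-bound:

```python
from itertools import product

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

# Enumerate every 0/1 assignment, keep feasible ones, take the best.
best = max(
    (x for x in product([0, 1], repeat=3)
     if sum(w * xi for w, xi in zip(weights, x)) <= capacity),
    key=lambda x: sum(v * xi for v, xi in zip(values, x)),
)
print(best)                                          # (0, 1, 1)
print(sum(v * xi for v, xi in zip(values, best)))    # 220
```

Note that the LP relaxation (allowing fractional items) would take a fraction of item 1, an answer that makes no sense for a discrete decision — hence the need for integer programming.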
Dynamic Programming
A method for solving complex optimization problems by breaking them down into simpler overlapping subproblems and storing their solutions to avoid redundant computation. It applies to problems exhibiting optimal substructure and overlapping subproblems.
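Change-making with the fewest coins is a compact DP example: the best way to make `amount` reuses the best way to make `amount - coin` (optimal substructure), and memoization caches the overlapping subproblems. The denominations are illustrative:

```python
from functools import lru_cache

COINS = (1, 5, 12)

@lru_cache(maxsize=None)
def min_coins(amount):
    """Fewest coins from COINS summing to `amount`."""
    if amount == 0:
        return 0
    return 1 + min(min_coins(amount - c) for c in COINS if c <= amount)

print(min_coins(15))  # 3 (5 + 5 + 5); greedy, taking 12 first, would need 4
```

The example also shows why DP beats a greedy heuristic here: grabbing the largest coin first (12 + 1 + 1 + 1) uses four coins, while the DP finds the three-coin solution.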