In mathematical optimization, the method of Lagrange multipliers (named after Joseph-Louis Lagrange) is a strategy for finding the local maxima and minima of a function subject to equality constraints.
(Figure) Find x and y to maximize f(x,y) subject to a constraint (shown in red) g(x,y) = c.
For instance (see the figure above), consider the optimization problem
- maximize f(x,y)
- subject to g(x,y) = c.
We need both f and g to have continuous first partial derivatives. We introduce a new variable λ, called a Lagrange multiplier, and study the Lagrange function (or Lagrangian) defined by
Λ(x, y, λ) = f(x, y) + λ · (g(x, y) − c),
where the λ term may be either added or subtracted. If f(x0, y0) is a maximum of f(x, y) for the original constrained problem, then there exists λ0 such that (x0, y0, λ0) is a stationary point of the Lagrange function (stationary points are those points where the partial derivatives of Λ are zero, i.e. ∇Λ = 0).
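The stationarity condition ∇Λ = 0 can be solved numerically. The sketch below, a hypothetical example not taken from the text, applies Newton's method to the system ∇Λ = 0 for the concrete problem of maximizing f(x, y) = x + y subject to g(x, y) = x² + y² = 1:

```python
import numpy as np

# Hypothetical example: maximize f(x, y) = x + y subject to
# g(x, y) = x**2 + y**2 = 1, with Lagrangian
# L(x, y, lam) = f(x, y) + lam * (g(x, y) - 1).

def grad_L(v):
    """Gradient of the Lagrangian; stationary points satisfy grad_L = 0."""
    x, y, lam = v
    return np.array([1 + 2*lam*x,           # dL/dx
                     1 + 2*lam*y,           # dL/dy
                     x**2 + y**2 - 1])      # dL/dlam, i.e. the constraint

def hess_L(v):
    """Full Hessian of L with respect to (x, y, lam)."""
    x, y, lam = v
    return np.array([[2*lam, 0.0,   2*x],
                     [0.0,   2*lam, 2*y],
                     [2*x,   2*y,   0.0]])

# Newton's method on grad_L = 0, starting near the maximum.
v = np.array([1.0, 1.0, -1.0])
for _ in range(20):
    step = np.linalg.solve(hess_L(v), -grad_L(v))
    v += step
    if np.linalg.norm(step) < 1e-12:
        break

x, y, lam = v
print(x, y, lam)  # approximately (1/sqrt(2), 1/sqrt(2), -1/sqrt(2))
```

A different starting guess can lead Newton's method to the other stationary point, the constrained minimum at (−1/√2, −1/√2), which is why the candidates still need to be compared afterwards.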
However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems. Sufficient conditions for a minimum or maximum also exist.
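One standard sufficient condition in the two-variable, one-constraint case is the bordered-Hessian test: at a stationary point, a positive determinant of the bordered Hessian indicates a local maximum and a negative one a local minimum. The sketch below applies it to a hypothetical example, f(x, y) = x + y subject to x² + y² = 1, whose two stationary points are known in closed form:

```python
import numpy as np

# Hypothetical example: f(x, y) = x + y subject to x**2 + y**2 = 1.
# The stationarity conditions 1 + 2*lam*x = 0, 1 + 2*lam*y = 0,
# x**2 + y**2 = 1 have exactly two solutions:
candidates = [
    ( 2**-0.5,  2**-0.5, -2**-0.5),
    (-2**-0.5, -2**-0.5,  2**-0.5),
]

def bordered_hessian(x, y, lam):
    # The gradient of the constraint g borders the Hessian of the
    # Lagrangian L = f + lam*(g - 1) taken with respect to (x, y) only.
    gx, gy = 2*x, 2*y                      # gradient of g
    Lxx, Lyy, Lxy = 2*lam, 2*lam, 0.0      # second partials of L
    return np.array([[0.0, gx,  gy],
                     [gx,  Lxx, Lxy],
                     [gy,  Lxy, Lyy]])

for x, y, lam in candidates:
    d = np.linalg.det(bordered_hessian(x, y, lam))
    kind = "local maximum" if d > 0 else "local minimum"
    print(f"({x:+.4f}, {y:+.4f}): det = {d:+.4f} -> {kind}")
```

Here the test classifies (1/√2, 1/√2) as the constrained maximum and (−1/√2, −1/√2) as the constrained minimum, matching the values of f at each point.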