Augmented Lagrangian Method


Reposted from: Augmented Lagrangian Method (Augmented Lagrange Method)

The augmented Lagrangian method is used to solve optimization problems with equality constraints.

 

Suppose the problem to solve is:

    minimize   f(X)

    s.t.       h(X) = 0

where f: R^n -> R and h: R^n -> R^m.

 

The plain Lagrange multiplier method forms:

    L(X, μ) = f(X) + μ^T h(X),   with μ ∈ R^m

    Setting the partial derivatives of L with respect to X and μ simultaneously to zero then yields the optimal solution.
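As a concrete illustration (my own toy example, not from the original article), when f is quadratic and h is linear these stationarity conditions form a linear system that can be solved directly:

```python
import numpy as np

# Hypothetical toy problem (not from the article): minimize
#   f(x) = (x1 - 1)^2 + (x2 - 2)^2   subject to   h(x) = x1 + x2 - 1 = 0.
# Setting the partial derivatives of L(x, mu) = f(x) + mu * h(x)
# to zero gives a linear system in (x1, x2, mu):
#   2(x1 - 1) + mu = 0
#   2(x2 - 2) + mu = 0
#   x1 + x2 - 1    = 0
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([2.0, 4.0, 1.0])
x1, x2, mu = np.linalg.solve(A, b)
print(x1, x2, mu)  # -> 0.0 1.0 2.0
```

This shortcut only works because the stationarity conditions happen to be linear here; for general nonlinear f and h they must be solved numerically.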

 

The augmented Lagrangian method instead minimizes:

    Lc(X, μ) = f(X) + μ^T h(X) + (c/2) ‖h(X)‖^2

    At each outer iteration, solve the subproblem for x_k, then update the multiplier μ by a gradient step (μ ← μ + c·h(x_k)) while gradually increasing the penalty weight c (ALM's convergence guarantees appear to rely on some additional assumptions). Each iteration takes only a few steps, and repeating the process converges to the optimal solution.
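The procedure above can be sketched in a few lines. This is a minimal illustration on a hypothetical toy problem of my own choosing (minimize f(x) = (x1−1)² + (x2−2)² subject to x1 + x2 − 1 = 0, whose solution is x* = (0, 1) with multiplier μ* = 2); the inner minimization uses plain gradient descent with a step size picked for this problem:

```python
import numpy as np

# Minimal ALM sketch on a hypothetical toy problem (my example, not
# from the article): minimize f(x) = (x1-1)^2 + (x2-2)^2
# subject to h(x) = x1 + x2 - 1 = 0. The exact solution is
# x* = (0, 1) with multiplier mu* = 2.
center = np.array([1.0, 2.0])
a = np.array([1.0, 1.0])          # h(x) = a @ x - 1

def h(x):
    return a @ x - 1.0

mu, c = 0.0, 1.0                  # multiplier estimate and penalty weight
x = np.zeros(2)
for _ in range(30):               # outer ALM iterations
    # Inner subproblem: minimize Lc(x) = f(x) + mu*h(x) + (c/2)*h(x)^2
    # by plain gradient descent, warm-started from the previous x.
    for _ in range(500):
        grad = 2.0 * (x - center) + (mu + c * h(x)) * a
        x -= 0.05 * grad
    mu += c * h(x)                # gradient-style multiplier update
    c = min(1.5 * c, 10.0)        # gradually increase the penalty (capped
                                  # so the fixed step size stays stable)
print(x, mu)                      # x ~ [0, 1], mu ~ 2
```

Note that c is capped here only because the inner loop uses a fixed step size; the multiplier updates, not an unbounded penalty, are what drive the iterates to feasibility.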

    

 

 

References:

  [1] M. R. Hestenes, "Multiplier and Gradient Methods", Journal of Optimization Theory and Applications, 1969.

  [2] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, 1982 (p. 104).

 

Wikipedia: https://en.wikipedia.org/wiki/Augmented_Lagrangian_method

Let us say we are solving the following constrained problem:

    min f(x)

subject to

    c_i(x) = 0   for all i ∈ I.

This problem can be solved as a series of unconstrained minimization problems. For reference, we first list the penalty method approach:

    min Φ_k(x) = f(x) + μ_k Σ_{i∈I} c_i(x)²

The penalty method solves this problem, then at the next iteration it re-solves the problem using a larger value of μ_k (and using the old solution as the initial guess or "warm start").
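To make the penalty method's limitation concrete, here is a sketch on a hypothetical toy problem of my own (not from the text): minimize f(x) = (x1−1)² + (x2−2)² with c(x) = x1 + x2 − 1 = 0. Each penalized subproblem is quadratic, so its minimizer has a closed form (worked out by hand below), and the constraint violation shrinks only as μ_k grows:

```python
import numpy as np

# Penalty-method sketch on a hypothetical toy problem (my example):
# minimize f(x) = (x1-1)^2 + (x2-2)^2 subject to c(x) = x1 + x2 - 1 = 0.
center = np.array([1.0, 2.0])
a = np.array([1.0, 1.0])          # c(x) = a @ x - 1

violations = []
mu = 1.0
for _ in range(6):
    # Minimizer of Phi(x) = f(x) + mu * c(x)^2, derived by hand
    # for this quadratic problem:
    s = (3.0 + 2.0 * mu) / (1.0 + 2.0 * mu)   # optimal value of x1 + x2
    x = center - mu * (s - 1.0) * a
    violations.append(a @ x - 1.0)             # equals 2 / (1 + 2*mu)
    mu *= 10.0
print(violations)   # violation decays like 1/mu; exact only as mu -> inf
```

The iterates are infeasible for every finite μ_k, which is exactly the ill-conditioning the augmented Lagrangian term below avoids.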

The augmented Lagrangian method uses the following unconstrained objective:

    min Φ_k(x) = f(x) + (μ_k/2) Σ_{i∈I} c_i(x)² − Σ_{i∈I} λ_i c_i(x)

and after each iteration, in addition to updating μ_k, the variable λ is also updated according to the rule

    λ_i ← λ_i − μ_k c_i(x_k)

where x_k is the solution to the unconstrained problem at the k-th step, i.e. x_k = argmin_x Φ_k(x).

The variable λ is an estimate of the Lagrange multiplier, and the accuracy of this estimate improves at every step. The major advantage of the method is that, unlike the penalty method, it is not necessary to take μ → ∞ in order to solve the original constrained problem. Instead, because of the presence of the Lagrange multiplier term, μ can stay much smaller.
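This can be checked numerically. In the sketch below (my own hypothetical toy problem, not from the text: minimize f(x) = (x1−1)² + (x2−2)² with c(x) = x1 + x2 − 1 = 0), μ is held fixed at a moderate value and only λ is updated, yet the iterates still converge to the constrained optimum and λ converges to the true multiplier:

```python
import numpy as np

# ALM in the sign convention used above, on a hypothetical toy problem
# (my example): minimize f(x) = (x1-1)^2 + (x2-2)^2 with
# c(x) = x1 + x2 - 1 = 0. Under this convention the true multiplier
# is lambda* = -2, and the optimum is x* = (0, 1).
center = np.array([1.0, 2.0])
a = np.array([1.0, 1.0])          # c(x) = a @ x - 1

mu, lam = 4.0, 0.0                # mu stays FIXED; no mu -> inf needed
for _ in range(25):
    # Closed-form minimizer of
    #   Phi(x) = f(x) + (mu/2)*c(x)^2 - lam*c(x),
    # derived by hand for this quadratic problem:
    t = (2.0 * mu - lam) / (1.0 + mu)
    x = center - (t / 2.0) * a
    lam -= mu * (a @ x - 1.0)     # the multiplier update rule from the text
print(x, lam)                     # x ~ [0, 1], lam ~ -2
```

Here the error in λ contracts by a factor 1/(1+μ) per iteration, illustrating why a finite μ suffices once the multiplier estimate is being refined.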

The method can be extended to handle inequality constraints. For a discussion of practical improvements, see [4].

 

