Idea: line search optimization algorithms generally first determine an iteration (descent) direction and then an iteration step length along it;
trust region methods instead determine the iteration displacement (direction and length) directly, by solving a constrained subproblem.
Algorithm Analysis
At the \(k\)-th iteration, the iteration displacement \(d_k\) is determined by the trust region subproblem:
\[
\min_{d}\ q_k(d) = f(x_k) + g_k^{T} d + \frac{1}{2} d^{T} B_k d \qquad \text{s.t.}\ \|d\| \le \Delta_k,
\]
where \(g_k=\nabla f(x_k)\), \(B_k\) is an approximation of the Hessian at \(x_k\), and \(\Delta_k\) is the trust region radius.
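The subproblem does not have to be solved exactly. Below is a minimal NumPy sketch (the helper name `cauchy_point` is an assumption, not from the original text) that minimizes the quadratic model along the steepest-descent direction, i.e. computes the Cauchy point, which is a cheap stand-in for an exact solver such as dogleg or Steihaug-CG:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Approximately solve  min_d  g @ d + 0.5 * d @ B @ d  s.t. ||d|| <= delta
    by minimizing the model along -g (the Cauchy point)."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)              # already stationary, no step
    gBg = g @ B @ g
    if gBg <= 0.0:
        tau = 1.0                            # model non-convex along -g: step to the boundary
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g        # step of length tau * delta along -g
```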
For the computed iteration displacement \(d_k\) (candidate point \(x_{k+1}=x_k+d_k\)), the actual reduction is
\[
\Delta f_k = f(x_k) - f(x_k + d_k),
\]
the predicted reduction is
\[
\Delta q_k = q_k(0) - q_k(d_k),
\]
and their ratio is defined as
\[
r_k = \frac{\Delta f_k}{\Delta q_k}.
\]
In general \(\Delta q_k>0\), so when \(r_k\le 0\) the candidate \(x_{k+1}\) does not decrease the objective; the step must be rejected and the trust region shrunk before re-solving the subproblem. When \(r_k\) is close to 1, the quadratic model approximates the function well near \(x_k\), and the trust region can be enlarged.
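In code, the ratio follows directly from these definitions; a small sketch (the name `reduction_ratio` is illustrative, with \(x\), \(d\), \(g\) as NumPy vectors and \(B\) a square matrix) assuming the model \(q_k(d)=f(x_k)+g_k^Td+\tfrac12 d^TB_kd\):

```python
def reduction_ratio(f, x, d, g, B):
    """r_k = actual reduction / predicted reduction for a trial step d,
    where the model is q_k(d) = f(x) + g @ d + 0.5 * d @ B @ d."""
    actual = f(x) - f(x + d)                 # Delta f_k
    predicted = -(g @ d + 0.5 * d @ B @ d)   # Delta q_k = q_k(0) - q_k(d)
    return actual / predicted
```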
Algorithm
Input: \(0<\eta_1<\eta_2<1\), \(0<\tau_1<1<\tau_2\), initial point \(x_0\), initial Hessian approximation \(B_0\), tolerance \(\epsilon\), trust region radius upper bound \(\tilde{\Delta}\), initial trust region radius \(\Delta_0\in(0,\tilde{\Delta}]\).
maxite = 100;  maximum number of iterations
g = gfun(x_0);  gradient at the initial point
k = 0;
while k < maxite and \(\|g\| > \epsilon\)
\(\qquad\) solve the trust region subproblem to get \(d_k\) and the candidate point \(x_{k+1} = x_k + d_k\);
\(\qquad\) compute \(r_k = \Delta f_k / \Delta q_k\);
\(\qquad\) if \(r_k < \eta_1\)
\(\qquad\)\(\qquad\) \(\Delta_{k+1} = \tau_1\Delta_k;\)
\(\qquad\) elif \(\eta_1 \le r_k < \eta_2\)
\(\qquad\)\(\qquad\) \(\Delta_{k+1} = \Delta_k;\)
\(\qquad\) else
\(\qquad\)\(\qquad\) \(\Delta_{k+1} = \min(\tau_2\Delta_k, \tilde{\Delta});\)
\(\qquad\) endif
\(\qquad\) if \(r_k < \eta_1\)
\(\qquad\)\(\qquad\) \(x_{k+1} = x_k;\)  (reject the step)
\(\qquad\)\(\qquad\) \(B_{k+1} = B_k;\)
\(\qquad\) else
\(\qquad\)\(\qquad\) update \(B_k\) to obtain \(B_{k+1};\)
\(\qquad\) endif
\(\qquad\) \(k = k+1;\)
\(\qquad\) \(g = gfun(x_{k+1});\)
endwhile
Output: the approximate optimal solution \(x_k\).
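Below is a minimal runnable sketch of the whole loop in Python/NumPy, under the assumptions that the subproblem is solved only approximately at the Cauchy point and that \(B_k\) is updated with a standard BFGS formula on accepted steps (function and parameter names such as `trust_region`, `fun`, `gfun`, and the default constants are illustrative, not taken from the original):

```python
import numpy as np

def trust_region(fun, gfun, x0, B0=None,
                 eta1=0.25, eta2=0.75, tau1=0.5, tau2=2.0,
                 delta0=1.0, delta_max=10.0, eps=1e-6, maxite=100):
    """Trust region method following the pseudocode above.
    fun/gfun return the objective value and gradient; the subproblem is
    solved approximately at the Cauchy point, and B_k is updated with a
    BFGS formula when the curvature condition holds."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size) if B0 is None else np.asarray(B0, dtype=float)
    delta = delta0
    g = gfun(x)
    k = 0
    while k < maxite and np.linalg.norm(g) > eps:
        # approximately solve the subproblem: Cauchy point along -g
        gnorm = np.linalg.norm(g)
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0.0 else min(gnorm**3 / (delta * gBg), 1.0)
        d = -tau * (delta / gnorm) * g
        # actual vs predicted reduction and their ratio r_k
        actual = fun(x) - fun(x + d)
        predicted = -(g @ d + 0.5 * d @ B @ d)     # q_k(0) - q_k(d), positive here
        r = actual / predicted
        # update the trust region radius
        if r < eta1:
            delta = tau1 * delta                   # poor model agreement: shrink
        elif r >= eta2:
            delta = min(tau2 * delta, delta_max)   # very good agreement: enlarge (capped)
        # accept or reject the step; update B_k on acceptance
        if r >= eta1:
            x_new = x + d
            g_new = gfun(x_new)
            s, y = x_new - x, g_new - g
            if s @ y > 1e-10:                      # BFGS update only if curvature condition holds
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
            x, g = x_new, g_new
        k += 1
    return x

# illustrative usage: minimize the Rosenbrock function
rosen = lambda x: (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
    200.0 * (x[1] - x[0]**2),
])
x_star = trust_region(rosen, rosen_grad, x0=[-1.2, 1.0], maxite=2000)
```

With an exact subproblem solver (dogleg or Steihaug-CG) in place of the Cauchy point, the outer loop stays exactly the same; only the step computation changes.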