2D Straight-Line Regression (Fitting) Algorithms (Part 2)


3. The Gauss-Newton Method and the Levenberg-Marquardt Method

The function minimized over a set of discrete data samples is the following sum of squares; the Gauss-Newton algorithm finds its minimum iteratively:

$$S(\beta) = \sum_{i=1}^{m} \big[\, y_i - f(x_i, \beta) \,\big]^2$$

Following the Gauss-Newton algorithm, for the straight-line function f(x, β) = a·x + b, β is the parameter vector [a, b], and each iteration solves for an increment δ:

$$\beta^{(s+1)} = \beta^{(s)} + \delta, \qquad \big(J_f^{\top} J_f\big)\,\delta = J_f^{\top} r$$

Here δ is the update to the parameters a and b, r is the residual vector with entries r_i = y_i − f(x_i, β), and J_f is the Jacobian of f(x_i, β), i.e. the matrix of partial derivatives of f with respect to the parameters, whose i-th row is [x_i, 1].
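To connect the formulas to the code below, spelling these quantities out for the line model (a standard derivation, added here for clarity) gives

$$J_f^{\top} J_f = \begin{bmatrix} \sum_i x_i^2 & \sum_i x_i \\ \sum_i x_i & n \end{bmatrix}, \qquad J_f^{\top} r = \begin{bmatrix} \sum_i x_i\,(y_i - a x_i - b) \\ \sum_i (y_i - a x_i - b) \end{bmatrix}$$

which are exactly the tj and beta accumulators in the code below.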

The Levenberg-Marquardt method adds a damping term λ on the diagonal of J_f^T J_f:

$$\big(J_f^{\top} J_f + \lambda \operatorname{diag}(J_f^{\top} J_f)\big)\,\delta = J_f^{\top} r$$

(this is Marquardt's scaled form, which the code below implements by multiplying the diagonal entries by 1 + λ).

In some cases there is no need to compute the inverse matrix explicitly: since the (damped) matrix J_f^T J_f is symmetric positive definite, the system can instead be solved with a Cholesky decomposition.
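As a minimal sketch (not part of the original article's code), here is how that 2×2 solve could look using a hand-rolled Cholesky factorization; the helper name SolveCholesky2x2 is hypothetical:

    // Requires: using System;
    // Solve A*x = b for a symmetric positive definite 2x2 matrix A
    // via the Cholesky factorization A = L*Lᵀ (hypothetical helper).
    public static double[] SolveCholesky2x2(double[,] A, double[] b)
    {
        // Factor: L = [[l00, 0], [l10, l11]]
        double l00 = Math.Sqrt(A[0, 0]);
        double l10 = A[1, 0] / l00;
        double l11 = Math.Sqrt(A[1, 1] - l10 * l10);

        // Forward substitution: L*y = b
        double y0 = b[0] / l00;
        double y1 = (b[1] - l10 * y0) / l11;

        // Back substitution: Lᵀ*x = y
        double x1 = y1 / l11;
        double x0 = (y0 - l10 * x1) / l00;

        return new double[] { x0, x1 };
    }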

    public static double[] GaussNewton(List<Point> points)
    {
        // Requires: using System.Collections.Generic; using System.Drawing;
        // Fits y = a*x + b; a[0] is the slope a, a[1] the intercept b.
        double[] a = new double[] { 100, -100 };   // deliberately poor initial guess
        double[] da = new double[2];               // parameter increment δ
        double[] ia = new double[2];               // candidate parameters a + δ
        double[,] itj = new double[2, 2];          // (JᵀJ)⁻¹

        for (int iter = 0; iter < 300; iter++)
        {
            // Reset the accumulators every iteration; carrying them over
            // would distort both JᵀJ and the χ² comparison below.
            double chi2 = 0;
            double[] beta = new double[2];     // Jᵀr
            double[,] tj = new double[2, 2];   // JᵀJ

            for (int i = 0; i < points.Count; i++)
            {
                // The Jacobian row of f(x, β) = a*x + b is [x, 1].
                tj[0, 0] += (double)points[i].X * points[i].X;
                tj[0, 1] += points[i].X;
                tj[1, 0] += points[i].X;
                tj[1, 1] += 1.0;
                double dy = points[i].Y - points[i].X * a[0] - a[1];   // residual
                chi2 += dy * dy;
                beta[0] += dy * points[i].X;
                beta[1] += dy;
            }

            // Invert the symmetric 2x2 matrix JᵀJ via its determinant.
            double jm = tj[0, 0] * tj[1, 1] - tj[0, 1] * tj[1, 0];
            itj[0, 0] = tj[1, 1] / jm;
            itj[0, 1] = -tj[0, 1] / jm;
            itj[1, 0] = -tj[1, 0] / jm;
            itj[1, 1] = tj[0, 0] / jm;

            // δ = (JᵀJ)⁻¹ Jᵀr
            da[0] = itj[0, 0] * beta[0] + itj[0, 1] * beta[1];
            da[1] = itj[1, 0] * beta[0] + itj[1, 1] * beta[1];

            ia[0] = a[0] + da[0];
            ia[1] = a[1] + da[1];

            // Accept the step only if it reduces χ².
            double incrementedChi2 = 0;
            for (int i = 0; i < points.Count; i++)
            {
                double dy = points[i].Y - points[i].X * ia[0] - ia[1];
                incrementedChi2 += dy * dy;
            }
            if (incrementedChi2 < chi2)
            {
                a[0] = ia[0];
                a[1] = ia[1];
            }
        }
        return a;
    }
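A quick usage sketch with made-up sample points lying near y = 2x + 5 (the test data is assumed here, not taken from the article; Point is System.Drawing.Point with integer coordinates):

    var pts = new List<Point>
    {
        new Point(0, 5), new Point(1, 7), new Point(2, 9),
        new Point(3, 12), new Point(4, 13), new Point(5, 15)
    };
    double[] fit = GaussNewton(pts);
    Console.WriteLine($"a = {fit[0]:F3}, b = {fit[1]:F3}");   // expect roughly a ≈ 2, b ≈ 5

The Levenberg-Marquardt variant below has the same skeleton, adding the damped diagonal and an adaptive update of λ.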
    public static double[] LevenbergMarquardt(List<Point> points, double lambda)
    {
        double[] a = new double[] { 100, -100 };   // initial guess for [a, b]
        double[] da = new double[2];               // parameter increment δ
        double[] ia = new double[2];               // candidate parameters a + δ
        double[,] itj = new double[2, 2];          // inverse of the damped JᵀJ

        for (int iter = 0; iter < 300; iter++)
        {
            // Reset the accumulators every iteration, as in GaussNewton above.
            double chi2 = 0;
            double[] beta = new double[2];     // Jᵀr
            double[,] tj = new double[2, 2];   // JᵀJ, damped below

            for (int i = 0; i < points.Count; i++)
            {
                tj[0, 0] += (double)points[i].X * points[i].X;
                tj[0, 1] += points[i].X;
                tj[1, 0] += points[i].X;
                tj[1, 1] += 1.0;
                double dy = points[i].Y - points[i].X * a[0] - a[1];
                chi2 += dy * dy;
                beta[0] += dy * points[i].X;
                beta[1] += dy;
            }

            // Marquardt damping: scale the diagonal of JᵀJ by (1 + λ).
            tj[0, 0] *= (1.0 + lambda);
            tj[1, 1] *= (1.0 + lambda);

            // Invert the damped symmetric 2x2 system.
            double jm = tj[0, 0] * tj[1, 1] - tj[0, 1] * tj[1, 0];
            itj[0, 0] = tj[1, 1] / jm;
            itj[0, 1] = -tj[0, 1] / jm;
            itj[1, 0] = -tj[1, 0] / jm;
            itj[1, 1] = tj[0, 0] / jm;

            // δ = (JᵀJ + λ·diag(JᵀJ))⁻¹ Jᵀr
            da[0] = itj[0, 0] * beta[0] + itj[0, 1] * beta[1];
            da[1] = itj[1, 0] * beta[0] + itj[1, 1] * beta[1];

            ia[0] = a[0] + da[0];
            ia[1] = a[1] + da[1];

            double incrementedChi2 = 0;
            for (int i = 0; i < points.Count; i++)
            {
                double dy = points[i].Y - points[i].X * ia[0] - ia[1];
                incrementedChi2 += dy * dy;
            }

            if (incrementedChi2 >= chi2)
            {
                // Step rejected: increase damping, moving toward gradient descent.
                lambda *= 10;
            }
            else
            {
                // Step accepted: decrease damping, moving toward Gauss-Newton.
                lambda /= 10;
                a[0] = ia[0];
                a[1] = ia[1];
            }
        }
        return a;
    }
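It is called the same way, with an extra initial damping value; a small λ such as 0.001 is a common starting choice (an assumption here, the article does not specify one), so that early steps behave almost like Gauss-Newton:

    double[] lmFit = LevenbergMarquardt(pts, 0.001);   // pts as in the example above
    Console.WriteLine($"a = {lmFit[0]:F3}, b = {lmFit[1]:F3}");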

Summary: the simplest approach to straight-line fitting is ordinary least squares. Gradient descent, Gauss-Newton, and Levenberg-Marquardt are mainly intended for nonlinear systems; they are applied to a line here only to make their use easy to follow. These algorithms are foundational for the neural-network, machine-learning, and machine-vision topics to come.

Test file download

For further reading, see the Wikipedia articles: Gradient descent · Gauss-Newton algorithm · Levenberg-Marquardt algorithm

