Machine Learning 4 - Multivariate Linear Regression + Python Implementation


1 Multivariate Linear Regression

In the more general case, each sample in the dataset \(D\) is described by \(d\) attributes, and we try to learn

\[f(\boldsymbol{x}_i) = \boldsymbol{w}^T\boldsymbol{x}_i+b \text{, such that } f(\boldsymbol{x}_i) \simeq y_i \]

This is called multivariate linear regression (also known as multi-variable linear regression).

As in the univariate case, we use the least squares method to estimate \(\boldsymbol{w}\) and \(b\).

From \(f(\boldsymbol{x}_i) = \boldsymbol{w}^T\boldsymbol{x}_i+b\) we know:

\[\begin{aligned} f(\boldsymbol{x}_1) &= w_1x_{11} + w_2x_{12} + \cdots + w_dx_{1d} + b \\ f(\boldsymbol{x}_2) &= w_1x_{21} + w_2x_{22} + \cdots + w_dx_{2d} + b \\ &\;\;\vdots \\ f(\boldsymbol{x}_m) &= w_1x_{m1} + w_2x_{m2} + \cdots + w_dx_{md} + b \end{aligned} \]

We write:

\[\hat{\boldsymbol{w}} = (\boldsymbol{w};b) = \begin{pmatrix}w_1\\w_2\\ \vdots \\w_d\\b\end{pmatrix} \]

\[\boldsymbol{X} =\begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1d} & 1 \\ x_{21} & x_{22} & \cdots & x_{2d} & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{md} & 1 \end{pmatrix} =\begin{pmatrix} \boldsymbol{x}_1^T & 1 \\ \boldsymbol{x}_2^T & 1 \\ \vdots & \vdots \\ \boldsymbol{x}_m^T & 1 \end{pmatrix} \]

\[\boldsymbol{y} = (y_1;y_2;\cdots ;y_m) = \begin{pmatrix}y_1\\y_2\\ \vdots \\y_m\end{pmatrix} \]

Then the vector of predictions \(\left(f(\boldsymbol{x}_1); f(\boldsymbol{x}_2); \cdots; f(\boldsymbol{x}_m)\right)\) can be written compactly as:

\[\hat{\boldsymbol{y}} = \boldsymbol{X}\hat{\boldsymbol{w}} \tag{1.1} \]

Analogous to Eq. (2.3) of the previous post, we have:

\[\hat{\boldsymbol{w}}^* = \underset{\hat{\boldsymbol{w}}}{\arg\min}\ (\boldsymbol{y} - \boldsymbol{X}\hat{\boldsymbol{w}})^T(\boldsymbol{y} - \boldsymbol{X}\hat{\boldsymbol{w}}) \tag{1.2} \]

Let \(E_{\hat{\boldsymbol{w}}} = (\boldsymbol{y}-\boldsymbol{X}\hat{\boldsymbol{w}})^T(\boldsymbol{y}-\boldsymbol{X}\hat{\boldsymbol{w}})\). Expanding this quadratic form first makes the differentiation transparent:

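\[E_{\hat{\boldsymbol{w}}} = \boldsymbol{y}^T\boldsymbol{y} - 2\hat{\boldsymbol{w}}^T\boldsymbol{X}^T\boldsymbol{y} + \hat{\boldsymbol{w}}^T\boldsymbol{X}^T\boldsymbol{X}\hat{\boldsymbol{w}} \]

Differentiating term by term with respect to \(\hat{\boldsymbol{w}}\), using \(\frac{\partial \boldsymbol{a}^T\hat{\boldsymbol{w}}}{\partial \hat{\boldsymbol{w}}} = \boldsymbol{a}\) and \(\frac{\partial \hat{\boldsymbol{w}}^T\boldsymbol{A}\hat{\boldsymbol{w}}}{\partial \hat{\boldsymbol{w}}} = 2\boldsymbol{A}\hat{\boldsymbol{w}}\) for symmetric \(\boldsymbol{A}\), gives: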
\[\cfrac{\partial E_{\hat{\boldsymbol{w}}}}{\partial \hat{\boldsymbol{w}}}=2\boldsymbol{X}^T(\boldsymbol{X}\hat{\boldsymbol{w}}-\boldsymbol{y}) \tag{1.3} \]

Setting this derivative to zero yields the closed-form solution for the optimal \(\hat{\boldsymbol{w}}\).
When \(\boldsymbol{X}^T\boldsymbol{X}\) is a full-rank matrix or a positive definite matrix, setting Eq. (1.3) to zero gives:

\[\hat{\boldsymbol{w}}^* = (\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T\boldsymbol{y} \tag{1.4} \]

Letting \(\hat{\boldsymbol{x}}_i = (\boldsymbol{x}_i; 1)\), the final learned multivariate linear regression model is:

\[f(\hat{\boldsymbol{x}}_i) = \hat{\boldsymbol{x}}_i^T(\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T\boldsymbol{y} \tag{1.5} \]

When \(\boldsymbol{X}^T\boldsymbol{X}\) is not full rank, there are multiple \(\hat{\boldsymbol{w}}\) that all minimize the mean squared error, and which one is output depends on the inductive bias of the learning algorithm. A common remedy is to introduce a regularization term, as sketched below.

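As an illustration of that remedy, here is a minimal NumPy sketch (not from the original post) of ridge regression, whose closed form replaces \((\boldsymbol{X}^T\boldsymbol{X})^{-1}\) with \((\boldsymbol{X}^T\boldsymbol{X} + \lambda \boldsymbol{I})^{-1}\). In practice the intercept column is usually left unregularized, which this sketch ignores for simplicity:

import numpy as np

def ridge_closed_form(X, y, lam=1.0):
    """Closed-form ridge solution (X^T X + lam*I)^{-1} X^T y.

    For lam > 0, X^T X + lam*I is positive definite and hence
    invertible even when X^T X itself is rank-deficient.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)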
2 Python Implementation of Multivariate Linear Regression

Given the following data, we want to predict the price of a pizza by modeling the linear relationship between its price, its diameter, and its number of toppings:
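For reference, here are the ten rows used below, reconstructed from the array printouts in this section (the column names are illustrative, since the CSV's actual headers are not shown):

Id   Diameter   Toppings   Price
1        6          2        7.0
2        8          1        9.0
3       10          0       13.0
4       14          2       17.5
5       18          0       18.0
6        8          2       11.0
7        9          0        8.5
8       11          2       15.0
9       16          2       18.0
10      12          0       11.0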

2.1 Manual Implementation

2.1.1 Import the required modules

import numpy as np
import pandas as pd

2.1.2 Load the data

pizza = pd.read_csv("pizza_multi.csv", index_col='Id')  # use the Id column as the row index
pizza

2.1.3 Compute the coefficients

By formula (1.4):

\[\hat{\boldsymbol{w}}^* = (\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T\boldsymbol{y} \tag{2.1} \]

we can compute the value of \(\hat{\boldsymbol{w}}^*\).

We use the last 5 rows of the data as the test set and the rest as the training set:

X = pizza.iloc[:-5, :2].values                  # training features: diameter and number of toppings
y = pizza.iloc[:-5, 2].values.reshape((-1, 1))  # training target: price, as a column vector
print(X)
print(y)
[[ 6  2]
 [ 8  1]
 [10  0]
 [14  2]
 [18  0]]
[[ 7. ]
 [ 9. ]
 [13. ]
 [17.5]
 [18. ]]
ones = np.ones(X.shape[0]).reshape(-1, 1)  # column of ones for the intercept term
X = np.hstack((X, ones))                   # augmented design matrix, as in the definition of X above
X
array([[ 6.,  2.,  1.],
       [ 8.,  1.,  1.],
       [10.,  0.,  1.],
       [14.,  2.,  1.],
       [18.,  0.,  1.]])
w_ = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)  # (X^T X)^{-1} X^T y
w_
array([[1.01041667],
       [0.39583333],
       [1.1875    ]])

That is:

\[\hat{\boldsymbol{w}}^* = (\boldsymbol{w};b) = \begin{pmatrix}w_1\\w_2\\b\end{pmatrix} = \begin{pmatrix}1.01041667\\0.39583333\\1.1875\end{pmatrix} \]

\[f(\boldsymbol{x}) = 1.01041667x_1 + 0.39583333x_2 + 1.1875 \]

b = w_[-1]   # intercept b
w = w_[:-1]  # weight vector w
print(w)
print(b)
[[1.01041667]
 [0.39583333]]
[1.1875]
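A side note on numerics: explicitly inverting \(\boldsymbol{X}^T\boldsymbol{X}\) works on this small, well-conditioned dataset, but NumPy's built-in least-squares solver handles the same problem more stably. A sketch using the augmented X and y from above:

# Numerically safer alternative: solve the least-squares problem directly
w_lstsq, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(w_lstsq)  # should agree with w_ above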

2.1.4 Prediction

X_test = pizza.iloc[-5:, :2].values                  # test features: the last 5 rows
y_test = pizza.iloc[-5:, 2].values.reshape((-1, 1))  # test target: price
print(X_test)
print(y_test)
[[ 8  2]
 [ 9  0]
 [11  2]
 [16  2]
 [12  0]]
[[11. ]
 [ 8.5]
 [15. ]
 [18. ]
 [11. ]]
y_pred = np.dot(X_test, w) + b                    # predict: X_test w + b
# y_pred = np.dot(np.hstack((X_test, ones)), w_)  # equivalent form using the augmented matrix
print("Target values:\n", y_test)
print("Predicted values:\n", y_pred)
Target values:
 [[11. ]
 [ 8.5]
 [15. ]
 [18. ]
 [11. ]]
Predicted values:
 [[10.0625    ]
 [10.28125   ]
 [13.09375   ]
 [18.14583333]
 [13.3125    ]]
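To put a number on these predictions before switching to sklearn, we can compute the coefficient of determination \(R^2 = 1 - SS_{res}/SS_{tot}\) by hand, a small sketch using the arrays above:

ss_res = np.sum((y_test - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_test - y_test.mean()) ** 2)    # total sum of squares
print("R-squared: %.2f" % (1 - ss_res / ss_tot))  # expected to match model.score in 2.2 (about 0.77)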

2.2 Using sklearn

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
# read the data
pizza = pd.read_csv("pizza_multi.csv", index_col='Id')
X = pizza.iloc[:-5, :2].values
y = pizza.iloc[:-5, 2].values.reshape((-1, 1))
X_test = pizza.iloc[-5:, :2].values
y_test = pizza.iloc[-5:, 2].values.reshape((-1, 1))
# fit a linear model
model = LinearRegression()
model.fit(X, y)
# predict
predictions = model.predict(X_test)
for i, prediction in enumerate(predictions):
    print('Predicted: %s, Target: %s' % (prediction, y_test[i]))
Predicted: [10.0625], Target: [11.]
Predicted: [10.28125], Target: [8.5]
Predicted: [13.09375], Target: [15.]
Predicted: [18.14583333], Target: [18.]
Predicted: [13.3125], Target: [11.]
# evaluate the model
"""
The score method computes the coefficient of determination, R-squared.
R-squared is at most 1 and can be negative when the model fits the test
data worse than simply predicting the mean of the targets; the closer
it is to 1, the better the fit.
"""
print('R-squared: %.2f' % model.score(X_test, y_test))
R-squared: 0.77
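As a sanity check against section 2.1, the fitted coefficients and intercept can be read off the model; they should match the w and b computed manually:

print(model.coef_)       # expected: approximately [[1.01041667 0.39583333]]
print(model.intercept_)  # expected: approximately [1.1875]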

This article is original content; please do not repost without contacting the author and citing the source. Thank you!
Author: Raina_RLN https://www.cnblogs.com/raina/

