3.1
In Eq. 3.2, $f(x)=\omega ^{T}x+b$, the parameters $\omega$ and $b$ each have their own meaning. Simply put, $\omega$ determines the direction (orientation) of the learned model (a line, plane, or hyperplane), while $b$ determines the intercept. When the learned model happens to pass through the origin, the bias term $b$ can be ignored. In essence, the bias captures the overall vertical shift of the fitted model; it can be viewed as a linear correction for the offset left over by the other variables, so in general it should be kept. However, if the dataset is centered, i.e., the mean vector is subtracted from the inputs and the target variable, the bias term is no longer needed, because the fitted model then passes through the origin.
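A minimal numerical sketch of this last point (the synthetic data and coefficient values below are hypothetical, chosen only for illustration): after centering both the inputs and the target, the optimal least-squares intercept is essentially zero.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=100)

# Fit with an explicit bias column on the raw data.
Xb = np.hstack([X, np.ones((100, 1))])
w_raw = np.linalg.lstsq(Xb, y, rcond=None)[0]
print("intercept on raw data:", w_raw[-1])        # close to 3.0

# Center both X and y, then fit again.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
Xcb = np.hstack([Xc, np.ones((100, 1))])
w_cen = np.linalg.lstsq(Xcb, yc, rcond=None)[0]
print("intercept after centering:", w_cen[-1])    # close to 0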
3.2
For a function $f(x)$ defined on an interval $[a,b]$: if for any two points $x_1, x_2$ in the interval we have $f(\frac{x_1+x_2}{2})\leq \frac{f(x_1)+f(x_2)}{2}$, then $f(x)$ is called a convex function on $[a,b]$. For functions on the reals, convexity can be checked via the second derivative: if the second derivative is non-negative on the interval, the function is convex; if it is strictly positive on the interval, the function is strictly convex. For example, $f(x)=x^{2}$ has $f''(x)=2>0$ everywhere and is therefore strictly convex.
For Eq. 3.18, $y=\frac{1}{1+e^{-(\omega ^{T}x+b)}}$, we have
$\frac{\partial y}{\partial \omega}=\frac{e^{-(\omega ^{T}x+b)}}{(1+e^{-(\omega ^{T}x+b)})^{2}}\,x=x\,\frac{1}{1+e^{-(\omega ^{T}x+b)}}\left(1-\frac{1}{1+e^{-(\omega ^{T}x+b)}}\right)=x\,y(1-y)$
$\frac{\partial^{2} y}{\partial \omega\,\partial \omega^{T}}=\frac{\partial}{\partial \omega^{T}}\left[x\,y(1-y)\right]=xx^{T}\,y(1-y)(1-2y)$
Since $y$ takes values in $(0,1)$, the factor $y(1-y)$ is always positive while $(1-2y)$ changes sign at $y=\frac{1}{2}$, so the second derivative is positive semidefinite where $y<\frac{1}{2}$ and negative semidefinite where $y>\frac{1}{2}$. The second derivative is therefore not non-negative everywhere, and the function is non-convex.
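This can also be checked numerically in the scalar case (a minimal sketch with $x=1$, $b=0$; the two test points are arbitrary): where $y>\frac{1}{2}$ the sigmoid is concave, so the midpoint inequality from the convexity definition above fails.

import numpy as np

# Sigmoid in the scalar case x = 1, b = 0, as a function of omega.
sigmoid = lambda w: 1.0 / (1.0 + np.exp(-w))

# Pick two points where y > 1/2 (the concave region).
w1, w2 = 1.0, 3.0
mid = sigmoid((w1 + w2) / 2)
avg = (sigmoid(w1) + sigmoid(w2)) / 2
print(mid, avg)   # mid > avg, violating the convexity inequality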
3.3
对率回归 (logit regression) refers to logistic regression.
[Figure: the watermelon 3.0α dataset — 密度 (density), 含糖率 (sugar ratio), 好瓜 (good melon); the raw values appear in the data listing under 3.5.]

Replace the 好瓜 column with a 0/1 variable and run logistic regression; the Python code is as follows:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

dataset = pd.read_csv('/home/zwt/Desktop/watermelon3a.csv')

# Data preparation
X = dataset[['密度','含糖率']]
Y = dataset['好瓜']
good_melon = dataset[dataset['好瓜'] == 1]
bad_melon = dataset[dataset['好瓜'] == 0]

# Scatter plot of the raw data
f1 = plt.figure(1)
plt.title('watermelon_3a')
plt.xlabel('density')
plt.ylabel('ratio_sugar')
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.scatter(bad_melon['密度'], bad_melon['含糖率'], marker='o', color='r', s=100, label='bad')
plt.scatter(good_melon['密度'], good_melon['含糖率'], marker='o', color='g', s=100, label='good')
plt.legend(loc='upper right')

# Split into training and validation sets
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.5, random_state=0)

# Train
log_model = LogisticRegression()
log_model.fit(X_train, Y_train)

# Validate
Y_pred = log_model.predict(X_test)

# Report
print(metrics.confusion_matrix(Y_test, Y_pred))
print(metrics.classification_report(Y_test, Y_pred, target_names=['Bad','Good']))
print(log_model.coef_)

# Plot the decision boundary w1*x1 + w2*x2 + b = 0, i.e. x2 = -(b + w1*x1) / w2
theta1, theta2 = log_model.coef_[0][0], log_model.coef_[0][1]
intercept = log_model.intercept_[0]
X_pred = np.linspace(0, 1, 100)
line_pred = -(intercept + theta1 * X_pred) / theta2
plt.plot(X_pred, line_pred)
plt.show()
Model performance output (precision, recall, F1 score):
              precision    recall  f1-score   support
         Bad       0.75      0.60      0.67         5
        Good       0.60      0.75      0.67         4
   micro avg       0.67      0.67      0.67         9
   macro avg       0.68      0.68      0.67         9
weighted avg       0.68      0.67      0.67         9
The actual and predicted labels on the validation set can also be printed:
      密度   含糖率  Y_test  Y_pred
1    0.774  0.376       1       1
6    0.481  0.149       1       0
8    0.666  0.091       0       0
9    0.243  0.267       0       1
13   0.657  0.198       0       0
4    0.556  0.215       1       1
2    0.634  0.264       1       1
14   0.360  0.370       0       1
10   0.245  0.057       0       0
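A short snippet like the following can produce this table (a sketch assuming the variables from the code above are still in scope):

result = X_test.copy()
result['Y_test'] = Y_test
result['Y_pred'] = Y_pred
print(result)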
3.4
First, the logistic regression code for the wine-quality dataset:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', 200)
pd.set_option('display.expand_frame_repr', False)
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

dataset = pd.read_csv('/home/zwt/Desktop/winequality-red_new.csv')

# Data preparation: add a binary variable based on the original quality column,
# defining quality >= 5 as good wine (1) and the rest as bad wine (0)
dataset['quality2'] = dataset['quality'].apply(lambda x: 0 if x < 5 else 1)
X = dataset[["fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar",
             "chlorides", "free_sulfur_dioxide", "total_sulfur_dioxide", "density",
             "pH", "sulphates", "alcohol"]]
Y = dataset["quality2"]

# Split into training and validation sets
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.5, random_state=0)

# Train
log_model = LogisticRegression()
log_model.fit(X_train, Y_train)

# Validate
Y_pred = log_model.predict(X_test)

# Report
print(metrics.confusion_matrix(Y_test, Y_pred))
print(metrics.classification_report(Y_test, Y_pred))
print(log_model.coef_)
Note that the dataset as downloaded from UCI uses semicolons rather than commas as field separators, so it cannot be read directly; the following script fixes the format first:
# Convert the UCI file from semicolon-separated to comma-separated
fr = open('/home/zwt/Desktop/winequality-red.csv', 'r', encoding='utf-8')
fw = open('/home/zwt/Desktop/winequality-red_new.csv', 'w', encoding='utf-8')
f = fr.readlines()
for line in f:
    line = line.replace(';', ',')
    fw.write(line)
fr.close()
fw.close()
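Alternatively, the conversion step can be skipped entirely, since pandas can read semicolon-separated files directly via the sep parameter:

dataset = pd.read_csv('/home/zwt/Desktop/winequality-red.csv', sep=';')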
Comparing the error rates of the two evaluation methods (10 runs of 10-fold cross-validation vs. leave-one-out):
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection
from sklearn.datasets import load_wine

# Load the wine dataset
dataset = load_wine()

# 10 runs of 10-fold cross-validation
def tenfolds():
    k = 0
    truth = 0
    while k < 10:
        kf = model_selection.KFold(n_splits=10, random_state=None, shuffle=True)
        for x_train_index, x_test_index in kf.split(dataset.data):
            x_train = dataset.data[x_train_index]
            y_train = dataset.target[x_train_index]
            x_test = dataset.data[x_test_index]
            y_test = dataset.target[x_test_index]
            # Train a logistic regression model on the training folds
            log_model = LogisticRegression()
            log_model.fit(x_train, y_train)
            # Predict on the held-out fold
            y_pred = log_model.predict(x_test)
            # Unlike leave-one-out, each held-out fold here contains
            # len(dataset.target)/10 samples, so y_pred is an array of
            # predictions rather than a single value
            for i in range(len(x_test)):
                if y_pred[i] == y_test[i]:
                    truth += 1
        k += 1
    # truth accumulates correct predictions over 10 full passes of the
    # dataset, so divide by 10 * dataset size
    accuracy = truth / (10 * len(dataset.target))
    print("Accuracy of logistic regression under 10 runs of 10-fold cross-validation:", accuracy)

tenfolds()

# Leave-one-out
def leaveone():
    loo = model_selection.LeaveOneOut()
    true = 0
    for x_train_index, x_test_index in loo.split(dataset.data):
        x_train = dataset.data[x_train_index]
        y_train = dataset.target[x_train_index]
        x_test = dataset.data[x_test_index]
        y_test = dataset.target[x_test_index]
        # Train a logistic regression model on all samples but one
        log_model = LogisticRegression()
        log_model.fit(x_train, y_train)
        # Predict the single held-out sample
        y_pred = log_model.predict(x_test)
        if y_pred == y_test:
            true += 1
    # Compute the accuracy over all held-out samples
    accuracy = true / len(dataset.target)
    print("Accuracy of logistic regression under leave-one-out validation:", accuracy)

leaveone()
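For comparison, the same experiment can be written much more compactly with scikit-learn's built-in cross_val_score helper (the max_iter value below is only a convergence safeguard, not something the original code used):

from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, LeaveOneOut

dataset = load_wine()
model = LogisticRegression(max_iter=10000)

# Mean accuracy over 10 folds, and over leave-one-out
print(cross_val_score(model, dataset.data, dataset.target, cv=10).mean())
print(cross_val_score(model, dataset.data, dataset.target, cv=LeaveOneOut()).mean())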
3.5
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn import model_selection
from sklearn import metrics

dataset = pd.read_csv('/home/zwt/Desktop/watermelon3a.csv')

# Data preparation
X = dataset[['密度','含糖率']]
Y = dataset['好瓜']

# Split into training and validation sets
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.5, random_state=0)

# Train
LDA_model = LinearDiscriminantAnalysis()
LDA_model.fit(X_train, Y_train)

# Validate
Y_pred = LDA_model.predict(X_test)

# Report
print(metrics.confusion_matrix(Y_test, Y_pred))
print(metrics.classification_report(Y_test, Y_pred, target_names=['Bad','Good']))
print(LDA_model.coef_)

# Plot
good_melon = dataset[dataset['好瓜'] == 1]
bad_melon = dataset[dataset['好瓜'] == 0]
plt.scatter(bad_melon['密度'], bad_melon['含糖率'], marker='o', color='r', s=100, label='bad')
plt.scatter(good_melon['密度'], good_melon['含糖率'], marker='o', color='g', s=100, label='good')
plt.legend(loc='upper right')
plt.show()
import numpy as np
import matplotlib.pyplot as plt

# Watermelon 3.0α data: [density, sugar ratio, label]
data = [[0.697, 0.460, 1], [0.774, 0.376, 1], [0.634, 0.264, 1], [0.608, 0.318, 1],
        [0.556, 0.215, 1], [0.403, 0.237, 1], [0.481, 0.149, 1], [0.437, 0.211, 1],
        [0.666, 0.091, 0], [0.243, 0.267, 0], [0.245, 0.057, 0], [0.343, 0.099, 0],
        [0.639, 0.161, 0], [0.657, 0.198, 0], [0.360, 0.370, 0], [0.593, 0.042, 0],
        [0.719, 0.103, 0]]

# Split the samples by label (the first 8 rows are positive, the rest negative)
data = np.array([i[:-1] for i in data])
X0 = np.array(data[:8])
X1 = np.array(data[8:])

# Class means
miu0 = np.mean(X0, axis=0).reshape((-1, 1))
miu1 = np.mean(X1, axis=0).reshape((-1, 1))

# Class covariances
cov0 = np.cov(X0, rowvar=False)
cov1 = np.cov(X1, rowvar=False)

# Within-class scatter matrix and w = S_w^{-1} (miu0 - miu1)
S_w = np.mat(cov0 + cov1)
Omiga = S_w.I * (miu0 - miu1)

# Plot the samples and the line w^T x = 0 through the origin
plt.scatter(X0[:, 0], X0[:, 1], c='b', label='+', marker='+')
plt.scatter(X1[:, 0], X1[:, 1], c='r', label='-', marker='_')
plt.plot([0, 1], [0, float(-Omiga[0] / Omiga[1])], label='y')
plt.xlabel('密度', fontproperties='SimHei', fontsize=15, color='green')
plt.ylabel('含糖率', fontproperties='SimHei', fontsize=15, color='green')
plt.title(r'LinearDiscriminantAnalysis', fontproperties='SimHei', fontsize=25)
plt.legend()
plt.show()
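A quick sanity check on the hand-computed direction (a minimal sketch assuming the variables from the listing above are in scope): projecting both classes onto $w=S_w^{-1}(\mu_0-\mu_1)$ must give the positive class the larger projected mean, since $w^{T}(\mu_0-\mu_1)=(\mu_0-\mu_1)^{T}S_w^{-1}(\mu_0-\mu_1)>0$ when $S_w$ is positive definite.

# Project each class onto w; by construction the positive class (X0)
# ends up with the larger projected mean
w = np.asarray(Omiga).ravel()
print((X0 @ w).mean(), (X1 @ w).mean())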

3.6
For data that are not linearly separable, the general idea for still using discriminant analysis is to map the samples into a higher-dimensional space in which they become linearly separable, and then apply discriminant analysis in that space, as the sketch below illustrates.
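A minimal sketch of this idea (the dataset and feature map are my own choices, not from the book): make_circles generates a 2-D dataset that no line can separate, and adding degree-2 polynomial features, which carry the $x_1^2+x_2^2$ information, makes it linearly separable in the lifted space.

from sklearn.datasets import make_circles
from sklearn.preprocessing import PolynomialFeatures
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Two concentric circles: not linearly separable in the original 2-D space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# LDA directly on the raw features performs near chance level
print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))

# Lift to degree-2 polynomial features (adds x1^2, x1*x2, x2^2);
# the two circles become linearly separable in this space
X_lift = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
print(LinearDiscriminantAnalysis().fit(X_lift, y).score(X_lift, y))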
3.7
3.8
In theory, an important condition for ECOC (error-correcting output codes) to correct errors as intended is that the error probabilities of the individual code bits be comparable. If some code bit has a very high error rate, that bit will tend to output the same result regardless of the input and lose its discriminative power; it then behaves like an all-0 or all-1 classifier.
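A small simulation of this point (the 4-class, 7-bit code matrix and the error rates below are hypothetical, chosen only for illustration): under Hamming-distance decoding, accuracy drops when one code bit degenerates into an all-0 classifier, even though the other bits are unchanged.

import numpy as np

# Hypothetical ECOC coding matrix: 4 classes x 7 bits
M = np.array([[0, 0, 0, 0, 0, 0, 0],
              [0, 1, 1, 1, 0, 1, 1],
              [1, 0, 1, 1, 1, 0, 1],
              [1, 1, 0, 1, 1, 1, 0]])

def simulate(p, dead_bit=None, n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_trials):
        c = rng.integers(len(M))                          # true class
        flips = (rng.random(M.shape[1]) < p).astype(int)  # per-bit errors
        output = (M[c] + flips) % 2                       # noisy code word
        if dead_bit is not None:
            output[dead_bit] = 0      # this bit's classifier always says 0
        pred = np.argmin(np.abs(M - output).sum(axis=1))  # Hamming decoding
        correct += int(pred == c)
    return correct / n_trials

print(simulate(0.15))              # every bit equally noisy
print(simulate(0.15, dead_bit=0))  # one bit degenerates into an all-0 classifier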
3.9
The book notes that for OvR and MvM, since every class receives the same treatment, the effects of class imbalance in the decomposed binary tasks largely cancel out, so no special handling is usually needed. Taking ECOC encoding as an example, each generated binary classifier splits all samples into two relatively balanced groups, which reduces the impact of class imbalance. Of course, obvious class imbalance can still arise after decomposition, for example when there is one very large class and many small ones.
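A tiny illustration of the "same treatment" argument (the class counts are hypothetical): under OvR on a balanced multiclass problem, every binary task has the same positive/negative ratio, so whatever bias the imbalance induces is shared by all the classifiers being compared against each other.

import numpy as np

# Hypothetical balanced 5-class problem, 100 samples per class
labels = np.repeat(np.arange(5), 100)

# Positive fraction of each one-vs-rest binary task
for c in range(5):
    pos = (labels == c).mean()
    print(f"class {c}: positives {pos:.2f}, negatives {1 - pos:.2f}")
# Every task is 0.20 vs 0.80 -- equally imbalanced across all classifiers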
3.10
Dataset (extraction code: rw81)

