MLP (Multi-Layer Perceptron) Neural Network with PyTorch, Trained with SGD or Adam (Including Data Preprocessing)


We train an MLP (multi-layer perceptron) neural network so that it can predict, from the sonar dataset's sixty features, whether an object is metal or rock. Since the network uses only simple linear (affine) layers, the model's fit is not particularly good.

This is my first blog post, so I'll use this project to practice (O(∩_∩)O).

The related files can be downloaded from GitHub. This example is written in Python (Jupyter Notebook).

First, import the required packages:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch
%matplotlib inline

plt.rcParams['figure.figsize'] = (4, 4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['lines.linewidth'] = 3
sns.set()
# initial setup

The functions of these packages can be looked up on their official sites. Next comes the data preprocessing.

A typical CSV file usually comes with feature headers, like the 'tips.csv' dataset below.

data = sns.load_dataset("tips")
data.head(5)

The result is as follows:

[Output: the first five rows of tips, with headers such as total_bill, tip, and sex]

But the data we are about to train on does not carry feature headers like total_bill, tip, and sex.

So we pass header=None to read_csv, which tells pandas the file has no header row and makes it assign default integer column labels.

origin_data = pd.read_csv('sonar.csv', header=None)
origin_data.head(5)


The dataset is now loaded; the result is as follows:


0 1 2 3 4 5 6 7 8 9 ... 51 52 53 54 55 56 57 58 59 60
0 0.0200 0.0371 0.0428 0.0207 0.0954 0.0986 0.1539 0.1601 0.3109 0.2111 ... 0.0027 0.0065 0.0159 0.0072 0.0167 0.0180 0.0084 0.0090 0.0032 R
1 0.0453 0.0523 0.0843 0.0689 0.1183 0.2583 0.2156 0.3481 0.3337 0.2872 ... 0.0084 0.0089 0.0048 0.0094 0.0191 0.0140 0.0049 0.0052 0.0044 R
2 0.0262 0.0582 0.1099 0.1083 0.0974 0.2280 0.2431 0.3771 0.5598 0.6194 ... 0.0232 0.0166 0.0095 0.0180 0.0244 0.0316 0.0164 0.0095 0.0078 R
3 0.0100 0.0171 0.0623 0.0205 0.0205 0.0368 0.1098 0.1276 0.0598 0.1264 ... 0.0121 0.0036 0.0150 0.0085 0.0073 0.0050 0.0044 0.0040 0.0117 R
4 0.0762 0.0666 0.0481 0.0394 0.0590 0.0649 0.1209 0.2467 0.3564 0.4459 ... 0.0031 0.0054 0.0105 0.0110 0.0015 0.0072 0.0048 0.0107 0.0094 R

5 rows × 61 columns


The dataset has 61 columns, and the last column is the target to predict. Looking at that last column, the values are characters, which cannot be fed directly to the model during training, so we extract the 61st column and convert the character R to 1 and M to 0 (that is, 1 stands for R and 0 stands for M) to meet the training requirements.

The code is as follows:

y_data = origin_data.iloc[:, 60]
y_data.head(5)  # extract the target column and check it
y_data.shape

Call y_data.shape to see how many samples there are, so the loop that rewrites R and M knows its range. The dataset has 208 samples. The code is as follows:

Y = y_data.copy()  # copying the DataFrame column directly raises a warning, so use copy()
for i in range(208):
    if y_data[i] == 'R':
        Y[i] = 1
    else:
        Y[i] = 0  # convert R to 1 and M to 0
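As a side note, the same conversion can be written in one line with pandas (a sketch equivalent to the loop above, not part of the original code):

# Vectorized alternative to the loop above: map 'R' to 1 and 'M' to 0.
Y = y_data.map({'R': 1, 'M': 0})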

Then take the first sixty columns as the x dataset used to predict Y. After extracting them, standardize the x data (earlier, skipping standardization made the trained model's loss curve swing up and down). The code is as follows:

from sklearn.preprocessing import scale

x_data = origin_data.iloc[:, :-1]
x_data = scale(x_data)

Then split x_data and y_data into a training set and a test set at a 4:1 ratio (test_size=0.2), and wrap the train and test sets into datasets. To lighten the load on the GPU, mini-batches are used: a DataLoader automatically splits the dataset into batches of 10 samples.

y_data = Y
x_data = np.array(x_data).reshape(208, 60)
y_data = np.array(y_data).reshape(208,)
y_data = y_data.tolist()  # convert back to lists to make splitting convenient
x_data = x_data.tolist()

# split into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.2)

from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(torch.Tensor(X_train),
                              torch.LongTensor(y_train))
test_dataset = TensorDataset(torch.Tensor(X_test),
                             torch.LongTensor(y_test))  # wrap into datasets

TRAIN_SIZE = np.array(X_train).shape[0]
BATCH_SIZE = 10
NUM_EPOCH = 200
iters_per_epoch = TRAIN_SIZE // BATCH_SIZE
# mini-batch training: batches of 10 over 200 epochs, i.e. 200 * (166 // 10) = 3200 iterations in total

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
# the loaders split the samples into batches automatically
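To sanity-check the loaders, one batch can be inspected (an optional check, not in the original):

# Pull one mini-batch; with BATCH_SIZE = 10 and 60 features this should
# print torch.Size([10, 60]) and torch.Size([10]).
xb, yb = next(iter(train_loader))
print(xb.shape, yb.shape)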

The MLP model class is defined as follows (nn.Sequential builds the model; there are three hidden layers with ReLU activations in between, and a final Softmax whose outputs lie in [0, 1]; the model is fairly simple).

from torch import nn  # for nn.Sequential

class MLP(nn.Module):

    def __init__(self, in_dim, hid_dim1, hid_dim2, hid_dim3, out_dim):
        super(MLP, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hid_dim1),
            nn.ReLU(),
            nn.Linear(hid_dim1, hid_dim2),
            nn.ReLU(),
            nn.Linear(hid_dim2, hid_dim3),
            nn.ReLU(),
            nn.Linear(hid_dim3, out_dim),
            nn.Softmax(dim=1))

    def forward(self, x):
        y = self.layers(x)
        return y
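One caveat: nn.CrossEntropyLoss, used below, already applies LogSoftmax internally, so a common variant drops the final nn.Softmax and lets the network output raw logits. A minimal sketch of that variant (not the model used in this post):

# Variant (a sketch, not the original model): output raw logits and let
# nn.CrossEntropyLoss handle the softmax. For the binary R-vs-M task,
# two output units are also sufficient.
logits_net = nn.Sequential(
    nn.Linear(60, 300), nn.ReLU(),
    nn.Linear(300, 180), nn.ReLU(),
    nn.Linear(180, 60), nn.ReLU(),
    nn.Linear(60, 2))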

Create the network and training setup with SGD as the optimizer; the code is as follows:

net = MLP(in_dim=60, hid_dim1=300, hid_dim2=180, hid_dim3=60, out_dim=10)
criterion = nn.CrossEntropyLoss()  # cross-entropy provides the loss feedback

from torch import optim
optimizer = optim.SGD(params=net.parameters(), lr=0.1)  # SGD (stochastic gradient descent) with learning rate 0.1
optimizer.zero_grad()  # gradients must be cleared before every update; clear them here first just in case
# SGD training loop
train_loss_history = []
test_acc_history = []

for epoch in range(NUM_EPOCH):

    for i, data in enumerate(train_loader):

        inputs, labels = data

        optimizer.zero_grad()
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()

        optimizer.step()

        train_loss = loss.tolist()
        train_loss_history.append(train_loss)

        if (i + 1) % iters_per_epoch == 0:
            print("[{}, {}] Loss: {}".format(epoch + 1, i + 1, train_loss))

    total = 0
    correct = 0
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, preds = torch.max(outputs.data, 1)

        total += labels.size(0)
        correct += (preds == labels).sum()

    print("Accuracy: {:.2f}%".format(100.0 * correct / total))
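As an optional refinement (not in the original loop), the evaluation pass can be wrapped in torch.no_grad() so PyTorch skips gradient tracking during testing; a minimal sketch of the test loop with that change:

# Evaluation without gradient tracking (optional variant of the test loop).
with torch.no_grad():
    total = 0
    correct = 0
    for inputs, labels in test_loader:
        outputs = net(inputs)
        _, preds = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (preds == labels).sum().item()
print("Accuracy: {:.2f}%".format(100.0 * correct / total))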

 

All the loss values were recorded in the train_loss_history list, so we now call matplotlib.pyplot to draw the loss curve.

import matplotlib.pyplot as plt
plt.plot(train_loss_history)

The output is as follows:

[<matplotlib.lines.Line2D at 0x25be01fcdf0>]
[Figure: training loss curve for the SGD model]
Next, draw the confusion matrix and compute the evaluation metrics:
all_dataset = TensorDataset(torch.Tensor(x_data),
                            torch.LongTensor(y_data))
all_loader = DataLoader(all_dataset, batch_size=BATCH_SIZE, shuffle=False)
# for convenience, pack all the data and run it through the model;
# shuffle=False keeps the prediction order aligned with y_data
# (shuffling here would scramble the confusion matrix)

total = []
for data in all_loader:
    inputs, labels = data
    outputs = net(inputs)
    _, preds = torch.max(outputs.data, 1)

    total.append(preds.tolist())
# store the predictions in the list total

total_down = [token for st in total for token in st]

# draw the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_data, total_down)
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues", annot_kws={"size": 20}, cbar=False)
plt.ylabel('True')
plt.xlabel('Predicted')
sns.set(font_scale=2)

acc = 0
for i in range(208):
    if y_data[i] == total_down[i]:
        acc = acc + 1
acc

TP = FN = FP = TN = 0
for i in range(208):
    if y_data[i] == 1 and total_down[i] == 1:
        TN = TN + 1
    if y_data[i] == 0 and total_down[i] == 0:
        TP = TP + 1
    if y_data[i] == 1 and total_down[i] == 0:
        FP = FP + 1
    if y_data[i] == 0 and total_down[i] == 1:
        FN = FN + 1

print("{} {} {} {}".format(TP, FP, FN, TN))

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
print("Accuracy is:{} Precision is:{} Sensitivity is:{} Specificity is:{}".format(Accuracy, Precision, Sensitivity, Specificity))
# compute the evaluation metrics

print('Total: {}  Correct predictions: {}  Wrong predictions: {}'.format(TP + TN + FP + FN, TP + TN, FP + FN))
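As a cross-check on the hand-rolled counts, scikit-learn can compute the same metrics directly. A minimal sketch, using pos_label=0 so that class 0 (metal) is treated as positive, matching the counting convention above:

from sklearn.metrics import accuracy_score, precision_score, recall_score

# pos_label=0 mirrors the manual counts above, where class 0 (M) is positive.
print("Accuracy:", accuracy_score(y_data, total_down))
print("Precision:", precision_score(y_data, total_down, pos_label=0))
print("Sensitivity:", recall_score(y_data, total_down, pos_label=0))
print("Specificity:", recall_score(y_data, total_down, pos_label=1))  # recall of the other class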

[Figure: confusion matrix heatmap for the SGD model]

If the Adam optimizer is used instead, the code and results are as follows:
from torch import optim

net = MLP(in_dim=60, hid_dim1=540, hid_dim2=180, hid_dim3=30, out_dim=10)  # adjusted hidden-layer sizes
optimizer = optim.Adam(params=net.parameters(), lr=0.001)  # switch to the Adam optimizer
criterion = nn.CrossEntropyLoss()

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
train_loss_history = []
test_acc_history = []

# Adam training loop
for epoch in range(NUM_EPOCH):

    for i, data in enumerate(train_loader):

        inputs, labels = data

        optimizer.zero_grad()
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()

        optimizer.step()

        train_loss = loss.tolist()
        train_loss_history.append(train_loss)

        if (i + 1) % iters_per_epoch == 0:
            print("[{}, {}] Loss: {}".format(epoch + 1, i + 1, train_loss))

    total = 0
    correct = 0
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, preds = torch.max(outputs.data, 1)

        total += labels.size(0)
        correct += (preds == labels).sum()

    print("Accuracy: {:.2f}%".format(100.0 * correct / total))
import matplotlib.pyplot as plt
plt.plot(train_loss_history)
[<matplotlib.lines.Line2D at 0x25be08b49d0>]
[Figure: training loss curve for the Adam model]
After training, you can feed all the data through the model to obtain a Confusion Matrix and inspect the performance metrics, then adjust the model to your actual needs for better performance.
Only the code for drawing the Adam model's matrix is given here; the intermediate step follows the earlier code, and a sketch of it is shown below.
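A minimal sketch of that omitted step, mirroring the SGD section (it reuses all_dataset from above and keeps shuffle=False so predictions stay aligned with y_data):

# Collect the Adam model's predictions over the whole dataset.
all_loader = DataLoader(all_dataset, batch_size=BATCH_SIZE, shuffle=False)
total = []
for inputs, labels in all_loader:
    outputs = net(inputs)
    _, preds = torch.max(outputs.data, 1)
    total.append(preds.tolist())
total_down = [token for st in total for token in st]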
# draw the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_data, total_down)
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues", annot_kws={"size": 20}, cbar=False)
plt.ylabel('True')
plt.xlabel('Predicted')
sns.set(font_scale=2)

The matrix is as follows:

[Figure: confusion matrix heatmap for the Adam model]

Simple arithmetic then yields the Precision, Sensitivity, Accuracy, and Specificity metrics (computed the same way as for SGD above).

The output is as follows:

Accuracy is:0.6201923076923077  Precision is:0.6311475409836066  Sensitivity is:0.6936936936936937  Specificity is:0.5360824742268041

This model was written in IPython (Jupyter Notebook); if you use PyCharm or a similar environment, remove the notebook-specific lines (such as %matplotlib inline) yourself.
