[Implementation] CNN-food11


Food-11 Experiment Notes

Dataset Overview

Food categories (label: name):

0: Bread, 1: Dairy product, 2: Dessert, 3: Egg, 4: Fried food, 5: Meat, 6: Noodles/Pasta, 7: Rice, 8: Seafood, 9: Soup, 10: Vegetable/Fruit

Dataset split:

Training set: 9866 images
Validation set: 3430 images
Testing set: 3347 images

 

Implementation Process

1) Data preprocessing:

  Read the images with cv2 and resize them to 128*128. Wrap them in a Dataset and feed that to a DataLoader (which makes shuffling and batching easy); a sketch follows below.
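A minimal sketch of this step, following the structure of the official notebook (the "<label>_<id>.jpg" filename convention is the HW3 data layout; treat the paths as placeholders):

import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

def readfile(path, has_label):
    # training/validation filenames look like "<label>_<id>.jpg"
    files = sorted(os.listdir(path))
    x = np.zeros((len(files), 128, 128, 3), dtype=np.uint8)
    y = np.zeros(len(files), dtype=np.uint8)
    for i, file in enumerate(files):
        img = cv2.imread(os.path.join(path, file))
        x[i] = cv2.resize(img, (128, 128))
        if has_label:
            y[i] = int(file.split("_")[0])
    return (x, y) if has_label else x

class ImgDataset(Dataset):
    def __init__(self, x, y=None, transform=None):
        self.x = x
        # labels must be LongTensor for CrossEntropyLoss
        self.y = torch.LongTensor(y) if y is not None else None
        self.transform = transform
    def __len__(self):
        return len(self.x)
    def __getitem__(self, index):
        X = self.x[index]
        if self.transform is not None:
            X = self.transform(X)
        return (X, self.y[index]) if self.y is not None else X

# usage:
# train_x, train_y = readfile("food-11/training", True)
# train_set = ImgDataset(train_x, train_y, transform=train_transform)
# train_loader = DataLoader(train_set, batch_size=128, shuffle=True)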

2) Model:

  A CNN followed by fully connected (FC) layers. The CNN structure is as follows:

class Classifier(nn.Module):
    def __init__(self):
        # call the parent class constructor
        super(Classifier, self).__init__()
        # Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        # MaxPool2d(kernel_size, stride, padding)

        # input dimensions: [3, 128, 128]
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1),    # [64, 128, 128]
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),        # [64, 64, 64]

            nn.Conv2d(64, 128, 3, 1, 1),  # [128, 64, 64]
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),        # [128, 32, 32]

            nn.Conv2d(128, 256, 3, 1, 1), # [256, 32, 32]
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),        # [256, 16, 16]

            nn.Conv2d(256, 512, 3, 1, 1), # [512, 16, 16]
            nn.BatchNorm2d(512),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),        # [512, 8, 8]

            nn.Conv2d(512, 512, 3, 1, 1), # [512, 8, 8]
            nn.BatchNorm2d(512),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),        # [512, 4, 4]
        )

        self.fc = nn.Sequential(
            nn.Linear(512*4*4, 1024),
            nn.ReLU(),
            nn.Linear(1024, 512),
            nn.Linear(512, 11)
        )

    def forward(self, x):
        out = self.cnn(x)
        out = out.view(out.size()[0], -1)  # flatten to [batch, 512*4*4]
        return self.fc(out)

3) Training:

  For each epoch, train with the model in train mode, then validate in eval mode. Keep tuning the hyperparameters. Once a relatively good configuration is found, concatenate the train and val sets, retrain on the combined data, and save the resulting model. A sketch of this loop follows below.
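A minimal sketch of the loop, assuming the Classifier above and the loaders from step 1 (the optimizer, learning rate, and epoch count here are illustrative choices, not necessarily those used for the logged runs):

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Classifier().to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # illustrative
num_epoch = 100  # illustrative

for epoch in range(num_epoch):
    model.train()   # train mode: BatchNorm updates running stats, Dropout active
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()    # eval mode: fixed BatchNorm stats, Dropout off
    correct = 0
    with torch.no_grad():
        for x, y in val_loader:
            pred = model(x.to(device))
            correct += (pred.argmax(dim=1).cpu() == y).sum().item()
    print(f"[{epoch+1:03d}/{num_epoch}] Val Acc: {correct / len(val_loader.dataset):.6f}")

# After the hyperparameters are settled, retrain on train+val combined, e.g.:
# train_val_x = np.concatenate((train_x, val_x), axis=0)
# train_val_y = np.concatenate((train_y, val_y), axis=0)

torch.save(model.state_dict(), "model.pth")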

4) Testing:

  Feed the test set to the saved model and save the prediction results; a sketch follows below.
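A minimal sketch of this step, assuming a label-free test_loader built from ImgDataset(test_x, transform=test_transform); the "Id,Category" CSV header follows the usual submission convention and is an assumption here:

model = Classifier().to(device)
model.load_state_dict(torch.load("model.pth"))
model.eval()

predictions = []
with torch.no_grad():
    for x in test_loader:  # no labels in the test loader
        pred = model(x.to(device))
        predictions.extend(pred.argmax(dim=1).cpu().tolist())

with open("predict.csv", "w") as f:
    f.write("Id,Category\n")
    for i, label in enumerate(predictions):
        f.write(f"{i},{label}\n")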

 

Implementation Code

This experiment is HW3 of Prof. Hung-yi Lee's Machine Learning 2020 course at NTU.

The assignment slides:

https://github.com/ziyeZzz/lhy_DL_Hw/blob/master/hw3_slides.pptx

The official reference code:

https://github.com/ziyeZzz/lhy_DL_Hw/blob/master/hw3_CNN.ipynb

My own cleaned-up version of the code:

https://github.com/ziyeZzz/machine_learning/blob/master/classification/Lihongyi-hw3-CNN-food11/hw3_food_classification.ipynb

 

Tuning Process

With the original official implementation, validation accuracy reaches 69%,

and test accuracy also reaches 69%:

[059/3000] 259.75 sec(s) Train Acc: 0.951449 Loss: 0.001090 | Val Acc: 0.696501 loss: 0.012728
[060/3000] 260.27 sec(s) Train Acc: 0.980134 Loss: 0.000519 | Val Acc: 0.686297 loss: 0.013414

 

Test 1) Without the transforms (no data augmentation)
Result: the model overfits very quickly. Test accuracy only reaches 65%.

[029/3000] 11.93 sec(s) Train Acc: 0.993108 Loss: 0.000260 | Val Acc: 0.636443 loss: 0.016154
[030/3000] 11.90 sec(s) Train Acc: 0.994831 Loss: 0.000174 | Val Acc: 0.634694 loss: 0.016754
[031/3000] 11.88 sec(s) Train Acc: 0.965234 Loss: 0.000840 | Val Acc: 0.588630 loss: 0.017759
[032/3000] 11.92 sec(s) Train Acc: 0.948713 Loss: 0.001185 | Val Acc: 0.552187 loss: 0.023395
[033/3000] 11.83 sec(s) Train Acc: 0.938476 Loss: 0.001462 | Val Acc: 0.612536 loss: 0.017101
[034/3000] 11.93 sec(s) Train Acc: 0.957125 Loss: 0.001032 | Val Acc: 0.598834 loss: 0.019387
[035/3000] 11.94 sec(s) Train Acc: 0.975066 Loss: 0.000582 | Val Acc: 0.647813 loss: 0.016691
[036/3000] 12.26 sec(s) Train Acc: 0.993513 Loss: 0.000197 | Val Acc: 0.646356 loss: 0.017769
[037/3000] 11.82 sec(s) Train Acc: 0.997567 Loss: 0.000124 | Val Acc: 0.523324 loss: 0.027722
[038/3000] 11.81 sec(s) Train Acc: 0.916988 Loss: 0.002157 | Val Acc: 0.520700 loss: 0.022304
[039/3000] 11.84 sec(s) Train Acc: 0.925502 Loss: 0.001917 | Val Acc: 0.603207 loss: 0.018548
[040/3000] 11.95 sec(s) Train Acc: 0.981756 Loss: 0.000470 | Val Acc: 0.621574 loss: 0.017166
[041/3000] 11.89 sec(s) Train Acc: 0.998480 Loss: 0.000076 | Val Acc: 0.654519 loss: 0.015744

 

Test 2) Normalize the images to [-1, 1] in the transform (currently they are in [0, 1])
Implementation: inside the transform,
transforms.ToTensor(), # convert the image to a tensor and normalize values to [0, 1]
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)), # (x - 0.5) / 0.5 maps [0, 1] to [-1, 1]

[095/100] 15.06 sec(s) Train Acc: 0.981046 Loss: 0.000468 | Val Acc: 0.233236 loss: 0.060886
[096/100] 14.73 sec(s) Train Acc: 0.986317 Loss: 0.000316 | Val Acc: 0.262391 loss: 0.053779
[097/100] 14.72 sec(s) Train Acc: 0.991689 Loss: 0.000203 | Val Acc: 0.238192 loss: 0.062459
[098/100] 14.66 sec(s) Train Acc: 0.990979 Loss: 0.000229 | Val Acc: 0.239942 loss: 0.062005
[099/100] 14.59 sec(s) Train Acc: 0.995033 Loss: 0.000157 | Val Acc: 0.229446 loss: 0.070151
[100/100] 14.76 sec(s) Train Acc: 0.982870 Loss: 0.000433 | Val Acc: 0.173178 loss: 0.098251
Best acc 0.23

The validation results above became terrible. Why?
I found that I had not normalized the validation set to [-1, 1], so the train and validation data distributions no longer matched. After applying the same normalization to the validation set as well (see the sketch below):
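For reference, a sketch of consistent train/val transforms; the augmentation ops here (flips and small rotations) follow the official notebook, so treat them as an assumption:

import torchvision.transforms as transforms

# Training: augmentation + normalization
train_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomHorizontalFlip(),  # augmentation
    transforms.RandomRotation(15),
    transforms.ToTensor(),              # uint8 [0, 255] -> float [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),  # [0, 1] -> [-1, 1]
])

# Validation/testing: NO augmentation, but the SAME normalization,
# so train and val see the same input distribution
test_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])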
Sure enough, it's much better: validation accuracy can even reach 70%.

[045/100] 15.03 sec(s) Train Acc: 0.912528 Loss: 0.001948 | Val Acc: 0.665598 loss: 0.012324
[046/100] 15.29 sec(s) Train Acc: 0.894283 Loss: 0.002369 | Val Acc: 0.686297 loss: 0.010362
[047/100] 23.08 sec(s) Train Acc: 0.926110 Loss: 0.001675 | Val Acc: 0.655394 loss: 0.013134
[048/100] 24.21 sec(s) Train Acc: 0.941618 Loss: 0.001384 | Val Acc: 0.684548 loss: 0.012320
[049/100] 22.94 sec(s) Train Acc: 0.916278 Loss: 0.001910 | Val Acc: 0.638484 loss: 0.013741
[050/100] 19.86 sec(s) Train Acc: 0.931989 Loss: 0.001593 | Val Acc: 0.650437 loss: 0.013430
[051/100] 15.32 sec(s) Train Acc: 0.944456 Loss: 0.001276 | Val Acc: 0.665889 loss: 0.013202
[052/100] 15.09 sec(s) Train Acc: 0.947395 Loss: 0.001189 | Val Acc: 0.675510 loss: 0.012964
[053/100] 14.99 sec(s) Train Acc: 0.956416 Loss: 0.001025 | Val Acc: 0.635277 loss: 0.015925
[054/100] 15.16 sec(s) Train Acc: 0.932191 Loss: 0.001506 | Val Acc: 0.638484 loss: 0.016541
[055/100] 15.14 sec(s) Train Acc: 0.935029 Loss: 0.001551 | Val Acc: 0.676385 loss: 0.013396
[056/100] 15.22 sec(s) Train Acc: 0.951348 Loss: 0.001114 | Val Acc: 0.684840 loss: 0.012859
[057/100] 15.11 sec(s) Train Acc: 0.930570 Loss: 0.001630 | Val Acc: 0.644606 loss: 0.016157
[058/100] 15.23 sec(s) Train Acc: 0.965437 Loss: 0.000842 | Val Acc: 0.691254 loss: 0.013474
[059/100] 15.89 sec(s) Train Acc: 0.946382 Loss: 0.001232 | Val Acc: 0.667638 loss: 0.013778
[060/100] 17.95 sec(s) Train Acc: 0.968984 Loss: 0.000699 | Val Acc: 0.704665 loss: 0.012827
[061/100] 18.07 sec(s) Train Acc: 0.982060 Loss: 0.000444 | Val Acc: 0.694752 loss: 0.013444
[062/100] 18.07 sec(s) Train Acc: 0.976079 Loss: 0.000604 | Val Acc: 0.661516 loss: 0.018054
Best Acc 0.7

This raises a question: when should images be normalized to [0, 1], and when to [-1, 1]? Whichever convention is chosen, the lesson above is that it must be applied identically to the training, validation, and test data.


Test 3) Does converting to RGB help?

cv2 reads images as BGR by default, so displayed images look bluish. To convert to RGB:

# after reading the image with cv2, reverse the channel order with the index [2, 1, 0]
img = cv2.imread(os.path.join(data_dir, file))
img = img[:, :, [2, 1, 0]]  # BGR -> RGB
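As an aside (my note, not from the original post), OpenCV also has a built-in conversion that does the same channel swap:

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # equivalent BGR -> RGB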

Results with RGB conversion plus the [-1, 1] normalization:

[050/100] 15.41 sec(s) Train Acc: 0.921650 Loss: 0.001845 | Val Acc: 0.695335 loss: 0.011956
[051/100] 15.33 sec(s) Train Acc: 0.944050 Loss: 0.001387 | Val Acc: 0.649563 loss: 0.014059
Best Acc 0.7

By around epoch 50, accuracy already reached 70%. So whether or not we convert to RGB doesn't really matter, presumably because BatchNorm already normalizes each channel anyway.

 

Test 4) Plot a confusion matrix to see which classes get confused

import seaborn as sns
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import numpy as np

# toy example first
sns.set()
f, ax = plt.subplots()
y_true = [0, 0, 1, 2, 1, 2, 0, 2, 2, 0, 1, 1]
y_pred = [1, 0, 1, 2, 1, 0, 0, 2, 2, 0, 1, 1]
C2 = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
print(C2)  # print it to inspect
sns.heatmap(C2, annot=True, ax=ax)  # draw the heatmap

# In the actual code: inside the epoch loop, collect the last epoch's
# validation predictions and labels (val_pred_y / val_real_y start as empty lists)
if epoch == 49:
    val_pred_y += val_pred
    val_real_y += data[1]

# convert the collected tensors into plain arrays
val_real = [int(y.numpy()) for y in val_real_y]
val_pred = [np.argmax(y.cpu().data.numpy()) for y in val_pred_y]

# then plot
sns.set()
f, ax = plt.subplots(figsize=(12, 10))
C2 = confusion_matrix(val_real, val_pred, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# print(C2)  # print it to inspect
sns.heatmap(C2, annot=True, ax=ax, linewidth=.5, cmap='YlGnBu')  # draw the heatmap

ax.set_title('confusion matrix')  # title
ax.set_xlabel('predict')  # x axis
ax.set_ylabel('true')  # y axis

# the resulting heatmap is shown in the figure below
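As a small extra step (my addition, assuming C2 from the code above), row-normalizing the matrix turns raw counts into per-class rates, which makes the overlaps easier to compare across classes of different sizes:

C2_norm = C2 / C2.sum(axis=1, keepdims=True)  # each row sums to 1; diagonal = per-class recall
sns.heatmap(C2_norm, annot=True, fmt='.2f', cmap='YlGnBu')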

Observations:
Seafood overlaps with meat, and bread overlaps with dessert and fried food, among others.

Looking at the actual images, these classes really do look somewhat alike.

 

All the experiments above point to the same thing: the model is overfitting the training set.


Test 5) Try adding dropout
Adding dropout indeed helps: validation accuracy reaches about 75%.

[482/500] 15.93 sec(s) Train Acc: 0.994932 Loss: 0.000161 | Val Acc: 0.734402 loss: 0.013944
[483/500] 16.05 sec(s) Train Acc: 0.995844 Loss: 0.000114 | Val Acc: 0.748105 loss: 0.013561
[484/500] 16.03 sec(s) Train Acc: 0.996452 Loss: 0.000100 | Val Acc: 0.757726 loss: 0.013194
[485/500] 16.10 sec(s) Train Acc: 0.997162 Loss: 0.000073 | Val Acc: 0.744023 loss: 0.015371
Best Acc 0.75

Current approach:

Add dropout before nn.ReLU in the CNN blocks, and also add dropout in the FC layers, but with different rates: a smaller rate in the CNN blocks (0.1) and a larger one in the FC layers (0.3), since the FC layers have more parameters.

# dropout before ReLU in each CNN block
self.cnn = nn.Sequential(
    nn.Conv2d(3, 64, 3, 1, 1),  # [64, 128, 128]
    nn.BatchNorm2d(64),
    nn.Dropout(0.1),
    nn.ReLU(),
    nn.MaxPool2d(2, 2, 0),      # [64, 64, 64]
    # ... the remaining conv blocks follow the same pattern ...
)

# in the FC layers
self.fc = nn.Sequential(
    nn.Linear(512*4*4, 1024),
    nn.Dropout(0.3),
    nn.ReLU(),
    nn.Linear(1024, 512),
    nn.Dropout(0.3),
    nn.Linear(512, 11)
)

From what I've read online, dropout is generally used in FC layers and rarely in convolutional layers. In a conv layer, neighboring pixels share much of the same information, so if some activations are dropped, the information they carry can still get through via the neighboring activations that remain active. In short, adding dropout to the CNN blocks doesn't accomplish much.

So dropout in convolutional layers mainly adds robustness to noisy inputs, rather than producing the model-averaging effect observed in fully connected layers.
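A related option (my note, not from the original post): PyTorch's nn.Dropout2d zeroes entire feature maps rather than individual activations, which targets exactly the neighboring-pixel correlation described above. A sketch of a conv block using it:

nn.Sequential(
    nn.Conv2d(3, 64, 3, 1, 1),
    nn.BatchNorm2d(64),
    nn.Dropout2d(0.1),  # drops whole channels, so correlated neighbors cannot leak the dropped info
    nn.ReLU(),
    nn.MaxPool2d(2, 2, 0),
)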

 

Test 6) Try dropout only in the FC layers

[496/500] 15.32 sec(s) Train Acc: 0.994628 Loss: 0.000129 | Val Acc: 0.750729 loss: 0.022423
[497/500] 15.36 sec(s) Train Acc: 0.996351 Loss: 0.000109 | Val Acc: 0.753936 loss: 0.021338
[498/500] 15.27 sec(s) Train Acc: 0.995642 Loss: 0.000103 | Val Acc: 0.748980 loss: 0.022345
[499/500] 15.12 sec(s) Train Acc: 0.996452 Loss: 0.000098 | Val Acc: 0.756268 loss: 0.022865
[500/500] 15.15 sec(s) Train Acc: 0.995642 Loss: 0.000205 | Val Acc: 0.756851 loss: 0.021429

The results are about the same as with dropout in the CNN blocks too. It seems adding dropout in the CNN really makes little difference either way.

 

Test 7) Further tuning of the dropout rates and placement

Set both FC dropout rates to 0.5, or keep only the dropout before the ReLU and raise its rate to 0.6:

self.fc = nn.Sequential(
    nn.Linear(512*4*4, 1024),
    nn.Dropout(0.6),
    nn.ReLU(),
    nn.Linear(1024, 512),
    nn.Linear(512, 11)
)

Validation accuracy stays around 74% in both cases; no real improvement.

 

General steps for reducing overfitting:

  1. Add more data
  2. Use data augmentation
  3. Use a model architecture that generalizes better
  4. Add regularization (most often dropout; L1/L2 regularization is also an option)
  5. Reduce model complexity

So let's try the fifth option next and reduce the model complexity (a hypothetical sketch follows):
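The post ends before showing this experiment, so the following is only a hypothetical sketch of one way to cut capacity: halve every channel width and shrink the FC layers. All sizes here are my own illustrative choices, not the author's:

# hypothetical lower-capacity variant: channel widths halved (64->32, ..., 512->256)
self.cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, 1, 1),    # [32, 128, 128]
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.MaxPool2d(2, 2, 0),        # [32, 64, 64]
    # ... blocks 2-4 as before, but with 64, 128, 256 channels ...
    nn.Conv2d(256, 256, 3, 1, 1), # [256, 8, 8]
    nn.BatchNorm2d(256),
    nn.ReLU(),
    nn.MaxPool2d(2, 2, 0),        # [256, 4, 4]
)
self.fc = nn.Sequential(
    nn.Linear(256*4*4, 512),
    nn.Dropout(0.5),
    nn.ReLU(),
    nn.Linear(512, 11)
)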


 

