Study Notes 10: Four-Class Weather Recognition (ImageFolder Data Preprocessing, Dropout Layers, BN Layers)


Importing the required packages

import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
import os
import shutil
%matplotlib inline

Dataset preprocessing approach

All images in the four-weather dataset sit in a single folder, with filenames composed of the weather type plus an image index.
The four weather classes are cloudy, rain, shine, and sunrise.
ImageFolder handles datasets laid out as one folder each for train and test, with one subfolder per class inside each.
So the first step is to create that folder structure and copy the corresponding images into it.
The two packages used for this, os and shutil, ship with Python and need no installation.

Splitting the data

First, create a new base folder that contains two subfolders, train and test:

base_dir = r"E:/datasets2/29-42/29-42/dataset2/4weather"
if not os.path.isdir(base_dir):
    os.mkdir(base_dir)
    train_dir = os.path.join(base_dir, 'train')
    test_dir = os.path.join(base_dir, 'test')
    os.mkdir(train_dir)
    os.mkdir(test_dir)

Next, inside both train and test, create one folder named after each of the four weather classes:

species = ['cloudy', 'rain', 'shine', 'sunrise']
for train_or_test in ['train', 'test']:
    for spec in species:
        os.mkdir(os.path.join(base_dir, train_or_test, spec))

Finally, copy the images from the original folder into the matching class folders.
The class of each image can be read directly from its filename.
We also have to decide the train/test split ourselves; here, every image whose index is divisible by 5 goes to the test set and the rest go to the training set.

image_dir = r'E:\datasets2\29-42\29-42\dataset2\dataset2'
for i, img in enumerate(os.listdir(image_dir)):
    for spec in species:
        if spec in img:
            s = os.path.join(image_dir, img)
            if i % 5 == 0:
                d = os.path.join(base_dir, 'test', spec, img)
            else:
                d = os.path.join(base_dir, 'train', spec, img)
            shutil.copy(s, d)

Once that is done, we can check how many images ended up in each folder:

for train_or_test in ['train', 'test']:
    for spec in species:
        print(train_or_test, spec, len(os.listdir(os.path.join(base_dir, train_or_test, spec))))

Output: the per-folder image counts (shown as a screenshot in the original post).

Loading the data and preprocessing

transformation = transforms.Compose([
    transforms.Resize((96, 96)),                                        # resize every image to 96x96
    transforms.ToTensor(),
    transforms.Normalize(mean = [0.5, 0.5, 0.5], std = [0.5, 0.5, 0.5]) # standardize: maps pixel values to roughly [-1, 1]
])

train_ds = datasets.ImageFolder(
    train_dir,
    transform = transformation
)

test_ds = datasets.ImageFolder(
    test_dir,
    transform = transformation
)

train_dl = torch.utils.data.DataLoader(train_ds, batch_size = 16, shuffle = True)
test_dl = torch.utils.data.DataLoader(test_ds, batch_size = 16)

One thing to note: ImageFolder has already generated the labels automatically from the subfolder names, which can be verified as follows:
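A quick check (the mapping below is what ImageFolder produces for alphabetically sorted class folders; the exact batch contents depend on your data):

print(train_ds.class_to_idx)   # {'cloudy': 0, 'rain': 1, 'shine': 2, 'sunrise': 3}

imgs, labels = next(iter(train_dl))
print(imgs.shape)              # torch.Size([16, 3, 96, 96]) -> (batch, channels, height, width)
print(labels[:8])              # the integer labels of the first 8 images in the batch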

Model definition and training

Two new layer types appear here: the Dropout layer and the BN (batch normalization) layer.
During training, a Dropout layer randomly deactivates a fraction of the neurons. Its effects: 1. it implicitly averages over many thinned sub-networks; 2. it reduces co-adaptation between neurons; 3. it plays a role analogous to that of sex in biological evolution, forcing each unit to be useful alongside many random sets of partners.
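A minimal sketch of this behavior: in training mode, nn.Dropout(p) zeroes each element with probability p and scales the survivors by 1/(1-p), while in evaluation mode it is the identity.

drop = nn.Dropout(0.5)
x = torch.ones(8)

drop.train()
print(drop(x))   # about half the entries are 0, the rest are scaled up to 2.0

drop.eval()
print(drop(x))   # all ones: dropout is disabled at evaluation time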

Standardization and normalization:
Normalization: maps the data into the (0, 1) interval.
Standardization: subtracts the mean so the data is centered at 0, then divides by the standard deviation so the data has unit standard deviation.
Batch normalization: standardizes the data not only before it enters the model, but also after every transformation inside the network.
Batch normalization addresses vanishing and exploding gradients; it is a training optimization technique.
Benefits of batch normalization: it has a regularizing effect, improves the model's generalization, and allows higher learning rates, which speeds up convergence.
Batch normalization procedure: 1. compute the mean of each training mini-batch; 2. compute the variance of each training mini-batch; 3. standardize the data with these statistics; 4. learn the parameters γ and β; 5. obtain the output y by scaling and shifting the standardized values with γ and β.
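In formulas (the standard batch-norm definition, where m is the batch size and ε is a small constant for numerical stability):

$$
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2
$$
$$
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
$$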

The model definition:

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3)
        self.bn1 = nn.BatchNorm2d(16)
        self.pool = nn.MaxPool2d((2, 2))
        self.conv2 = nn.Conv2d(16, 32, 3)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 64, 3)
        self.bn3 = nn.BatchNorm2d(64)
        self.drop = nn.Dropout(0.5)
        self.linear_1 = nn.Linear(64 * 10 * 10, 1024)
        self.bn_l1 = nn.BatchNorm1d(1024)
        self.linear_2 = nn.Linear(1024, 256)
        self.bn_l2 = nn.BatchNorm1d(256)
        self.linear_3 = nn.Linear(256, 4)
    def forward(self, input):
        x = F.relu(self.conv1(input))
        x = self.pool(x)
        x = self.bn1(x)
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        x = self.bn2(x)
        x = F.relu(self.conv3(x))
        x = self.pool(x)
        x = self.bn3(x)
        # print(x.size())
        x = x.view(-1, 64 * 10 * 10)
        x = F.relu(self.linear_1(x))
        x = self.bn_l1(x)
        x = self.drop(x)
        x = F.relu(self.linear_2(x))
        x = self.bn_l2(x)
        x = self.drop(x)
        x = self.linear_3(x)
        return x

Note the placement of the layers: the BN layers go after the pooling layers, and in the fully connected part between the activation and the Dropout layers.
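The 64 * 10 * 10 in the flatten step follows from tracing the feature-map size: each 3x3 convolution without padding shrinks the map by 2 pixels, and each 2x2 max-pool halves it (rounding down). A quick sanity check (uncommenting the print(x.size()) in forward gives the same answer):

size = 96
for _ in range(3):          # three conv (3x3, no padding) + pool (2x2) stages
    size = (size - 2) // 2
print(size)                 # 10 -> the flattened vector has 64 * 10 * 10 elements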

Model training

loss_func = torch.nn.CrossEntropyLoss()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def fit(epoch, model, trainloader, testloader):
    # note: this uses the global `optimizer` created after the model is instantiated below
    correct = 0
    total = 0
    running_loss = 0
    
    model.train()  # switch to training mode: Dropout and BN behave as in training
    for x, y in trainloader:
        x, y = x.to(device), y.to(device)
        y_pred = model(x)
        loss = loss_func(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            y_pred = torch.argmax(y_pred, dim = 1)
            correct += (y_pred == y).sum().item()
            total += y.size(0)
            running_loss += loss.item()

    epoch_acc = correct / total
    epoch_loss = running_loss / len(trainloader.dataset)
    
    test_correct = 0
    test_total = 0
    test_running_loss = 0
    
    model.eval() # switch to evaluation mode; needed whenever the model contains Dropout or BN layers
    with torch.no_grad():
        for x, y in testloader:
            x, y = x.to(device), y.to(device)
            y_pred = model(x)
            loss = loss_func(y_pred, y)
            y_pred = torch.argmax(y_pred, dim = 1)
            test_correct += (y_pred == y).sum().item()
            test_total += y.size(0)
            test_running_loss += loss.item()
    epoch_test_acc = test_correct / test_total
    epoch_test_loss = test_running_loss / len(testloader.dataset)
    
    print('epoch: ', epoch, 
          'loss: ', round(epoch_loss, 3),
          'accuracy: ', round(epoch_acc, 3),
          'test_loss: ', round(epoch_test_loss, 3),
          'test_accuracy: ', round(epoch_test_acc, 3))
    
    return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

Note that the training phase and the evaluation phase must be kept apart with model.train() and model.eval(); this matters whenever the model contains Dropout or BN layers, because both behave differently in each mode.
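As a small illustration, these calls just flip the training flag on the module and all of its children, which Dropout and BatchNorm consult in their forward pass (in eval mode, BatchNorm switches from batch statistics to its running statistics):

m = Model()
m.eval()
print(m.training, m.bn1.training)   # False False: BN uses running statistics, Dropout is off
m.train()
print(m.training, m.drop.training)  # True True: back to training behavior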

model = Model()
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr = 0.001)
epochs = 30
train_loss = []
train_acc = []
test_loss = []
test_acc = []
for epoch in range(epochs):
    epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
    train_loss.append(epoch_loss)
    train_acc.append(epoch_acc)
    test_loss.append(epoch_test_loss)
    test_acc.append(epoch_test_acc)

Training results
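The original post shows the resulting curves as a screenshot; they can be reproduced from the collected lists with matplotlib, for example:

plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(range(1, epochs + 1), train_loss, label='train_loss')
plt.plot(range(1, epochs + 1), test_loss, label='test_loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(range(1, epochs + 1), train_acc, label='train_acc')
plt.plot(range(1, epochs + 1), test_acc, label='test_acc')
plt.legend()
plt.show()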

