PyTorch Study Notes (2) ---- Building a Neural Network


These notes record how to build LeNet-5 with PyTorch. The main steps are: build the network -> forward pass -> define the loss and optimizer -> train.

# -*- coding: utf-8 -*-
# All code and comments are from the book <<深度學習框架PyTorch入門與實踐>>
# Code url : https://github.com/zhouzhoujack/pytorch-book
# lesson_2 : Neural network of PT(Pytorch)

# torch.nn is a modular interface designed for neural networks. nn is built on top of Autograd and can be used to define and run neural networks.
# When defining a network, subclass nn.Module and implement its forward method; put the layers that have learnable parameters in the constructor __init__.
# The LeNet-5 network structure is defined below.

import torch as t
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        # A subclass of nn.Module must call the parent class constructor inside its own constructor
        # The line below is equivalent to nn.Module.__init__(self)
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)             # conv layer: 1 input channel (grayscale image), 6 output channels, 5x5 kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(in_features=16 * 5 * 5, out_features=120, bias=True)       # fully connected layer: y = x * A^T + b
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(input=F.relu(self.conv1(x)), kernel_size=(2, 2))        # conv -> activation -> pooling
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # view() can only be used on contiguous tensors, i.e. tensors stored contiguously in memory.
        # After calling transpose() or permute(), a tensor's memory layout is no longer contiguous
        # and view() cannot be used (see the short demo after the script below).
        # tensor.view() plays the same role as np.reshape()
        x = x.view(x.size()[0], -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
"""
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
"""
net = Net()

# The learnable parameters of the network are returned by net.parameters(); net.named_parameters() returns the learnable parameters together with their names
"""
conv1.weight : torch.Size([6, 1, 5, 5])
conv1.bias : torch.Size([6])
conv2.weight : torch.Size([16, 6, 5, 5])
conv2.bias : torch.Size([16])
fc1.weight : torch.Size([120, 400])
fc1.bias : torch.Size([120])
fc2.weight : torch.Size([84, 120])
fc2.bias : torch.Size([84])
fc3.weight : torch.Size([10, 84])
fc3.bias : torch.Size([10])
"""
# parameter information of the network
# params = list(net.parameters())
# for name,parameters in net.named_parameters():
#     print(name,':',parameters.size())

if __name__ == '__main__':
    """
    The computation graph is as follows:
    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d  
          -> view -> linear -> relu -> linear -> relu -> linear 
          -> MSELoss
          -> loss

    """
    input = t.randn(1, 1, 32, 32)
    output = net(input)
    # >>torch.arange(1., 4.)
    # >>1 2 3 [torch.FloatTensor of size 3]
    # if the trailing '.' is omitted, the tensor dtype becomes int64 instead of float
    target = t.arange(0., 10.).view(1, 10)
    criterion = nn.MSELoss()
    loss = criterion(output, target)
    print(loss)

    # Run .backward() and compare the grad before and after the call
    net.zero_grad()  # zero the gradients of all learnable parameters in net
    print('conv1.bias gradient before backward:')
    print(net.conv1.bias.grad)
    loss.backward()
    print('conv1.bias gradient after backward:')
    print(net.conv1.bias.grad)

    # Optimizer
    # torch.optim implements most of the optimization methods used in deep learning, such as RMSProp, Adam and SGD
    # After backpropagation has computed the gradients of all parameters, an optimization method is still needed to update the network's weights. For example, the update rule of stochastic gradient descent (SGD) is:
    # weight = weight - learning_rate * gradient
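    # A rough manual equivalent of one SGD step (a sketch only; learning_rate is a
    # placeholder that is not defined in the original code):
    # for p in net.parameters():
    #     p.data.sub_(p.grad.data * learning_rate)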
    optimizer = optim.SGD(net.parameters(), lr=0.01)
    # During training:
    # first zero the gradients (same effect as net.zero_grad())
    optimizer.zero_grad()
    # compute the loss
    output = net(input)
    loss = criterion(output, target)
    # backward pass
    loss.backward()
    # update the parameters
    optimizer.step()
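
In a real training run, the zero_grad -> forward -> loss -> backward -> step sequence above is repeated for every mini-batch over several epochs. Below is a minimal sketch of such a loop; train_loader is a hypothetical DataLoader that yields (inputs, targets) batches compatible with the criterion, and is not part of the original code:

for epoch in range(2):                      # the number of epochs here is arbitrary
    for inputs, targets in train_loader:    # inputs: (N, 1, 32, 32); targets: whatever the criterion expects
        optimizer.zero_grad()               # clear gradients accumulated from the previous step
        outputs = net(inputs)               # forward pass
        loss = criterion(outputs, targets)  # compute the loss for this batch
        loss.backward()                     # backpropagate
        optimizer.step()                    # update the parameters
    print('epoch %d, last batch loss: %.4f' % (epoch, loss.item()))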

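As mentioned in the comments inside forward(), view() only works on tensors that are contiguous in memory. A short illustration of this behaviour (not part of the original notes):

import torch as t

a = t.randn(2, 3)
b = a.t()                     # transpose shares storage with a, but b is no longer contiguous
print(b.is_contiguous())      # False
# b.view(6)                   # would raise a RuntimeError because b is not contiguous
c = b.contiguous().view(6)    # contiguous() first copies the data into a contiguous layout
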
nn.Conv2d() explained

torch.nn.Conv2d(in_channels, 	# input channels
                out_channels, 	# output channels
                kernel_size, 	# convolution kernel size
                stride=1, 		# stride of the convolution
                padding=0, 		# zero-padding added to both sides of each spatial dimension
                dilation=1, 	# spacing between kernel elements
                groups=1, 		# number of blocked connections from input channels to output channels
                bias=True		# default=True, add a learnable bias to the output
               )

The input to Conv2d has shape (N, C_in, H_in, W_in) and the output has shape (N, C_out, H_out, W_out), where

H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
W_out = floor((W_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)

Feature map size calculation

Size of feature map = (W - F + 2P)/S + 1

W : width of the input image

F : width of the convolution kernel

P : number of zero-padding pixels added to the border

S : stride
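
The same formula also applies to pooling layers. A small helper function (hypothetical, not part of the original notes) makes it easy to trace the 32x32 LeNet-5 input through the layers defined above and see where the 16 * 5 * 5 input size of fc1 comes from:

def feature_map_size(w, f, p=0, s=1):
    # (W - F + 2P) / S + 1, using floor division
    return (w - f + 2 * p) // s + 1

print(feature_map_size(32, 5))        # 28  (conv1, 5x5 kernel)
print(feature_map_size(28, 2, s=2))   # 14  (2x2 max pooling, stride 2)
print(feature_map_size(14, 5))        # 10  (conv2)
print(feature_map_size(10, 2, s=2))   # 5   -> 16 channels * 5 * 5 = 400 inputs to fc1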

For example:

Input: (227, 227, 3)

Conv layer with kernel_size = 11

stride = 4

padding = 0

n (number of kernels) = 96

Output: (55, 55, 96)

(227 - 11 + 2*0) / 4 + 1 = 55
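
A quick way to sanity-check this arithmetic (a sketch, not from the original article) is to push a dummy tensor through an nn.Conv2d configured the same way; note that PyTorch expects channels-first (N, C, H, W) input:

import torch as t
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4, padding=0)
x = t.randn(1, 3, 227, 227)      # one 227x227 RGB image in (N, C, H, W) layout
print(conv(x).shape)             # torch.Size([1, 96, 55, 55])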


References

nn.Conv2d() explained: https://www.aiuai.cn/aifarm618.html

