Deep Learning with PyTorch Hands-On Introduction (5): Classification and Optimization [Digit Recognition Example]


Study notes

1. Classification Problems

1.1 Binary Classification

  • \(f:x\rightarrow p(y=1|x)\)

    • \(p(y=1|x)\): interpreted as the probability that y = 1 given x; if the probability > 0.5, predict 1, otherwise predict 0 (a minimal sketch follows this list)

    • \(p_{\theta}(y|x)\): given x, the predicted distribution output by the model with parameters \(\theta\)

    • \(p_{r}(y|x)\): given x, the true distribution
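
A minimal sketch of this thresholding rule. The single linear layer and random inputs here are illustrative assumptions, not part of the original notes:

import torch

x = torch.randn(4, 784)              # a batch of 4 flattened images (made-up input)
w = torch.randn(1, 784)              # an assumed single linear layer
prob = torch.sigmoid(x @ w.t())      # p(y=1|x), squashed into [0, 1]
pred = (prob > 0.5).long()           # predict 1 if p(y=1|x) > 0.5, otherwise 0
print(prob.shape)                    # torch.Size([4, 1])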

1.2 Multi-Class Classification

  • \(f:x\rightarrow p(y|x)\)

    • \([p(y=0|x),p(y=1|x),...,p(y=9|x)]\)
  • \(p(y|x)\in [0,1]\)

  • \(\sum_{i=0}^{9}p(y=i|x)=1\)

\[p_i = \frac{e^{a_i}}{\sum_{k=1}^{N} e^{a_k}} \]
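
Here \(p_i\) is the softmax of the logits \(a_i\), which satisfies both constraints above. A quick sanity check that F.softmax matches the formula; the random logits are illustrative:

import torch
from torch.nn import functional as F

a = torch.randn(10)                          # logits a_1..a_N (random, for illustration)
p_manual = torch.exp(a) / torch.exp(a).sum() # the formula, computed by hand
p = F.softmax(a, dim=0)
print(torch.allclose(p, p_manual))           # True
print(p.sum())                               # tensor(1.), up to floating-point error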

2. Cross-Entropy

2.1 Information Entropy

  • Describes the uncertainty of a random event.

\[H(p)=-\sum _{x\in X}p(x)\log p(x) \]

  • Describes a distribution: the higher the entropy, the more information the random variable carries (the more uncertain it is).
import torch

# uniform distribution: maximum uncertainty, hence maximum entropy
a = torch.full([4],1/4.)
print(-(a*torch.log2(a)).sum())               # tensor(2.)

# skewed distribution: lower entropy
b = torch.tensor([0.1,0.1,0.1,0.7])
print(-(b*torch.log2(b)).sum())               # tensor(1.3568)

# nearly deterministic distribution: entropy close to 0
c = torch.tensor([0.001,0.001,0.001,0.999])
print(-(c*torch.log2(c)).sum())               # tensor(0.0313)

2.2 Cross-Entropy

  • Formula:

\[H(p,q)=-\sum _{x\in X}p(x)\log q(x) \]

\[H(p)=-\sum _{x\in X}p(x)\log p(x) \]

\[D_{KL}(p\|q) = H(p,q) - H(p) \]

  • KL divergence = cross-entropy H(p,q) - information entropy H(p): the extra information needed when using the distribution q to model the true distribution p.

  • When p = q, H(p,q) = H(p), so the KL divergence is 0.

  • For a one-hot encoded label, entropy = H(p) = \(-1\cdot\log 1 = 0\), so minimizing the cross-entropy is equivalent to minimizing the KL divergence; a numerical check of the relation follows.
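
A quick numerical check of \(D_{KL}(p\|q) = H(p,q) - H(p)\); the two distributions below are hand-picked for illustration:

import torch

p = torch.tensor([0.25, 0.25, 0.25, 0.25])       # "true" distribution (illustrative)
q = torch.tensor([0.1, 0.1, 0.1, 0.7])           # approximating distribution

H_p  = -(p * torch.log2(p)).sum()                # information entropy H(p)
H_pq = -(p * torch.log2(q)).sum()                # cross-entropy H(p, q)
D_kl = (p * torch.log2(p / q)).sum()             # KL divergence D_KL(p||q)

print(torch.allclose(D_kl, H_pq - H_p))          # True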

2.3 Cross-Entropy for Binary Classification

  • P(i) denotes the true probability of class i, and Q(i) the predicted probability of class i.

\(H(p, q) = -\sum _{i\in \{cat,dog\}}P(i)\log Q(i)\)

\(H(p, q) = -P(cat)\log Q(cat) - P(dog)\log Q(dog)\)

\(H(p, q) = -\sum _{i=1}^{n}\left[y_i\log(p_i)+(1-y_i)\log(1-p_i)\right]\)
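
The last formula is what F.binary_cross_entropy computes. A sketch with made-up probabilities and labels; note that PyTorch averages over samples by default rather than summing:

import torch
from torch.nn import functional as F

p = torch.tensor([0.9, 0.2, 0.8])     # predicted p(y=1|x), illustrative values
y = torch.tensor([1., 0., 1.])        # ground-truth labels

manual = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
print(manual)                                # tensor(0.1839)
print(F.binary_cross_entropy(p, y))          # tensor(0.1839)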

import torch
from torch.nn import functional as F

x = torch.randn(1,784)
w = torch.randn(10,784)
logits = x@w.t()                                      # shape=torch.Size([1,10])

# Method 1 (recommended):
# In PyTorch, F.cross_entropy already combines softmax + log + nll_loss, so pass in the raw logits
# arguments: (prediction, label)
print(F.cross_entropy(logits, torch.tensor([3])))     # tensor(77.1405)

# Method 2 (easy to get wrong):
# if you insist on computing softmax + log yourself
pred = F.softmax(logits, dim=1)                       # shape=torch.Size([1,10])
pred_log = torch.log(pred)

print(F.nll_loss(pred_log, torch.tensor([3])))        # tensor(77.1405)
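
A numerically safer way to write method 2 is F.log_softmax, which fuses the softmax and log steps and avoids overflow. This is standard PyTorch API, though not used in the original notes; the stand-in logits below are illustrative:

import torch
from torch.nn import functional as F

logits = torch.randn(1, 10)                       # stand-in logits (illustrative)
pred_log = F.log_softmax(logits, dim=1)           # more stable than torch.log(F.softmax(...))
print(F.nll_loss(pred_log, torch.tensor([3])))    # matches F.cross_entropy(logits, torch.tensor([3]))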

3. Multi-Class Classification in Practice

  • Recognizing handwritten digits (MNIST)
import  torch
import  torch.nn as nn
import  torch.nn.functional as F
import  torch.optim as optim
from    torchvision import datasets, transforms

# hyperparameters
batch_size=200
learning_rate=0.01
epochs=10

# training set
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,          # train=True selects the training split
                   transform=transforms.Compose([                 # transform: preprocessing pipeline
                       transforms.ToTensor(),                     # convert to a Tensor
                       transforms.Normalize((0.1307,), (0.3081,)) # standardize (subtract mean, divide by standard deviation)
                   ])),
    batch_size=batch_size, shuffle=True)                          # batch the data with a leading batch dimension; shuffle=True randomizes the order



# test set
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size, shuffle=True)

# weights w and biases b for the three linear layers
w1, b1 = torch.randn(200, 784, requires_grad=True),\
         torch.zeros(200, requires_grad=True)             # w1 has shape (out, in)
w2, b2 = torch.randn(200, 200, requires_grad=True),\
         torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True),\
         torch.zeros(10, requires_grad=True)

# apply Kaiming initialization to the weights
torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)


def forward(x):
    x = x@w1.t() + b1
    x = F.relu(x)
    x = x@w2.t() + b2
    x = F.relu(x)
    x = x@w3.t() + b3
    x = F.relu(x)    # note: a ReLU on the output logits is unusual and normally omitted; kept here to match the results below
    return x


# SGD optimizer: specify the parameters to optimize and the learning rate
optimizer = optim.SGD([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = nn.CrossEntropyLoss()

for epoch in range(epochs):

    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)          # flatten the 2-D images to [batch_size, 784]

        logits = forward(data)               # forward pass
        loss = criteon(logits, target)       # nn.CrossEntropyLoss() applies softmax internally, so it takes raw logits

        optimizer.zero_grad()                # clear the old gradients
        loss.backward()                      # backward pass to compute gradients
        optimizer.step()                     # optimizer update step

        if batch_idx % 100 == 0:             # print progress every 100 batches
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                       100. * batch_idx / len(train_loader), loss.item()))


    test_loss = 0
    correct = 0                                         # number of correctly classified samples
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()     # .item() extracts the scalar value of the loss

        pred = logits.data.max(dim=1)[1]                # equivalently: pred = logits.argmax(dim=1)
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)               # average the accumulated per-batch losses over the dataset size
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
Result:
Train Epoch: 0 [0/60000 (0%)]	Loss: 2.551489
Train Epoch: 0 [20000/60000 (33%)]	Loss: 0.937205
Train Epoch: 0 [40000/60000 (67%)]	Loss: 0.664578

Test set: Average loss: 0.0030, Accuracy: 8060/10000 (81%)

Train Epoch: 1 [0/60000 (0%)]	Loss: 0.594552
Train Epoch: 1 [20000/60000 (33%)]	Loss: 0.534821
Train Epoch: 1 [40000/60000 (67%)]	Loss: 0.676503

Test set: Average loss: 0.0026, Accuracy: 8277/10000 (83%)

Train Epoch: 2 [0/60000 (0%)]	Loss: 0.393263
Train Epoch: 2 [20000/60000 (33%)]	Loss: 0.424480
Train Epoch: 2 [40000/60000 (67%)]	Loss: 0.560588

Test set: Average loss: 0.0024, Accuracy: 8359/10000 (84%)

Train Epoch: 3 [0/60000 (0%)]	Loss: 0.559309
Train Epoch: 3 [20000/60000 (33%)]	Loss: 0.547236
Train Epoch: 3 [40000/60000 (67%)]	Loss: 0.537494

Test set: Average loss: 0.0023, Accuracy: 8423/10000 (84%)

Train Epoch: 4 [0/60000 (0%)]	Loss: 0.549808
Train Epoch: 4 [20000/60000 (33%)]	Loss: 0.405319
Train Epoch: 4 [40000/60000 (67%)]	Loss: 0.368419

Test set: Average loss: 0.0022, Accuracy: 8477/10000 (85%)

Train Epoch: 5 [0/60000 (0%)]	Loss: 0.371384
Train Epoch: 5 [20000/60000 (33%)]	Loss: 0.409493
Train Epoch: 5 [40000/60000 (67%)]	Loss: 0.354021

Test set: Average loss: 0.0021, Accuracy: 8523/10000 (85%)

Train Epoch: 6 [0/60000 (0%)]	Loss: 0.448938
Train Epoch: 6 [20000/60000 (33%)]	Loss: 0.439384
Train Epoch: 6 [40000/60000 (67%)]	Loss: 0.476088

Test set: Average loss: 0.0020, Accuracy: 8548/10000 (85%)

Train Epoch: 7 [0/60000 (0%)]	Loss: 0.401981
Train Epoch: 7 [20000/60000 (33%)]	Loss: 0.405808
Train Epoch: 7 [40000/60000 (67%)]	Loss: 0.492355

Test set: Average loss: 0.0020, Accuracy: 8575/10000 (86%)

Train Epoch: 8 [0/60000 (0%)]	Loss: 0.385034
Train Epoch: 8 [20000/60000 (33%)]	Loss: 0.367822
Train Epoch: 8 [40000/60000 (67%)]	Loss: 0.333447

Test set: Average loss: 0.0020, Accuracy: 8593/10000 (86%)

Train Epoch: 9 [0/60000 (0%)]	Loss: 0.349438
Train Epoch: 9 [20000/60000 (33%)]	Loss: 0.390028
Train Epoch: 9 [40000/60000 (67%)]	Loss: 0.390438

Test set: Average loss: 0.0019, Accuracy: 8604/10000 (86%)
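
One possible refinement of the evaluation loop above, offered as a suggestion rather than part of the original script: wrapping it in torch.no_grad() skips building the autograd graph at test time, saving memory and computation. A drop-in replacement for the test loop (it reuses forward, criteon, test_loader, test_loss, and correct from the script above):

with torch.no_grad():                       # gradients are not needed for evaluation
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()
        pred = logits.argmax(dim=1)
        correct += pred.eq(target).sum()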

