Deep Learning (PyTorch) - 1. Automatic Image Classification with a Simple Neural Network


This is an official PyTorch example.

Official tutorial: http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py

The code is as follows:

# coding=utf-8
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]:
# Normalize computes (x - mean) / std per channel, so 0 maps to -1 and 1 maps to 1.
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                                ])

# Training set: loads all 50,000 training images from the cifar-10-batches-py
# folder under root into memory. If download=True, the data is downloaded
# from the internet and unpacked automatically.
trainset = torchvision.datasets.CIFAR10(root=r'E:\Face Recognition\cifar-10-python', train=True, download=False, transform=transform)

# Split the 50,000 training images into 12,500 mini-batches of 4 images each.
# shuffle=True reshuffles the data at every epoch. num_workers=2 would use two
# worker processes to load the data (removed here; see note 1 below).
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True)

# Test set: loads all 10,000 test images from the same folder into memory;
# download=True would likewise fetch and unpack the data automatically.
testset = torchvision.datasets.CIFAR10(root=r'E:\Face Recognition\cifar-10-python', train=False, download=False, transform=transform)

# Split the 10,000 test images into 2,500 mini-batches of 4 images each.
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # conv1 is a 2D image convolution: 3 input channels (a color image),
        # 6 output feature maps, 5x5 square kernel
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # flatten the 16 feature maps of 5x5 each
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()

criterion = nn.CrossEntropyLoss()  # cross-entropy loss function
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)  # SGD (stochastic gradient descent) with learning rate 0.001 and momentum 0.9

for epoch in range(10):  # loop over the dataset ten times

    running_loss = 0.0
    # enumerate(sequence, start=0): i is the batch index, data is the batch
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is structured as [4x3x32x32 tensor, tensor of 4 labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)  # cross-entropy loss between outputs and labels
        loss.backward()   # backpropagation
        optimizer.step()  # update the parameters with SGD

        # print the average loss once every 2000 mini-batches
        running_loss += loss.item()  # item() extracts the scalar value of the loss tensor
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

correct = 0
total = 0
with torch.no_grad():  # no gradients are needed for evaluation
    for data in testloader:
        images, labels = data
        outputs = net(images)
        # outputs is a 4x10 tensor; torch.max over dim 1 returns, for each row,
        # the maximum value and its column index as two 1-D tensors
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        # compare the two 1-D tensors element-wise (1 where equal, 0 where not),
        # then sum to count the correct predictions
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
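Why does fc1 expect 16 * 5 * 5 inputs? Tracing a dummy batch through the two conv/pool stages makes it clear. A minimal sketch (my addition, not in the original post, reusing the net instance above):

# A 5x5 convolution without padding shrinks each spatial dimension by 4;
# each 2x2 max pool halves it.
x = torch.randn(1, 3, 32, 32)       # one fake CIFAR-10 image
x = net.pool(F.relu(net.conv1(x)))  # 32 -> conv 28 -> pool 14, 6 channels
print(x.shape)                      # torch.Size([1, 6, 14, 14])
x = net.pool(F.relu(net.conv2(x)))  # 14 -> conv 10 -> pool 5, 16 channels
print(x.shape)                      # torch.Size([1, 16, 5, 5]): 16 * 5 * 5 = 400 features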
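The trained weights above live only in memory. A hedged sketch of persisting and restoring them with state_dict (the file name ./cifar_net.pth is my own example, not from the post):

# Save and reload the trained network (my addition; the path is arbitrary).
torch.save(net.state_dict(), './cifar_net.pth')      # serialize the learned parameters

net2 = Net()                                         # a fresh, untrained instance
net2.load_state_dict(torch.load('./cifar_net.pth'))  # restore the learned parameters
net2.eval()                                          # switch to evaluation mode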

1. Because PyTorch on Windows has many problems (for example, multiprocessing does not work properly), the num_workers argument has to be removed from the DataLoader; a common workaround is sketched just below.
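Instead of dropping num_workers entirely, the usual fix is to move the DataLoader setup under an if __name__ == '__main__': guard, since Windows multiprocessing re-imports the script in each worker process. A minimal sketch (my addition), assuming the same dataset and transform as above:

# On Windows, a DataLoader with num_workers > 0 must be created under a
# __main__ guard, not at module top level, or the worker processes will
# re-execute the whole script on import.
import torch
import torchvision
import torchvision.transforms as transforms

def main():
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    for i, (inputs, labels) in enumerate(trainloader):
        pass  # training loop goes here

if __name__ == '__main__':
    main()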

2. The code uses the CIFAR-10 dataset, but the training results are not great; the loss values are listed below. Accuracy is 56% after 2 passes over the training set and 61% after 10 passes. (Personally, the original images are so low-resolution that I can't tell what at least half of them show; it really is asking a lot of the neural network.)

[1,  2000] loss: 2.219
[1,  4000] loss: 1.869
[1,  6000] loss: 1.669
[1,  8000] loss: 1.581
[1, 10000] loss: 1.537
[1, 12000] loss: 1.488
[2,  2000] loss: 1.406
[2,  4000] loss: 1.385
[2,  6000] loss: 1.343
[2,  8000] loss: 1.318
[2, 10000] loss: 1.348
[2, 12000] loss: 1.305
[3,  2000] loss: 1.234
[3,  4000] loss: 1.206
[3,  6000] loss: 1.219
[3,  8000] loss: 1.213
[3, 10000] loss: 1.205
[3, 12000] loss: 1.199
[4,  2000] loss: 1.115
[4,  4000] loss: 1.127
[4,  6000] loss: 1.123
[4,  8000] loss: 1.118
[4, 10000] loss: 1.143
[4, 12000] loss: 1.106
[5,  2000] loss: 1.023
[5,  4000] loss: 1.022
[5,  6000] loss: 1.073
[5,  8000] loss: 1.076
[5, 10000] loss: 1.060
[5, 12000] loss: 1.048
[6,  2000] loss: 0.965
[6,  4000] loss: 0.985
[6,  6000] loss: 0.988
[6,  8000] loss: 1.008
[6, 10000] loss: 1.017
[6, 12000] loss: 0.999
[7,  2000] loss: 0.902
[7,  4000] loss: 0.925
[7,  6000] loss: 0.974
[7,  8000] loss: 0.955
[7, 10000] loss: 0.968
[7, 12000] loss: 0.979
[8,  2000] loss: 0.866
[8,  4000] loss: 0.893
[8,  6000] loss: 0.909
[8,  8000] loss: 0.932
[8, 10000] loss: 0.934
[8, 12000] loss: 0.937
[9,  2000] loss: 0.837
[9,  4000] loss: 0.858
[9,  6000] loss: 0.865
[9,  8000] loss: 0.873
[9, 10000] loss: 0.906
[9, 12000] loss: 0.907
[10,  2000] loss: 0.809
[10,  4000] loss: 0.810
[10,  6000] loss: 0.832
[10,  8000] loss: 0.865
[10, 10000] loss: 0.878
[10, 12000] loss: 0.877
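The 61% average hides large differences between classes. The linked official tutorial also computes per-class accuracy; a sketch along those lines (reusing the net, testloader, and classes defined above) shows which categories the network actually confuses:

# Per-class accuracy (adapted from the official tutorial; not in the code above).
# Counts correct predictions separately for each of the 10 classes.
class_correct = [0.0] * 10
class_total = [0.0] * 10
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels)
        for i in range(labels.size(0)):
            label = labels[i].item()
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))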