- 0, If you have time, do read the source code; otherwise you will stay a rookie forever... although you will still be a rookie after reading it...
- 0, Summary of common methods
- 1, Tensor expansion (expand, repeat)
- 2, Dimension expansion (unsqueeze, slicing)
- 3, Gradient reversal (Function)
- 4, Computing gradients
- 5, Meaning of CNN and LSTM input/output dimensions
- 6, Converting a 1-D vector: diagonal matrix (diag), one-hot labels (torch.nn.functional.one_hot)
- 7, Manually modifying network parameters (load_state_dict)
- 8, Ways to display the model structure
- 9, Indices of the top-k largest values
- 10, Shuffling (tensor[torch.randperm(tensor.shape[0])])
- 11, Visualization: feature maps, convolution kernel weights, the sample that best activates a kernel, Class Activation Maps (CAM), network structure
- 12, Training steps without using optim
- 13, Generating one-hot labels: nn.functional.one_hot(label, num_classes=N)
- 14, Automatic GPU selection: gpu = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
- 15, Matrix multiplication
- 16, Loading model parameters trained on the CPU onto the GPU, or GPU -> CPU
0, If you have time, do read the source code; otherwise you will stay a rookie forever... although you will still be a rookie after reading it...
0, Summary of common methods
'''========================================1, Resource configuration========================================'''
torch.cuda.is_available()
.to(torch.device('cuda:0'))
.cuda()
'''========================================2,tensor========================================'''
torch.tensor(**, requires_grad=True)
tensor.to(torch.float)
tensor[None, None, :]
torch.randn(2, 10)
.item()
'''========================================3, Data loading========================================'''
torch.utils.data.Dataset/DataLoader
class myDataset(Dataset):
    def __init__(self, files):
        ...
    def __len__(self):
        return len(...)  # or ...shape[0]
    def __getitem__(self, index):
        ...
        return data, label
trainingData = myDataset(files)
trainingDataloader = DataLoader(trainingData, batch_size=..., shuffle=True)
#************************************Getting data and labels************************************
data, label = next(iter(trainingDataloader))
for (data, label) in trainingDataloader:
for batch, (data, label) in enumerate(trainingDataloader):
'''========================================4, Basic model building blocks========================================'''
#************************************Model construction method 1: basic nn modules************************************
myNet = torch.nn.Linear(10, 10)
myNet.weight
myNet.weight.grad
myNet.bias
myNet.bias.grad
myNet.parameters()
#************************************Model construction method 2.1: nn.Sequential************************************
myNet = torch.nn.Sequential(
nn.Linear(10, 10),
nn.Tanh(),
nn.Linear(10, 10)
)
[param.shape for param in myNet.parameters()]
[(name, param.shape) for (name, param) in myNet.named_parameters()]
#************************************Model construction method 2.2: combine collections.OrderedDict with nn.Sequential to name the submodules************************************
from collections import OrderedDict
myNet = nn.Sequential(OrderedDict([
('hidden_linear',nn.Linear(10,10)),
('hidden_activation', nn.Tanh()),
('output_linear', nn.Linear(10,10))
]))
[param.shape for param in myNet.parameters()]
[(name, param.shape) for (name, param) in myNet.named_parameters()]
myNet.hidden_linear.weight
myNet.hidden_linear.weight.grad
myNet.hidden_linear.bias
myNet.hidden_linear.bias.grad
#************************************Dynamically adding submodules************************************
nn.Sequential().add_module()
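A minimal sketch of building a Sequential module dynamically with add_module (the layer names and sizes below are assumptions):
import torch.nn as nn
myNet = nn.Sequential()
myNet.add_module('hidden_linear', nn.Linear(10, 10))
myNet.add_module('hidden_activation', nn.Tanh())
myNet.add_module('output_linear', nn.Linear(10, 10))
print(myNet)  # the submodules appear under the names given to add_module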
#************************************Model construction method 3: nn.Module************************************
torch.nn.Module
class myModule(nn.Module):
    def __init__(self):
        super().__init__()
    def forward(self, inputs):
        ...
myNet = myModule()
[(name, param.shape) for (name, param) in myNet.named_parameters()]
'''========================================5, Optimizers========================================'''
#************************************The four main families of optimizers************************************
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)  # SGD with momentum (there is no torch.optim.Momentum class)
torch.optim.Adagrad(model.parameters())
torch.optim.Adam(model.parameters())
#************************************5.1: set different learning rates for different layers; the optimizer is given parameters, and the filtering is done by the names from named_parameters************************************
model = torchvision.models.vgg16()
my_list = ['classifier.3.weight', 'classifier.3.bias']
params = [p[1] for p in filter(lambda kv: kv[0] in my_list,
model.named_parameters())]
base_params = [p[1] for p in filter(lambda kv: kv[0] not in my_list,
model.named_parameters())]
optimizer = torch.optim.SGD([{'params': base_params},
{'params': params, 'lr': 1e-4}],
lr=3e-6)
#************************************5.2: custom learning-rate adjustment based on the epoch************************************
def adjust_learning_rate(optimizer, epoch, initial_lr):
    """Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
    lr = initial_lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
for epoch in range(10):
    adjust_learning_rate(optimizer, epoch, initial_lr)  # pass the starting learning rate explicitly
    train(...)
    validate(...)
#************************************5.3: manually set learning-rate decay intervals************************************
def adjust_learning_rate(optimizer, lr):
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
for epoch in range(60):
    lr = 30e-5
    if epoch > 25:
        lr = 15e-5
    if epoch > 30:
        lr = 7.5e-5
    if epoch > 35:
        lr = 3e-5
    if epoch > 40:
        lr = 1e-5
    adjust_learning_rate(optimizer, lr)
#************************************5.4: learning-rate scheduler API************************************
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=(epochs // 9) + 1)
for epoch in range(epochs):
    scheduler.step()  # call once per epoch; passing the epoch index is deprecated
'''========================================6, Loss functions========================================'''
torch.nn.CrossEntropyLoss()
torch.nn.functional.cross_entropy()
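A minimal sketch comparing the class form and the functional form; both give the same value on the same logits and targets (the values below are assumptions):
import torch
import torch.nn as nn
import torch.nn.functional as F
logits = torch.randn(4, 3)            # batch of 4, 3 classes
target = torch.tensor([0, 2, 1, 1])   # class indices, not one-hot
print(nn.CrossEntropyLoss()(logits, target))
print(F.cross_entropy(logits, target))  # identical result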
'''========================================7, Training========================================'''
myNet.train()
loss.backward()
optimizer.step()
optimizer.zero_grad()
scheduler.step()
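A minimal sketch of how these calls are usually ordered within one epoch (myNet, trainingDataloader, optimizer and scheduler are assumed to be defined as in the earlier sections):
myNet.train()
for data, label in trainingDataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(myNet(data), label)
    loss.backward()
    optimizer.step()
scheduler.step()  # typically once per epoch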
'''========================================8, Saving========================================'''
#************************************Method 1************************************
torch.save(optimizer.state_dict(), '')
torch.save(myNet.state_dict(), '')
#************************************Method 2************************************
torch.save(optimizer, '')
torch.save(myNet, '')
'''========================================9, Loading========================================'''
# Loading counterpart of saving method 1
myNet = myModule()
model_dict = torch.load()
myNet.load_state_dict(model_dict)
# Loading counterpart of saving method 2
myNet = torch.load()
optimizer = torch.load()
'''========================================10, Testing========================================'''
myNet.eval()
'''========================================11, Computing accuracy========================================'''
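A minimal accuracy sketch, assuming a classification model myNet whose output is logits of shape (batch, num_classes), and an assumed testDataloader yielding (data, label) batches:
correct, total = 0, 0
myNet.eval()
with torch.no_grad():                          # no gradients needed during evaluation
    for data, label in testDataloader:         # testDataloader is an assumed DataLoader
        pred = myNet(data).argmax(dim=1)       # predicted class index per sample
        correct += (pred == label).sum().item()
        total += label.shape[0]
print(f'accuracy: {correct / total:.4f}')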
'''========================================12, Vision library========================================'''
torchvision
torchvision.models
torchvision.transforms
'''========================================Others========================================'''
model.parameters()
model.named_parameters()
model.state_dict()
optimizer.param_groups  # a list attribute, not a method
optimizer.state_dict()
1, Tensor expansion (expand, repeat)
expand treats the tensor as a whole and broadcasts it to a target shape (you pass the expanded shape; a dimension can only be enlarged in place if its size is 1, otherwise expand can only prepend new dimensions), whereas repeat also tiles the tensor as a whole (you pass the number of repetitions along each dimension, which makes it much more flexible to use)
>>> a = torch.randn(2, 4)
>>> a
tensor([[-0.1346, 0.3429, -1.3040, -0.6949],
[-0.0433, 1.7080, -1.8213, -1.6689]])
>>> a.expand(2,2,4)
tensor([[[-0.1346, 0.3429, -1.3040, -0.6949],
[-0.0433, 1.7080, -1.8213, -1.6689]],
[[-0.1346, 0.3429, -1.3040, -0.6949],
[-0.0433, 1.7080, -1.8213, -1.6689]]])
>>> a.repeat(1,2)
tensor([[-0.1346, 0.3429, -1.3040, -0.6949, -0.1346, 0.3429, -1.3040, -0.6949],
[-0.0433, 1.7080, -1.8213, -1.6689, -0.0433, 1.7080, -1.8213, -1.6689]])
>>> a.repeat(2,1,1)
tensor([[[-0.1346, 0.3429, -1.3040, -0.6949],
[-0.0433, 1.7080, -1.8213, -1.6689]],
[[-0.1346, 0.3429, -1.3040, -0.6949],
[-0.0433, 1.7080, -1.8213, -1.6689]]])
Tiling the whole tensor vs. repeating each row consecutively
>>> a = torch.randn(2,2)
>>> a
tensor([[ 0.2356, 0.0189],
[-0.3703, -0.0547]])
>>> a.repeat(2,1)
tensor([[ 0.2356, 0.0189],
[-0.3703, -0.0547],
[ 0.2356, 0.0189],
[-0.3703, -0.0547]])
>>> a.repeat(1,2).reshape(-1, a.shape[1])
tensor([[ 0.2356, 0.0189],
[ 0.2356, 0.0189],
[-0.3703, -0.0547],
[-0.3703, -0.0547]])
Note that repeat behaves differently in torch and numpy; numpy has tile while (older versions of) torch do not, and numpy's tile behaves the same as torch's repeat (torch.tile was only added in later PyTorch releases, around 1.8, which is why the call below raises AttributeError here)
'''numpy'''
>>> a = np.array([1,2,3,4])
>>> np.tile(a, 10)
array([1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2,
3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4])
>>> a.repeat(10)
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4])
>>> a.expand(10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'numpy.ndarray' object has no attribute 'expand'
'''torch'''
>>> a = torch.arange(5)
>>> a
tensor([0, 1, 2, 3, 4])
>>> a.repeat(10)
tensor([0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3,
4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2,
3, 4])
>>> torch.tile(a, 10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute 'tile'
2, Dimension expansion (unsqueeze, slicing)
>>> a = torch.randn(2)
>>> a
tensor([2.0488, 0.5997])
>>> b = a.unsqueeze(-1)
>>> b
tensor([[2.0488],
[0.5997]])
>>> b.shape
torch.Size([2, 1])
>>> b = a[:, None]
>>> b
tensor([[2.0488],
[0.5997]])
>>> b.shape
torch.Size([2, 1])
'''Slicing can also add several dimensions at once'''
>>> b = a[None, :, None]
>>> b
tensor([[[2.0488],
[0.5997]]])
>>> b.shape
torch.Size([1, 2, 1])
3, Gradient reversal (Function)
import torch
from torch.autograd import Function
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm
from matplotlib import pyplot as plt
class ReverseLayer(Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.parameter1 = nn.Parameter(torch.ones(10, 10))
        self.parameter2 = nn.Parameter(torch.ones(10, 10))
        self.parameter3 = nn.Parameter(torch.ones(10, 10))
    def forward(self, x):
        return x @ self.parameter1 @ self.parameter2 @ self.parameter3
class ReverseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.parameter1 = nn.Parameter(torch.ones(10, 10))
        self.parameter2 = nn.Parameter(torch.ones(10, 10))
        self.parameter3 = nn.Parameter(torch.ones(10, 10))
    def forward(self, x):
        x1 = x @ self.parameter1
        x2 = ReverseLayer.apply(x1 @ self.parameter2)
        return x2 @ self.parameter3
dataInput = torch.randn(2, 10)
dataTarget = torch.randn(2, 10)
net1 = Net()
net2 = ReverseNet()
loss1 = torch.mean(net1(dataInput) - dataTarget)
loss1.backward()
loss2 = torch.mean(net2(dataInput) - dataTarget)
loss2.backward()
print('=======================PARAMETER1============================')
print(net1.parameter1.grad[0])
print(net2.parameter1.grad[0])
print('=======================PARAMETER2============================')
print(net1.parameter2.grad[0])
print(net2.parameter2.grad[0])
print('=======================PARAMETER3============================')
print(net1.parameter3.grad[0])
print(net2.parameter3.grad[0])
'''
Due to the chain rule, the gradients of all the layers before the reverse layer
are negated (parameter3, which sits after the reverse layer, is unaffected).
'''
optim1 = optim.Adam(net1.parameters())
optim2 = optim.Adam(net2.parameters())
loss1List = []
loss2List = []
epoch = 100
for i in tqdm(range(epoch)):
    net1.zero_grad()
    net2.zero_grad()
    loss1 = torch.mean(net1(dataInput) - dataTarget)
    loss1List.append(loss1.item())
    loss1.backward()
    optim1.step()
    loss2 = torch.mean(net2(dataInput) - dataTarget)
    loss2List.append(loss2.item())
    loss2.backward()
    optim2.step()
plt.subplot(2, 1, 1)
plt.plot(loss1List)
plt.subplot(2, 1, 2)
plt.plot(loss2List)
plt.show()
'''
It can be seen that:
without the reverse layer, the loss decreases (minimization);
with the reverse layer, the loss increases (maximization).
'''
'''========================Application scenario: chaining two networks========================'''
'''========================Without gradient reversal========================'''
import torch
import torch.nn as nn
myNet1 = nn.Linear(10, 10)
myNet2 = nn.Linear(10, 10)
loss = nn.PairwiseDistance(p=2)
optimizer = torch.optim.Adam(myNet1.parameters(), lr=1e-2)
epoch = 500
dataIn = torch.randn(1, 10)
dataOut = torch.ones(1, 10)
print(myNet2(myNet1(dataIn)))
for i in range(epoch):
    optimizer.zero_grad()
    l = loss(myNet2(myNet1(dataIn)), dataOut)
    l.backward()
    optimizer.step()
print(myNet2(myNet1(dataIn)))
'''========================Application: with gradient reversal========================'''
import torch
import torch.nn as nn
from torch.autograd import Function
class ReverseLayerF(Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()
myNet1 = nn.Linear(10, 10)
myNet2 = nn.Linear(10, 10)
loss = nn.PairwiseDistance(p=2)
optimizer = torch.optim.Adam(myNet1.parameters(), lr=1e-2)
epoch = 500
dataIn = torch.randn(1, 10)
dataOut = torch.ones(1, 10)
print(myNet2(myNet1(dataIn)))
for i in range(epoch):
    optimizer.zero_grad()
    l = loss(myNet2(ReverseLayerF.apply(myNet1(dataIn))), dataOut)
    l.backward()
    optimizer.step()
print(myNet2(myNet1(dataIn)))
4, Computing gradients
'''v1'''
gradients = autograd.grad(outputs=dataOut, inputs=dataIn,
                          grad_outputs=torch.ones(dataOut.size()).cuda(),  # grad_outputs must match the shape of outputs
                          create_graph=True, retain_graph=True, only_inputs=True)[0]
'''v2'''
dataOut.backward(torch.ones_like(dataOut))
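A self-contained sketch of both variants on a toy tensor (all names and values here are assumptions):
import torch
from torch import autograd

dataIn = torch.randn(3, requires_grad=True)
dataOut = dataIn ** 2
# v1: functional API, returns the gradient without touching .grad
grads = autograd.grad(outputs=dataOut, inputs=dataIn,
                      grad_outputs=torch.ones_like(dataOut),
                      retain_graph=True)[0]
# v2: backward with a vector of ones, accumulates into dataIn.grad
dataOut.backward(torch.ones_like(dataOut))
print(grads)        # 2 * dataIn
print(dataIn.grad)  # same values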
5, Meaning of CNN and LSTM input/output dimensions
- CNN
  The four dimensions of convolutional input data: batch, input channels, height, width
  The four key arguments of Conv2d: input channels, output channels, kernel size, stride
- LSTM
  The three dimensions of time-series input data: sequence length (roughly, how many words a sentence has in NLP), batch, input size (how many letters a word has)
  The three key arguments of LSTM: input size, hidden (output) size, number of layers
  The three dimensions of h0: layers, batch, hidden size
  The three dimensions of c0: layers, batch, hidden size
  The three dimensions of output: sequence length, batch, hidden size
  (see the shape check below)
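A minimal shape check of these conventions (all sizes below are assumptions):
import torch
import torch.nn as nn

x = torch.randn(8, 3, 32, 32)            # batch, input channels, height, width
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1)
print(conv(x).shape)                     # torch.Size([8, 16, 30, 30])

seq = torch.randn(5, 8, 10)              # sequence length, batch, input size
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
h0 = torch.zeros(2, 8, 20)               # layers, batch, hidden size
c0 = torch.zeros(2, 8, 20)
out, (hn, cn) = lstm(seq, (h0, c0))
print(out.shape)                         # torch.Size([5, 8, 20]) -- sequence length, batch, hidden size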
6, Converting a 1-D vector: diagonal matrix (diag), one-hot labels (torch.nn.functional.one_hot)
Convert to a diagonal matrix
diagonalMatrix = torch.diag(tensor)
Convert to one-hot labels
torch.nn.functional.one_hot(tensor, num_classes)  # tensor must be a LongTensor
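A small sketch of both conversions (the values are assumptions):
import torch

v = torch.tensor([1, 2, 3])
print(torch.diag(v))
# tensor([[1, 0, 0],
#         [0, 2, 0],
#         [0, 0, 3]])
labels = torch.tensor([0, 2, 1])
print(torch.nn.functional.one_hot(labels, num_classes=3))
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])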
7, Manually modifying network parameters (load_state_dict)
- model.state_dict() returns only a copy of the module's internal state-dict object; modifying this copy does not affect the original model
- The fix is to assign model.state_dict() to a variable model_dict, modify model_dict, and finally call model.load_state_dict(model_dict)
'''Modifying the copy directly changes nothing'''
import torch
import torch.nn as nn
net = nn.Linear(10, 10)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)
loss = nn.PairwiseDistance(p=2)
dataIn = torch.randn(2, 10)
dataOut = torch.ones(2, 10)
epoch = 200
for i in range(epoch):
    optimizer.zero_grad()
    l = loss(net(dataIn), dataOut).mean()
    l.backward()
    optimizer.step()
print(f'\033[33m{net(dataIn)}\033[0m')
for key in net.state_dict():
    print(net.state_dict()[key])
    net.state_dict()[key].data = torch.randn(net.state_dict()[key].shape)
for key in net.state_dict():
    print(net.state_dict()[key])
print(f'\033[34m{net(dataIn)}\033[0m')
'''Assign to a dict, modify it, then load it back'''
import torch
import torch.nn as nn
net = nn.Linear(10, 10)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)
loss = nn.PairwiseDistance(p=2)
dataIn = torch.randn(2, 10)
dataOut = torch.ones(2, 10)
epoch = 200
for i in range(epoch):
    optimizer.zero_grad()
    l = loss(net(dataIn), dataOut).mean()
    l.backward()
    optimizer.step()
print(f'\033[33m{net(dataIn)}\033[0m')
model_dict = net.state_dict()
for key in model_dict:
    print(model_dict[key])
    model_dict[key] = torch.randn(net.state_dict()[key].shape)
net.load_state_dict(model_dict)
for key in net.state_dict():
    print(net.state_dict()[key])
print(f'\033[34m{net(dataIn)}\033[0m')
8, Ways to display the model structure
'''Show the modules'''
myNet
list(myNet.children())
'''Show parameter names and values'''
myNet.state_dict().keys()
myNet.state_dict()
'''Show parameter values only'''
list(myNet.parameters())
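A quick demonstration on a small assumed model:
import torch.nn as nn
myNet = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
print(myNet)                          # module hierarchy
print(list(myNet.children()))         # direct submodules
print(myNet.state_dict().keys())      # odict_keys(['0.weight', '0.bias', '2.weight', '2.bias'])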
9, Indices of the top-k largest values
'''k=1'''
tensor.argmax()
'''k>1'''
tensor.argsort(descending=True)[:k]
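A quick check with assumed values (torch.topk returns the same indices):
import torch
t = torch.tensor([0.1, 0.7, 0.3, 0.9])
print(t.argmax())                       # tensor(3)
print(t.argsort(descending=True)[:2])   # tensor([3, 1])
print(t.topk(2).indices)                # tensor([3, 1])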
10, Shuffling (tensor[torch.randperm(tensor.shape[0])])
dataRandomIndex = torch.randperm(data.shape[0])
data[dataRandomIndex]
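A small sketch that shuffles data and labels with the same permutation (the shapes are assumptions):
import torch
data = torch.arange(12).reshape(4, 3)
label = torch.arange(4)
dataRandomIndex = torch.randperm(data.shape[0])
data, label = data[dataRandomIndex], label[dataRandomIndex]   # rows stay paired with their labels
print(data, label)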
11, Visualization: feature maps, convolution kernel weights, the sample that best activates a kernel, Class Activation Maps (CAM), network structure
https://www.cnblogs.com/tensorzhang/p/15053885.html
12, Training steps without using optim
data = nn.Parameter(torch.randn(...))      # the tensor being optimized
for i in range(epoch):
    l = loss(...)                          # a loss computed from the current data
    l.backward()
    data = data - lr * data.grad           # manual gradient-descent step
    data = nn.Parameter(data.detach())     # re-wrap as a leaf Parameter (clears the old grad)
13, Generating one-hot labels: nn.functional.one_hot(label, num_classes=N)
oneHotLabel = torch.nn.functional.one_hot(label, num_classes=N).to(torch.float)
'''Labels start from 0'''
np.eye(num_classes)[arr]
np.eye(7)[np.ones(10, dtype=int)]
14, Automatic GPU selection: gpu = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
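A minimal usage sketch: pick the device once, then move the model and data onto it (the sizes are assumptions):
import torch
import torch.nn as nn

gpu = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(gpu)
x = torch.randn(4, 10).to(gpu)
print(model(x).device)   # cuda:0 if a GPU is available, otherwise cpu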
15, Matrix multiplication
a * b: element-wise multiplication; the two tensors must have the same shape (or be broadcastable), and the output has the same shape
torch.mul(a, b): element-wise multiplication of a and b; the shapes must match, e.g. if a is (1, 2) and b is (1, 2), the result is again a (1, 2) tensor
a @ b: the shapes must be (n×m) and (m×p); ordinary 2-D matrix multiplication
torch.mm(a, b): the shapes must be (n×m) and (m×p); ordinary 2-D matrix multiplication
torch.matmul(a, b): matmul also works on higher-dimensional tensors; the extra leading dimensions are treated as batch dimensions and matrix multiplication is applied to the last two
For 2-D inputs, torch.mm(a, b) and torch.matmul(a, b) are equivalent
Reference: https://blog.csdn.net/lijiaming_99/article/details/114642093
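A short comparison of the operations above (all values are assumptions):
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[5., 6.], [7., 8.]])
print(a * b)                               # element-wise, same as torch.mul(a, b)
print(torch.mul(a, b))
print(a @ b)                               # 2-D matrix product, same as torch.mm(a, b)
print(torch.mm(a, b))

batch1 = torch.randn(4, 2, 3)
batch2 = torch.randn(4, 3, 5)
print(torch.matmul(batch1, batch2).shape)  # torch.Size([4, 2, 5]) -- batched matmul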
16, Loading model parameters trained on the CPU onto the GPU, or GPU -> CPU
Suppose only the model parameters (model.state_dict()) were saved to a file named modelparameters.pth, and model = Net()
1. cpu -> cpu, or gpu -> gpu:
checkpoint = torch.load('modelparameters.pth')
model.load_state_dict(checkpoint)
2. cpu -> gpu 1
torch.load('modelparameters.pth', map_location=lambda storage, loc: storage.cuda(1))
3. gpu 1 -> gpu 0
torch.load('modelparameters.pth', map_location={'cuda:1': 'cuda:0'})
4. gpu -> cpu
torch.load('modelparameters.pth', map_location=lambda storage, loc: storage)