PyTorch: Modifying a Pretrained Model

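A common way to adapt a pretrained network is to keep its convolutional trunk and attach a new head. The class below strips the last two children of a torchvision ResNet-50 (the average-pooling and fully connected layers), then adds a transposed convolution, a max pool, and a new 8-way linear classifier: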

    import torch.nn as nn
    import torchvision.models as models

    class Net(nn.Module):
        def __init__(self, model):
            super(Net, self).__init__()
            # drop the last two layers of the model (avgpool and fc for ResNet)
            self.resnet_layer = nn.Sequential(*list(model.children())[:-2])
            # upsample the 7x7 feature map: (7 - 1) * 3 + 14 = 32
            self.transion_layer = nn.ConvTranspose2d(2048, 2048, kernel_size=14, stride=3)
            # global max pool over the 32x32 map down to 1x1
            self.pool_layer = nn.MaxPool2d(32)
            # new classification head with 8 output classes
            self.Linear_layer = nn.Linear(2048, 8)

        def forward(self, x):
            x = self.resnet_layer(x)    # (N, 2048, 7, 7) for 224x224 input
            x = self.transion_layer(x)  # (N, 2048, 32, 32)
            x = self.pool_layer(x)      # (N, 2048, 1, 1)
            x = x.view(x.size(0), -1)   # flatten to (N, 2048)
            x = self.Linear_layer(x)    # (N, 8)
            return x


    resnet = models.resnet50(pretrained=True)

    model = Net(resnet)
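
To sanity-check the reassembled network, you can push a dummy batch through it and confirm the output shape. A minimal sketch; the batch size and the 224x224 input resolution are illustrative assumptions:

    import torch

    # fake batch of two 3-channel 224x224 images
    x = torch.randn(2, 3, 224, 224)
    out = model(x)
    print(out.shape)  # torch.Size([2, 8])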

 

Training specific layers while freezing the rest

The basic idea is that every model has a method model.children() which returns its immediate child modules (i.e., its layers). Each layer holds parameters (weights), which can be obtained by calling .parameters() on that child. Every parameter has an attribute called requires_grad, which is True by default. True means the parameter will receive gradients during backpropagation, so to freeze a layer you set requires_grad to False for all of that layer's parameters.

    import torchvision.models as models

    resnet = models.resnet18(pretrained=True)
    ct = 0
    # this freezes the first 6 of the 10 top-level children of ResNet18
    for child in resnet.children():
        ct += 1
        if ct < 7:
            for param in child.parameters():
                param.requires_grad = False
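
Once layers are frozen, it is common to hand the optimizer only the parameters that still require gradients (some versions of torch.optim reject parameters with requires_grad=False). A minimal sketch; SGD, the learning rate, and the momentum value are illustrative choices, not from the original post:

    import torch

    # pass only the still-trainable parameters to the optimizer
    trainable_params = filter(lambda p: p.requires_grad, resnet.parameters())
    optimizer = torch.optim.SGD(trainable_params, lr=1e-3, momentum=0.9)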

  

 

