Method 1:
First, define the model structure:
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.l1 = nn.Linear(100, 50)
        self.l2 = nn.Linear(50, 10)
        self.l3 = nn.Linear(10, 1)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        x = self.l1(x)
        x = self.l2(x)
        x = self.l3(x)
        x = self.sig(x)
        return x
Next, write a class that constrains the range of the weights:
class weightConstraint(object):
    def __init__(self):
        pass

    def __call__(self, module):
        if hasattr(module, 'weight'):
            print("Entered")
            w = module.weight.data
            w = w.clamp(0.5, 0.7)  # clamp the weights to the range 0.5-0.7
            module.weight.data = w
Finally, instantiate this class and apply the constraint to the weights:
# Applying the constraints to only the last layer
constraints = weightConstraint()
model = Model()
# for i in ...: fill in your own training-loop code here
loss = criterion(out, var_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
model._modules['l3'].apply(constraints)  # equivalent to model.l3.apply(constraints)
Method 2:
During training, clamp the parameter range directly:
loss = criterion(out, var_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
for p in net.parameters():
    p.data.clamp_(0, 99)
This clamps both the weights and the biases to the range 0-99.
To constrain the gradients rather than the stored weights, register a hook on each parameter. Note that this clamps the gradients of every parameter, biases included; it does not limit the weight values themselves:
for p in net.parameters():
    p.register_hook(lambda grad: torch.clamp(grad, 0, 10))