Dropout is commonly used to suppress overfitting, and PyTorch provides a convenient module for it. But the meaning of its parameter `p` is easy to get wrong. In TensorFlow the corresponding argument is called `keep_prob`, so it is tempting to assume that PyTorch's `p` is likewise the fraction of units to keep. Experiment shows the opposite: `p` is the probability that a unit is zeroed out, i.e. the fraction of units *not* kept. See the examples below:
```python
>>> import torch
>>> a = torch.randn(10, 1)
>>> a
tensor([[ 0.0684],
        [-0.2395],
        [ 0.0785],
        [-0.3815],
        [-0.6080],
        [-0.1690],
        [ 1.0285],
        [ 1.1213],
        [ 0.5261],
        [ 1.1664]])
```
- p=0.5

```python
>>> torch.nn.Dropout(0.5)(a)
tensor([[ 0.0000],
        [-0.0000],
        [ 0.0000],
        [-0.7631],
        [-0.0000],
        [-0.0000],
        [ 0.0000],
        [ 0.0000],
        [ 1.0521],
        [ 2.3328]])
```
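Notice that with `p=0.5` the surviving entries are doubled: -0.3815 became -0.7631 and 1.1664 became 2.3328. During training, `nn.Dropout` scales the kept elements by 1/(1-p) ("inverted dropout") so the expected activation stays the same and no rescaling is needed at inference time. A minimal sketch to check this, assuming every nonzero output entry was a kept one:

```python
import torch

a = torch.randn(10, 1)
p = 0.5
out = torch.nn.Dropout(p)(a)

# Kept entries should equal the originals scaled by 1/(1-p).
kept = out != 0
print(torch.allclose(out[kept], a[kept] / (1 - p)))  # expected: True
```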
- p=0

```python
>>> torch.nn.Dropout(0)(a)
tensor([[ 0.0684],
        [-0.2395],
        [ 0.0785],
        [-0.3815],
        [-0.6080],
        [-0.1690],
        [ 1.0285],
        [ 1.1213],
        [ 0.5261],
        [ 1.1664]])
```
- p=1

```python
>>> torch.nn.Dropout(1)(a)
tensor([[ 0.],
        [-0.],
        [ 0.],
        [-0.],
        [-0.],
        [-0.],
        [ 0.],
        [ 0.],
        [ 0.],
        [ 0.]])
```
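To make the semantics concrete, here is a hand-rolled sketch of what `nn.Dropout(p)` does in training mode. This mirrors the math, not PyTorch's internal implementation; the seed is just for reproducibility:

```python
import torch

torch.manual_seed(0)  # arbitrary seed, for reproducibility only
a = torch.randn(10, 1)
p = 0.5

# Each element is zeroed with probability p; survivors are scaled by 1/(1-p).
mask = (torch.rand_like(a) >= p).float()
out = a * mask / (1 - p)
print(out)
```

Also remember that `Dropout` is only active in training mode; after `model.eval()` it becomes an identity op, so `p` only matters during training.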