Copyright notice: This is an original article by the CSDN blogger "七月听雪", released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/qq_23262411/article/details/100175943
import torch
import torch.nn as nn

m = nn.BatchNorm1d(2)                 # with learnable parameters (gamma, beta)
print('m:', m)
n = nn.BatchNorm1d(2, affine=False)   # without learnable parameters
print('n:', n)

input = torch.randn(3, 2)             # batch of 3 samples, 2 features
print('input:', input)
output = m(input)                     # normalizes each column (feature) over the batch
print('output:', output)
Result:
m: BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
n: BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
input: tensor([[-2.2418, -0.1225],
        [ 0.1637, -0.1043],
        [-0.4440, -0.2567]])
output: tensor([[-1.3718,  0.5684],
        [ 0.9834,  0.8359],
        [ 0.3884, -1.4042]])

Each column of output now has zero mean and unit variance over the batch. (The values above are recomputed from the rounded input printed here, so the last digit of an actual run may differ slightly.)
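
The same result can be reproduced by hand: in training mode, BatchNorm1d standardizes each column with the batch mean and the biased batch variance, plus eps for numerical stability; the affine module `m` then applies a learnable scale and shift, which are initialized to 1 and 0. The minimal sketch below checks this against the `n` module (affine=False) so the manual formula matches exactly; the seed and tolerance are arbitrary choices, not part of the original post.

import torch
import torch.nn as nn

torch.manual_seed(0)                      # arbitrary seed, only for reproducibility
x = torch.randn(3, 2)                     # batch of 3 samples, 2 features

n = nn.BatchNorm1d(2, affine=False)       # no learnable gamma/beta
out = n(x)                                # training mode: uses batch statistics

mean = x.mean(dim=0)                      # per-column mean over the batch
var = x.var(dim=0, unbiased=False)        # biased variance, as BatchNorm uses internally
manual = (x - mean) / torch.sqrt(var + n.eps)

print(torch.allclose(out, manual, atol=1e-6))   # expected: True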