Copyright notice: This is an original article by CSDN blogger 「七月聽雪」, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/qq_23262411/article/details/100175943
import torch
import torch.nn as nn
m = nn.BatchNorm1d(2) # With Learnable Parameters
print('m:', m)
n = nn.BatchNorm1d(2, affine=False) # Without Learnable Parameters
print('n:', n)
input = torch.randn(3, 2)
print('input:', input)
output = m(input) # normalizes each column (feature) over the batch
print('output:', output)
Result:
m: BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
n: BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
input: tensor([[-2.2418, -0.1225],
[ 0.1637, -0.1043],
[-0.4440, -0.2567]])
output: a tensor of the same shape in which each column of input has been standardized to zero mean and unit variance over the batch (the learnable weight and bias of m are initialized to 1 and 0, so at this point they leave the normalized values unchanged).
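To make the column-wise normalization concrete, here is a minimal sketch (an addition for illustration, not part of the original post) that reproduces what BatchNorm1d computes in training mode by hand: subtract each column's batch mean, divide by the square root of the biased batch variance plus eps, and compare with the module's output. It assumes PyTorch's default initialization (weight=1, bias=0).

import torch
import torch.nn as nn

x = torch.randn(3, 2)
bn = nn.BatchNorm1d(2)  # training mode by default; weight=1, bias=0

# Per-column batch mean and *biased* variance (divided by N, not N-1),
# which is what BatchNorm uses for normalization during training.
mean = x.mean(dim=0)
var = x.var(dim=0, unbiased=False)
expected = (x - mean) / torch.sqrt(var + bn.eps)

print(torch.allclose(bn(x), expected, atol=1e-6))  # True

Note that the check fails if the default unbiased variance (x.var(dim=0)) is used instead, which is a common source of confusion when verifying BatchNorm by hand.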