Preface:
def __init__(self, n_gmm=2, latent_dim=3):      # n_gmm = gmm_k = 4
    super(DaGMM, self).__init__()                # standard boilerplate
    layers = []
    layers += [nn.Linear(118, 60)]
    layers += [nn.Tanh()]                        # activation function
    layers += [nn.Linear(60, 30)]
    layers += [nn.Tanh()]
    layers += [nn.Linear(30, 10)]
    layers += [nn.Tanh()]
    layers += [nn.Linear(10, 1)]
    self.encoder = nn.Sequential(*layers)
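As a quick shape check, here is a minimal sketch of how a batch flows through the encoder stack built above (the batch size of 32 is an arbitrary assumption, not from the original code):

import torch
import torch.nn as nn

# Same layer stack as above: 118 -> 60 -> 30 -> 10 -> 1
encoder = nn.Sequential(
    nn.Linear(118, 60), nn.Tanh(),
    nn.Linear(60, 30), nn.Tanh(),
    nn.Linear(30, 10), nn.Tanh(),
    nn.Linear(10, 1),
)

x = torch.randn(32, 118)   # a batch of 32 samples, each with 118 features (batch size is arbitrary)
z = encoder(x)
print(z.shape)             # torch.Size([32, 1])

Each nn.Linear only cares about the last dimension: it maps in_features to out_features and leaves the batch dimension untouched.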
class torch.nn.Linear(in_features, out_features, bias=True)
Applies a linear transformation to the incoming data: y = Ax + b (this is the intuition carried over from the one-dimensional case).
Parameters:
in_features: the size of each input sample (x)
out_features: the size of each output sample (y)
bias: if set to False, the layer will not learn an additive bias. Default: True (see the short sketch after this list)
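A small sketch of the bias flag (the 20/30 shapes are chosen only for illustration): with bias=False the layer simply has no bias parameter.

import torch.nn as nn

m_with_bias = nn.Linear(20, 30)              # default: bias=True
m_no_bias = nn.Linear(20, 30, bias=False)    # no additive bias is learned

print(m_with_bias.bias.shape)   # torch.Size([30])
print(m_no_bias.bias)           # None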
import torch

x = torch.randn(128, 20)          # input shape is (128, 20)
m = torch.nn.Linear(20, 30)       # 20 and 30 are the in/out feature dimensions
output = m(x)
print('m.weight.shape:\n ', m.weight.shape)
print('m.bias.shape:\n', m.bias.shape)
print('output.shape:\n', output.shape)

# ans = torch.mm(input, torch.t(m.weight)) + m.bias  is equivalent to the line below
ans = torch.mm(x, m.weight.t()) + m.bias   # torch.mm(a, b) is the matrix product of a and b
print('ans.shape:\n', ans.shape)
print(torch.equal(ans, output))

Output:

m.weight.shape:
  torch.Size([30, 20])
m.bias.shape:
 torch.Size([30])
output.shape:
 torch.Size([128, 30])
ans.shape:
 torch.Size([128, 30])
True
Why is m.weight.shape equal to (30, 20)?
Answer: because the transformation actually computed is y = x·Aᵀ + b, where the weight A is stored with shape (out_features, in_features).
A (30, 20) weight is created first and then transposed during the actual computation, so it can be matrix-multiplied with x: (128, 20) · (20, 30) → (128, 30).
Don't keep getting hung up on y = Ax + b; that form is just the intuition carried over from the one-dimensional case.
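For completeness, the same computation can also be written with torch.nn.functional.linear, which takes the weight in its stored (out_features, in_features) shape and performs the transpose internally. A minimal sketch reusing the shapes from the example above:

import torch
import torch.nn.functional as F

m = torch.nn.Linear(20, 30)
x = torch.randn(128, 20)

out1 = m(x)
out2 = F.linear(x, m.weight, m.bias)   # computes y = x·Aᵀ + b; weight is passed as (30, 20)
print(torch.equal(out1, out2))         # True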