Cause: when CrossEntropyLoss is used as the loss function, the output of output = net(input) should have shape [batchsize, n_class, height, width], while the label should have shape [batchsize, height, width], i.e. a single-channel grayscale map of class indices. Both BCELoss and CrossEntropyLoss are classification losses: BCELoss is a special case of CrossEntropyLoss that handles only binary classification, while CrossEntropyLoss handles both binary and multi-class problems.
(1) logit.shape is torch.Size([4, 31, 256, 256]) and target.shape is [4, 256, 256, 1], where 4 is the batch size and 31 is the number of categories.
Solution: squeeze the extra dimension
loss = criterion(logit, torch.squeeze(target).long())
which changes the target to [4, 256, 256].
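A minimal, runnable sketch of this fix with random stand-in tensors (the shapes follow the example above; passing dim=-1 to torch.squeeze is a slight variation on the line above that removes only the trailing singleton dimension, so a batch of size 1 is not squeezed away by accident):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Stand-in tensors with the shapes from the example above.
logit = torch.randn(4, 31, 256, 256)             # [batchsize, n_class, height, width]
target = torch.randint(0, 31, (4, 256, 256, 1))  # [batchsize, height, width, 1]

# Drop the trailing singleton dimension and cast to class indices (long).
loss = criterion(logit, torch.squeeze(target, dim=-1).long())  # target becomes [4, 256, 256]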
(2) only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: [2, 321, 321, 4]
Solution: use BCELoss instead
# criterion = nn.CrossEntropyLoss(weight=self.weight, ignore_index=self.ignore_index, reduction='mean')
criterion = nn.BCELoss(weight=self.weight, reduction='mean')
With BCELoss, logit.size() and target.size() must both be [batchsize, n_class, height, width].
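A minimal sketch of the BCELoss path with random stand-in tensors (n_class = 4 and the permute call are assumptions inferred from the [2, 321, 321, 4] target in the error message; the weight argument is omitted here). BCELoss expects probabilities in [0, 1], so the raw logits go through a sigmoid first, and the target is rearranged so both tensors share the [batchsize, n_class, height, width] layout:

import torch
import torch.nn as nn

criterion = nn.BCELoss(reduction='mean')

n_class = 4
logit = torch.randn(2, n_class, 321, 321)   # raw network output
prob = torch.sigmoid(logit)                 # BCELoss needs values in [0, 1]

# Hypothetical one-hot target stored as [batchsize, height, width, n_class];
# permute it to [batchsize, n_class, height, width] to match the prediction.
target = torch.randint(0, 2, (2, 321, 321, n_class)).float()
target = target.permute(0, 3, 1, 2)

loss = criterion(prob, target)              # both tensors are [2, 4, 321, 321]

If the sigmoid is not already part of the network, nn.BCEWithLogitsLoss can take the raw logits directly and is numerically more stable.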