Usage of torch.nn.BCELoss


1. Definition
  The formula is Loss = -w * [p * log(q) + (1-p) * log(1-q)], where p is the ground-truth label, q is the predicted value, and w is the weight. The log here is the natural logarithm (ln).
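  As a quick sanity check of the formula, the element-wise loss can be computed by hand and compared with nn.BCELoss; a minimal sketch (the values of p and q are only illustrative):

import torch
import torch.nn as nn

q = torch.tensor([0.8, 0.3])  # predicted values (probabilities)
p = torch.tensor([1.0, 0.0])  # ground-truth labels

# Element-wise loss straight from the formula: -[p*ln(q) + (1-p)*ln(1-q)]
manual = -(p * torch.log(q) + (1 - p) * torch.log(1 - q))

# nn.BCELoss with reduction='none' should produce the same values
auto = nn.BCELoss(reduction='none')(q, p)

print(manual)  # tensor([0.2231, 0.3567])
print(auto)    # same values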

  The corresponding PyTorch function is:
    torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')
  It computes the binary cross-entropy loss between the predicted values and the target values.

  There are four optional parameters: weight, size_average, reduce, and reduction.

  1. weight must have the same shape as target; the default is None. It is specified when the BCELoss is constructed.
  2. By default, nn.BCELoss() has reduce=True and size_average=True.
  3. If reduce is False, size_average has no effect, and the loss is returned as a vector (element-wise).
  4. If reduce is True and size_average is True, the mean of the loss is returned, i.e. loss.mean().
  5. If reduce is True and size_average is False, the sum of the loss is returned, i.e. loss.sum().
  6. If reduction = 'none', the element-wise loss vector is returned directly.
  7. If reduction = 'sum', the sum of the loss is returned.
  8. If reduction = 'elementwise_mean', the mean of the loss is returned.
  9. If reduction = 'mean', the mean of the loss is returned (see the sketch after this list).
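  Note that in recent PyTorch versions size_average and reduce are deprecated, and reduction replaces both ('elementwise_mean' was the older name for 'mean'). As referenced above, a minimal sketch of the three current reduction modes:

import torch
import torch.nn as nn

q = torch.tensor([0.8, 0.3, 0.6])  # predicted probabilities
p = torch.tensor([1.0, 0.0, 1.0])  # targets

per_elem = nn.BCELoss(reduction='none')(q, p)  # vector of element-wise losses
total = nn.BCELoss(reduction='sum')(q, p)      # equals per_elem.sum()
mean = nn.BCELoss(reduction='mean')(q, p)      # equals per_elem.mean()

print(per_elem)
print(total, mean)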

2. Verification code

1>

import torch
import torch.nn as nn

m = nn.Sigmoid()
# reduce=False: the element-wise loss vector is returned
loss = nn.BCELoss(size_average=False, reduce=False)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)
print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Computed loss:")
print(output)


 2>

import torch
import torch.nn as nn

m = nn.Sigmoid()
# reduce=False: size_average is ignored; the element-wise loss vector is returned
loss = nn.BCELoss(size_average=True, reduce=False)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)
print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Computed loss:")
print(output)


 3>

import torch
import torch.nn as nn

m = nn.Sigmoid()
# reduce=True, size_average=True: the mean of the loss is returned
loss = nn.BCELoss(size_average=True, reduce=True)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)
print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Computed loss:")
print(output)


 4>

import torch
import torch.nn as nn

m = nn.Sigmoid()
# reduce=True, size_average=False: the sum of the loss is returned
loss = nn.BCELoss(size_average=False, reduce=True)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)
print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Computed loss:")
print(output)


 5>

import torch
import torch.nn as nn

m = nn.Sigmoid()
# reduction='none': the element-wise loss vector is returned
loss = nn.BCELoss(reduction='none')
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)
print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Computed loss:")
print(output)


 6>

import torch
import torch.nn as nn

m = nn.Sigmoid()
weights = torch.randn(3)
# weight scales each element's loss; its shape must match target
loss = nn.BCELoss(weight=weights, size_average=False, reduce=False)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)
print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Weights:")
print(weights)
print("Computed loss:")
print(output)
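As a check on how weight enters the formula, each element of the weighted loss should equal the corresponding weight times the unweighted loss. A minimal sketch (torch.rand is used here instead of torch.randn so the weights are positive; that choice is only for readability):

import torch
import torch.nn as nn

q = torch.sigmoid(torch.randn(3))  # predicted probabilities
p = torch.empty(3).random_(2)      # random 0/1 targets
w = torch.rand(3)                  # positive per-element weights

weighted = nn.BCELoss(weight=w, reduction='none')(q, p)
unweighted = nn.BCELoss(reduction='none')(q, p)

# Each weighted loss element is w_i times the unweighted element
print(torch.allclose(weighted, w * unweighted))  # True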
