Loss Functions: Mean Squared Error and Cross Entropy


1. MSE (Mean Squared Error)

MSE is the expectation of the squared difference between the true value and the predicted (estimated) value, computed as:

MSE = (1/m) Σᵢ₌₁ᵐ (yᵢ − ŷᵢ)²

The larger the result, the worse the prediction, i.e., the further y is from ŷ.

import tensorflow as tf

# Five integer labels, one-hot encoded over 4 classes
y = tf.constant([1, 2, 3, 0, 2])
y = tf.one_hot(y, depth=4)
y = tf.cast(y, dtype=tf.float32)

# Simulated network output of shape [5, 4]
out = tf.random.normal([5, 4])

# MSE by its standard definition
loss1 = tf.reduce_mean(tf.square(y - out))
# Equivalent form via the L2 norm
loss2 = tf.square(tf.norm(y - out)) / (5 * 4)
# Directly call the MSE function from tf.losses
loss3 = tf.reduce_mean(tf.losses.MSE(y, out))

print(loss1)
print(loss2)
print(loss3)
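
To see where MSE fits in training, here is a minimal sketch of one gradient-descent step that minimizes the MSE of a single dense layer. The layer shape, dummy inputs, and the 0.1 learning rate are illustrative assumptions, not part of the original example:

# Minimal sketch: one gradient-descent step on MSE (assumed setup)
import tensorflow as tf

x = tf.random.normal([5, 3])  # assumed dummy inputs
y = tf.cast(tf.one_hot(tf.constant([1, 2, 3, 0, 2]), depth=4), tf.float32)

w = tf.Variable(tf.random.normal([3, 4]))
b = tf.Variable(tf.zeros([4]))

with tf.GradientTape() as tape:
    out = x @ w + b
    loss = tf.reduce_mean(tf.losses.MSE(y, out))

grads = tape.gradient(loss, [w, b])
# Plain gradient-descent update; 0.1 is an arbitrary learning rate
for var, g in zip([w, b], grads):
    var.assign_sub(0.1 * g)
print(loss)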

2. Cross Entropy Loss

Before looking at cross entropy, first consider entropy itself, computed as:

Entropy = −Σᵢ P(i) log P(i)

The lower the entropy, the less uncertainty the distribution carries: a peaked, near-deterministic distribution has low entropy, while a uniform distribution has the maximum entropy.

# Uniform distribution over 4 outcomes
a = tf.fill([4], 0.25)
# Per-outcome terms p(i) * log2(p(i)); each is -0.5 here
print(a * tf.math.log(a) / tf.math.log(2.))
# Entropy = -sum of those terms = log2(4) = 2 bits
entropy = -tf.reduce_sum(a * tf.math.log(a) / tf.math.log(2.))
print(entropy)

# A more peaked distribution carries lower entropy
a = tf.constant([0.1, 0.1, 0.1, 0.7])
entropy = -tf.reduce_sum(a * tf.math.log(a) / tf.math.log(2.))
print(entropy)  # ≈ 1.357 bits

# Nearly deterministic: entropy approaches 0
a = tf.constant([0.01, 0.01, 0.01, 0.97])
entropy = -tf.reduce_sum(a * tf.math.log(a) / tf.math.log(2.))
print(entropy)  # ≈ 0.242 bits
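
One practical caveat not raised above: if any probability is exactly 0, p · log p evaluates to 0 · (−inf) = NaN. A possible safer variant, sketched below, uses tf.math.xlogy, which returns 0 wherever its first argument is 0; the helper name entropy_bits is mine:

import tensorflow as tf

def entropy_bits(p):
    # tf.math.xlogy(x, y) computes x * log(y), defined as 0 where x == 0
    return -tf.reduce_sum(tf.math.xlogy(p, p)) / tf.math.log(2.)

print(entropy_bits(tf.constant([0.5, 0.5, 0.0, 0.0])))  # 1.0 bit, no NaN
print(entropy_bits(tf.fill([4], 0.25)))                 # 2.0 bits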

Cross entropy measures the difference between two probability distributions p and q, computed as:

H(p, q) = −Σₓ p(x) log q(x)

It can also be decomposed as:

H(p, q) = H(p) + D_KL(p‖q)

where D_KL(p‖q) is the KL divergence, a non-symmetric measure of how far q is from p. When p = q, the divergence is 0 and H(p, q) = H(p).

When p is a one-hot label, H(p) = −1 · log 1 = 0, so the cross entropy reduces to the KL divergence. For example, with label p = [0, 1, 0] and predictions q = [q₀, q₁, q₂]:

H([0, 1, 0], [q₀, q₁, q₂]) = 0 + D_KL(p‖q) = −log q₁

Minimizing this loss therefore means maximizing q₁, the predicted probability of the true class.

# One-hot label: class 1 is correct, so the loss is -log(q1)
loss1 = tf.losses.categorical_crossentropy([0, 1, 0, 0], [0.25, 0.25, 0.25, 0.25])
loss2 = tf.losses.categorical_crossentropy([0, 1, 0, 0], [0.1, 0.1, 0.7, 0.1])
loss3 = tf.losses.categorical_crossentropy([0, 1, 0, 0], [0.01, 0.97, 0.01, 0.01])
print(loss1)  # -ln(0.25) ≈ 1.386
print(loss2)  # -ln(0.1)  ≈ 2.303, confident but wrong -> large loss
print(loss3)  # -ln(0.97) ≈ 0.030, confident and right -> small loss
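
In practice the network usually outputs raw logits rather than probabilities; passing from_logits=True lets TensorFlow fuse the softmax and the log into one numerically stable step. A brief sketch, where the logit values are made up for illustration:

import tensorflow as tf

logits = tf.constant([[2.0, 5.0, 1.0, 0.5]])  # assumed raw network output
label = tf.constant([[0., 1., 0., 0.]])

# Stable path: softmax and cross entropy computed together internally
loss = tf.losses.categorical_crossentropy(label, logits, from_logits=True)
print(loss)

# Equivalent but less stable: apply softmax first, then take the log
probs = tf.nn.softmax(logits)
print(tf.losses.categorical_crossentropy(label, probs))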
