import tensorflow as tf
weights = tf.constant([[1.0, -2.0], [-3.0, 4.0]])

>>> sess.run(tf.contrib.layers.l1_regularizer(0.5)(weights))
5.0
>>> sess.run(tf.keras.regularizers.l1(0.5)(weights))
5.0
>>> sess.run(tf.keras.regularizers.l1()(weights))
0.099999994
>>> sess.run(tf.keras.regularizers.l1(1)(weights))
10.0
>>> sess.run(tf.nn.l2_loss(weights))
15.0
>>> sess.run(tf.keras.regularizers.l2(1)(weights))
30.0
>>> sess.run(tf.keras.regularizers.l2(0.5)(weights))
15.0
>>> sess.run(tf.contrib.layers.l2_regularizer(0.5)(weights))
7.5
>>> sess.run(tf.contrib.layers.l2_regularizer(1.0)(weights))
15.0
In TensorFlow, tf.nn provides only tf.nn.l2_loss and has no l1_loss. Searching around, I found tf.contrib.layers.l1_regularizer(), but tf.contrib has been deprecated in newer versions. tf.keras.regularizers also offers l1 and l2 regularizers, but its l2 behaves slightly differently: as the results above show, with the same scale of 1 it returns twice the value. (The 0.099999994 result comes from the Keras default scale of 0.01, i.e. 0.01 * 10 up to float rounding.) Checking the source code, both tf.nn.l2_loss and tf.contrib.layers.l2_regularizer divide the sum of squares by 2, which is why their values are half as large.
>>> sess.run(tf.nn.l2_loss(weights))
15.0
>>> sess.run(tf.keras.regularizers.l2(1)(weights))
30.0
>>> sess.run(tf.contrib.layers.l2_regularizer(1.0)(weights))
15.0
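To make the factor of 2 concrete, here is a minimal sketch (my own addition, in the same TF 1.x session style) that computes the L1 and L2 sums by hand; the manual reductions are assumptions standing in for the regularizers' internals:

import tensorflow as tf

weights = tf.constant([[1.0, -2.0], [-3.0, 4.0]])
l1_sum = tf.reduce_sum(tf.abs(weights))     # 1+2+3+4 = 10.0
l2_sum = tf.reduce_sum(tf.square(weights))  # 1+4+9+16 = 30.0

with tf.Session() as sess:
    print(sess.run(l1_sum))      # 10.0, matches tf.keras.regularizers.l1(1)
    print(sess.run(l2_sum))      # 30.0, matches tf.keras.regularizers.l2(1)
    print(sess.run(l2_sum / 2))  # 15.0, matches tf.nn.l2_loss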
Setting scale to 0.5 yields the same value, so from now on this form can be used to compute the L2 (and L1) penalty in a loss function; a sketch of that usage follows the snippet below.
>>> sess.run(tf.keras.regularizers.l2(0.5)(weights))
15.0
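As a hedged illustration of that usage (the data-loss term and variable names below are my own placeholders, not from the original post):

import tensorflow as tf

w = tf.Variable([[1.0, -2.0], [-3.0, 4.0]])
data_loss = tf.reduce_mean(tf.square(w))  # placeholder for a real model loss

# tf.keras.regularizers.l2(0.5) matches tf.nn.l2_loss, as verified above
reg_loss = tf.keras.regularizers.l2(0.5)(w)
total_loss = data_loss + reg_loss

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([data_loss, reg_loss, total_loss]))  # reg_loss is 15.0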
Reference: day-17 L1和L2正則化的tensorflow示例 - 派森蛙 - 博客園
https://www.cnblogs.com/python-frog/p/9416970.html
'''
Input:
x = [[1.0, 2.0]]
w = [[1.0, 2.0], [3.0, 4.0]]
Output:
y = x*w = [[7.0, 10.0]]
l1 = (1.0+2.0+3.0+4.0)*0.5 = 5.0
l2 = ((1.0**2 + 2.0**2 + 3.0**2 + 4.0**2) / 2)*0.5 = 7.5
'''
import tensorflow as tf
from tensorflow.contrib.layers import l1_regularizer, l2_regularizer

w = tf.constant([[1.0, 2.0], [3.0, 4.0]])
x = tf.placeholder(dtype=tf.float32, shape=[None, 2])
y = tf.matmul(x, w)

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
    print("=========================")
    print(sess.run(l1_regularizer(scale=0.5)(w)))  # (1.0+2.0+3.0+4.0)*0.5 = 5.0
    print("=========================")
    print(sess.run(l2_regularizer(scale=0.5)(w)))  # ((1.0**2 + 2.0**2 + 3.0**2 + 4.0**2) / 2)*0.5 = 7.5
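Since tf.contrib is gone in TF 2.x, here is a hedged sketch (my own addition, not from the original post) of the same check using only tf.keras.regularizers under eager execution; note the scale of 0.25 compensates for the division by 2 that the contrib regularizer performed:

import tensorflow as tf  # TF 2.x, eager execution assumed

w = tf.constant([[1.0, 2.0], [3.0, 4.0]])
x = tf.constant([[1.0, 2.0]])

print(tf.matmul(x, w).numpy())                   # [[ 7. 10.]]
print(tf.keras.regularizers.l1(0.5)(w).numpy())  # (1+2+3+4)*0.5 = 5.0
# l2_regularizer(0.5) divided the sum of squares by 2, so halve the scale here:
print(tf.keras.regularizers.l2(0.25)(w).numpy()) # 30*0.25 = 7.5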