https://blog.csdn.net/u011534057/article/details/51673458
https://blog.csdn.net/qq_34784753/article/details/78668884
https://blog.csdn.net/kangroger/article/details/61414426
https://www.cnblogs.com/lindaxin/p/8027283.html
Weight Initialization Methods in Neural Networks
"Understanding the difficulty of training deep feedforward neural networks" (Glorot & Bengio, 2010)
Unfortunately, it was only in the last couple of years that this method gradually gained wider adoption and recognition.
For information to flow well through the network, the variance of each layer's output should be kept as equal as possible.
With this goal in mind, let us derive the condition that each layer's weights should satisfy.
The paper first assumes a linear activation function whose derivative at 0 is 1, i.e. f'(0) = 1, so that around the origin f(x) ≈ x.
Let us first analyze a single (convolutional) layer:
z = sum_{i=1..n_in} w_i * x_i + b
where n_in denotes the number of inputs.
From probability theory, for independent w_i and x_i we have the following variance formula:
Var(w_i * x_i) = E[w_i]^2 * Var(x_i) + E[x_i]^2 * Var(w_i) + Var(w_i) * Var(x_i)
In particular, when we assume that both the inputs and the weights have zero mean (with BN around nowadays, this is fairly easy to satisfy), the formula simplifies to:
Var(w_i * x_i) = Var(w_i) * Var(x_i)
Further assuming that the x_i and the w_i are independent and identically distributed, we get:
Var(z) = n_in * Var(w) * Var(x)
So, to keep the output variance equal to the input variance, we need:
n_in * Var(w) = 1, i.e. Var(w) = 1 / n_in
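As a quick numerical sanity check of this step, here is a minimal NumPy sketch (the layer width and sample count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 256        # number of inputs feeding the layer
batch = 100_000   # samples used to estimate the variance

x = rng.normal(0.0, 1.0, size=(batch, n_in))         # zero-mean inputs, Var(x) = 1
w = rng.normal(0.0, np.sqrt(1.0 / n_in), size=n_in)  # Var(w) = 1 / n_in
z = x @ w                                            # z = sum_i w_i * x_i

print(np.var(z))  # ~1.0: the output variance matches the input variance
```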
For a multi-layer network, the variance of layer L's output can be written in cumulative form:
Var(z^L) = Var(x) * prod_{l=1..L} n_in^(l) * Var(w^(l))
In particular, backpropagating the gradients has an analogous form, except that each factor contains the layer's output count n_out^(l) instead of its input count:
Var(dCost/dz^l) = Var(dCost/dz^L) * prod_{l'=l..L} n_out^(l') * Var(w^(l'))
In summary, to keep the per-layer variance consistent in both the forward and the backward pass, the weights should satisfy:
n_in * Var(W) = 1 and n_out * Var(W) = 1
In practice, however, the numbers of inputs and outputs are usually not equal, so as a compromise between the two conditions the weight variance should satisfy:
Var(W) = 2 / (n_in + n_out)
———————————————————————————————————————
Anyone who has studied probability knows that a uniform distribution on [a, b] has variance:
Var = (b - a)^2 / 12
Therefore, Xavier initialization is implemented with the following uniform distribution:
W ~ U[-sqrt(6 / (n_in + n_out)), sqrt(6 / (n_in + n_out))]
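Putting the pieces together, a minimal NumPy sketch of this initializer (the function name xavier_uniform and the shapes are my own choices, not from any framework):

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng=None):
    rng = rng or np.random.default_rng()
    # Var(W) = 2 / (n_in + n_out) and Var(U[-a, a]) = (2a)^2 / 12 = a^2 / 3,
    # so a = sqrt(6 / (n_in + n_out)).
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

W = xavier_uniform(784, 256)
print(W.var(), 2.0 / (784 + 256))  # the two numbers should be close
```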
—————————————————————————————————————————— 
Caffe's Xavier implementation offers three options:
(1) Default: the variance considers only the input count:
Var(W) = 1 / n_in, i.e. uniform samples from [-sqrt(3 / n_in), sqrt(3 / n_in)]
(2) FillerParameter_VarianceNorm_FAN_OUT: the variance considers only the output count:
Var(W) = 1 / n_out
(3) FillerParameter_VarianceNorm_AVERAGE: the variance considers both the input and output counts:
Var(W) = 2 / (n_in + n_out)
As for why only the input count is considered by default, I personally suspect it is because the forward propagation of information matters somewhat more.
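In Python, the three options boil down to choosing the n in Var(W) = 1/n. The sketch below mirrors that logic under the assumptions above; it is an illustration, not Caffe's actual C++ filler code:

```python
import numpy as np

def caffe_xavier(fan_in, fan_out, variance_norm='FAN_IN', rng=None):
    # Pick the n used in Var(W) = 1/n, mirroring Caffe's VarianceNorm choices.
    rng = rng or np.random.default_rng()
    if variance_norm == 'FAN_IN':      # default: input count only
        n = fan_in
    elif variance_norm == 'FAN_OUT':   # output count only
        n = fan_out
    else:                              # 'AVERAGE': both counts
        n = (fan_in + fan_out) / 2.0
    scale = np.sqrt(3.0 / n)           # U[-scale, scale] has variance 1/n
    return rng.uniform(-scale, scale, size=(fan_out, fan_in))
```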
———————————————————————————————————————————
TensorFlow API
https://www.tensorflow.org/api_docs/python/tf/glorot_uniform_initializer
tf.glorot_uniform_initializer
Aliases:
tf.glorot_uniform_initializer
tf.keras.initializers.glorot_uniform
tf.glorot_uniform_initializer(
seed=None,
dtype=tf.float32
)
Defined in tensorflow/python/ops/init_ops.py.
The Glorot uniform initializer, also called Xavier uniform initializer.
It draws samples from a uniform distribution within [-limit, limit], where limit is sqrt(6 / (fan_in + fan_out)), fan_in is the number of input units in the weight tensor, and fan_out is the number of output units in the weight tensor.
Reference: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
Args:
seed: A Python integer. Used to create random seeds. See tf.set_random_seed for behavior.
dtype: The data type. Only floating point types are supported.
Returns:
An initializer.
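A minimal usage example, assuming the TensorFlow 1.x variable API (the variable name and shape are arbitrary):

```python
import tensorflow as tf

init = tf.glorot_uniform_initializer(seed=42)
# For this shape, limit = sqrt(6 / (784 + 256)).
w = tf.get_variable('w', shape=[784, 256], initializer=init)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(w).var())  # close to 2 / (784 + 256)
```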
MXNet API
https://mxnet.apache.org/api/python/optimization/optimization.html#mxnet.initializer.Xavier
class mxnet.initializer.Xavier(rnd_type='uniform', factor_type='avg', magnitude=3)
Returns an initializer performing “Xavier” initialization for weights.
This initializer is designed to keep the scale of gradients roughly the same in all layers.
By default, rnd_type is 'uniform' and factor_type is 'avg': the initializer fills the weights with random numbers in the range [-c, c], where c = sqrt(3 / (0.5 * (n_in + n_out))). Here n_in is the number of neurons feeding into the weights, and n_out is the number of neurons the result is fed to.
If rnd_type is 'uniform' and factor_type is 'in', then c = sqrt(3 / n_in). Similarly, when factor_type is 'out', c = sqrt(3 / n_out).
If rnd_type is 'gaussian' and factor_type is 'avg', the initializer fills the weights with numbers drawn from a normal distribution with a standard deviation of sqrt(3 / (0.5 * (n_in + n_out))).
Parameters:
rnd_type (str, optional) – Random generator type, can be 'gaussian' or 'uniform'.
factor_type (str, optional) – Can be 'avg', 'in', or 'out'.
magnitude (float, optional) – Scale of random number.
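A minimal usage example with the Gluon API (assuming MXNet with Gluon; the layer sizes are arbitrary):

```python
import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(256, in_units=784)
# 'uniform' + 'avg' fills W in [-c, c] with c = sqrt(3 / (0.5 * (784 + 256))).
net.initialize(mx.init.Xavier(rnd_type='uniform', factor_type='avg', magnitude=3))

w = net.weight.data().asnumpy()
print(w.var())  # close to 2 / (784 + 256)
```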