About Keras "Layers" (Layer objects)
All Keras layer objects have the following methods:
- layer.get_weights(): returns the layer's weights (as a list of numpy arrays)
- layer.set_weights(weights): loads weights into the layer from a list of numpy arrays; the array shapes must match those returned by layer.get_weights()
- layer.get_config(): returns a dict with the layer's configuration; the layer can also be reconstructed from this configuration
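A minimal sketch of these three accessors (assuming the standalone `keras` package is installed; on newer setups the same API is available under `tensorflow.keras`; the layer sizes here are arbitrary):

```python
import numpy as np
from keras.layers import Dense, Input
from keras.models import Model

inp = Input(shape=(4,))
layer = Dense(3)
out = layer(inp)           # calling the layer builds its weights
model = Model(inp, out)

weights = layer.get_weights()      # [kernel of shape (4, 3), bias of shape (3,)]
# set_weights requires the same shapes as get_weights returns:
layer.set_weights([np.zeros((4, 3)), np.zeros((3,))])
config = layer.get_config()        # dict, e.g. config['units'] == 3
rebuilt = Dense.from_config(config)  # reconstruct the layer from its config
```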
Input(shape=None, batch_shape=None, name=None, dtype=K.floatx(), sparse=False, tensor=None)
Input(): used to instantiate a Keras tensor.
A Keras tensor is a tensor object from the underlying backend (Theano or TensorFlow), augmented with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.
The added Keras attributes are: 1) ._keras_shape: an integer shape tuple propagated via Keras-side shape inference; 2) ._keras_history: the last layer applied to the tensor; the entire layer graph can be retrieved recursively from that layer.
# Arguments:
shape: a shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
batch_shape: a shape tuple (integers), including the batch size. For instance, batch_shape=(10, 32) indicates that the expected input will be batches of 10 vectors of 32 dimensions.
name: an optional name string for the layer. It must be unique within a model (the same name cannot be reused twice). If not specified, it will be autogenerated.
dtype: the expected data type of the input.
sparse: a boolean specifying whether the placeholder to be created is sparse.
tensor: an optional existing tensor to wrap into the Input layer. If set, the layer will not create a placeholder tensor.
# Returns
A tensor
# Example
from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
Some commonly used units in Keras:
import keras.layers as KL
2D convolution:
KL.Conv2D()
KL.Activation()
Definition: Activation(self, activation, **kwargs)
Example: x = KL.Activation('relu')(x)
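A small runnable sketch combining the two units above (assuming the standalone `keras` package; input shape and filter count are arbitrary):

```python
import keras.layers as KL
from keras.models import Model

inp = KL.Input(shape=(28, 28, 1))
x = KL.Conv2D(8, (3, 3), padding='same')(inp)  # 8 filters, 'same' keeps spatial size
x = KL.Activation('relu')(x)
model = Model(inp, x)
# output shape: (None, 28, 28, 8)
```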
KL.Add()
Example:
x = KL.Add()([shortcut, x])
Layer that adds a list of inputs.
It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
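A minimal sketch of the residual-style pattern above (note that Add() takes a single list argument, not two positional tensors; the sizes are arbitrary):

```python
import keras.layers as KL
from keras.models import Model

inp = KL.Input(shape=(16,))
shortcut = inp
x = KL.Dense(16)(inp)
x = KL.Add()([shortcut, x])  # both tensors must have the same shape
model = Model(inp, x)
```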
Definition: KL.ZeroPadding2D(padding=(1, 1), data_format=None, **kwargs)
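A quick sketch of ZeroPadding2D (assuming the default channels-last data format; padding=(1, 1) adds one row/column of zeros on each side of the height and width axes):

```python
import keras.layers as KL
from keras.models import Model

inp = KL.Input(shape=(5, 5, 3))
out = KL.ZeroPadding2D(padding=(1, 1))(inp)  # (5, 5) -> (7, 7), channels unchanged
model = Model(inp, out)
# output shape: (None, 7, 7, 3)
```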
class BatchNorm(KL.BatchNormalization):
    """Extends the Keras BatchNormalization class to allow a central place
    to make changes if needed.

    Batch normalization has a negative effect on training if batches are small
    so this layer is often frozen (via setting in Config class) and functions
    as linear layer.
    """
    def call(self, inputs, training=None):
        """
        Note about training values:
            None: Train BN layers. This is the normal mode
            False: Freeze BN layers. Good when batch size is small
            True: (don't use). Set layer in training mode even when making inferences
        """
        return super(self.__class__, self).call(inputs, training=training)
Code notes:
On self.__class__ in super(self.__class__, self):
self, self.__class__, super
- self: the instance on which the current method was called
- self.__class__: the class object of that instance
- super(self.__class__, self): a proxy for the parent class of that instance's class, used to call the parent's implementation of a method
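A pure-Python sketch of this lookup pattern (hypothetical class names, chosen only to mirror the BatchNorm example; note that super(self.__class__, self) can recurse infinitely if the class is subclassed further without overriding the method, which is why bare super() is preferred in modern Python):

```python
class Base:
    def call(self, x):
        return x * 2

class Child(Base):
    def call(self, x):
        # self.__class__ is the class of the calling instance (Child here);
        # super(self.__class__, self) looks up 'call' on its parent (Base).
        return super(self.__class__, self).call(x) + 1

c = Child()
print(c.call(3))  # Base.call(3) -> 6, then + 1 -> 7
```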
