1. For simple custom operations, the layers.core.Lambda layer is enough. This approach applies when you only need to apply a transformation to the data flowing through the layer, and the transformation itself has no learnable parameters.
# Slice the input, then apply embedding and average pooling to each slice
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Input, Lambda, Embedding, GlobalAveragePooling1D, concatenate
from keras.models import Model
from keras.utils import plot_model

def get_slice(x, index):
    # Take the index-th field from the input; result shape: (batch, field_lens)
    return x[:, index]

keep_num = 3
field_lens = 90

input_field = Input(shape=(keep_num, field_lens))
avg_pools = []
for n in range(keep_num):
    # Each slice has per-sample shape (field_lens,), so declare that as output_shape
    block = Lambda(get_slice, output_shape=(field_lens,), arguments={'index': n})(input_field)
    x_emb = Embedding(input_dim=100, output_dim=200, input_length=field_lens)(block)
    x_avg = GlobalAveragePooling1D()(x_emb)
    avg_pools.append(x_avg)
output = concatenate(avg_pools)
model = Model(input_field, output)

# Visualize the model graph (requires pydot/graphviz and an existing 'model/' directory)
plot_model(model, to_file='model/lambda.png', show_shapes=True)
plt.figure(figsize=(21, 12))
im = plt.imread('model/lambda.png')
plt.imshow(im)
Here, Lambda is used to define a layer that slices the input tensor; a quick sanity check follows below.
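As a quick check that the slicing and pooling behave as expected, you can feed the model above a dummy batch and inspect the output shape. This is a minimal sketch continuing the snippet above; the batch size of 4 and the random integer field values are illustrative assumptions, not part of the original example.

# Minimal sketch: verify the Lambda-based model with a dummy batch (assumed values).
# Integer ids must lie in [0, 100) to match the Embedding's input_dim=100.
dummy = np.random.randint(0, 100, size=(4, keep_num, field_lens))
out = model.predict(dummy)
# Each of the 3 slices is embedded to 200 dims and average-pooled,
# so the concatenated output should have shape (4, 3 * 200).
print(out.shape)  # (4, 600)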
2. For a custom layer with trainable weights, you have to implement it yourself: subclass Layer and implement build (create the weights), call (define the forward computation), and compute_output_shape.
from keras import backend as K
from keras.engine.topology import Layer

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
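As a minimal usage sketch (the 32-dimensional input and output_dim=16 below are illustrative assumptions, not from the original), the layer plugs into the functional API like any built-in layer, and the kernel created in build is registered as a trainable weight:

from keras.layers import Input
from keras.models import Model

# Minimal usage sketch for MyLayer; the dimensions are assumed for illustration.
inp = Input(shape=(32,))
out = MyLayer(output_dim=16)(inp)
m = Model(inp, out)
m.compile(optimizer='sgd', loss='mse')
m.summary()  # the kernel contributes 32 * 16 = 512 trainable parameters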
References:
Writing your own Keras layers (Keras official documentation; also available in the Chinese documentation)