TensorFlow Notes 8: RNN and LSTM Source Code, Training Code Inputs/Outputs, and Dimension Analysis


Official TensorFlow documentation: https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell

TensorFlow version: 1.10

If you find any mistakes, please point them out; discussion is welcome.

Meaning of the parameters used in the notes below:

  1. batch_size: number of sentences processed in one batch;
  2. num_step: sentence length, i.e. how many time steps the cell is unrolled;
  3. input_size / embedding_size: dimension of the input at each time step (the word-vector dimension in the first layer);
  4. Hidden_size (num_units): dimension of the hidden state and of the cell output.

Computation flow of a single RNN time step in TensorFlow:
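As a rough NumPy sketch of this flow (made-up sizes, for illustration only; this is not the actual TensorFlow implementation, which is analyzed further down):

import numpy as np

batch_size, input_size, hidden_size = 2, 4, 3  # hypothetical sizes

x_t = np.random.randn(batch_size, input_size)               # input at time t:          [batch_size, input_size]
h_prev = np.zeros((batch_size, hidden_size))                # H(t-1), all zeros at t=0: [batch_size, Hidden_size]
W = np.random.randn(input_size + hidden_size, hidden_size)  # [input_size + Hidden_size, Hidden_size]
b = np.zeros(hidden_size)                                   # [Hidden_size]

# Ht = tanh([Xt, Ht-1] * W + B)  ->  [batch_size, Hidden_size]
h_t = np.tanh(np.concatenate([x_t, h_prev], axis=1) @ W + b)
print(h_t.shape)  # (2, 3)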

 

Computation flow of a single LSTM time step in TensorFlow:
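A matching NumPy sketch of one LSTM time step (hypothetical sizes, forget_bias = 1.0 as in BasicLSTMCell); it mirrors the gate computation analyzed in the source code below:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

batch_size, input_size, hidden_size = 2, 4, 3  # hypothetical sizes
forget_bias = 1.0

x_t = np.random.randn(batch_size, input_size)   # Xt
h_prev = np.zeros((batch_size, hidden_size))    # H(t-1)
c_prev = np.zeros((batch_size, hidden_size))    # C(t-1)
W = np.random.randn(input_size + hidden_size, 4 * hidden_size)  # all four gates packed into one W
b = np.zeros(4 * hidden_size)

gates = np.concatenate([x_t, h_prev], axis=1) @ W + b  # [batch_size, 4 * Hidden_size]
i, j, f, o = np.split(gates, 4, axis=1)                # each [batch_size, Hidden_size]

c_t = c_prev * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)  # new cell state
h_t = sigmoid(o) * np.tanh(c_t)                                    # new hidden state
print(c_t.shape, h_t.shape)  # (2, 3) (2, 3)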

 

Note: in the computation above, [H, X] * W and B have different shapes. How can they be added? The explanation is as follows:

  1. In the TensorFlow code, nn_ops.bias_add(gate_inputs, self._bias) is used; this function adds B to the output of every example in the batch.
  2. That is why tensors with different shapes, [batch_size, Hidden_size] and [Hidden_size], can be added; see the small nn_ops.bias_add demo below.
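A minimal demo of this broadcasting behaviour (TF 1.x style; the tensor values and shapes here are made up for illustration):

import tensorflow as tf

gate_inputs = tf.ones([2, 3])               # [batch_size, Hidden_size]
bias = tf.constant([1.0, 2.0, 3.0])         # [Hidden_size]
result = tf.nn.bias_add(gate_inputs, bias)  # B is added to every row of the batch

with tf.Session() as sess:
    print(sess.run(result))  # every row is [2. 3. 4.]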

TensorFlow code analysis follows (the source code below is from TensorFlow 1.9).

Note: the code below computes one batch for a single time step. To cover all time steps, the call is executed in a loop num_step (sentence length) times; TensorFlow already wraps this loop for us, so we do not have to write it ourselves (a usage sketch follows).
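As a hedged sketch of how that wrapping is typically used (the sizes below are hypothetical), tf.nn.dynamic_rnn runs the cell's call() once per time step internally:

import tensorflow as tf

batch_size, num_step, embedding_size, hidden_size = 32, 20, 128, 256  # hypothetical sizes

inputs = tf.placeholder(tf.float32, [batch_size, num_step, embedding_size])
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=hidden_size)
initial_state = cell.zero_state(batch_size, tf.float32)  # c and h start as zeros

# outputs:     [batch_size, num_step, hidden_size], the new_h of every time step
# final_state: LSTMStateTuple(c, h) of the last time step, each [batch_size, hidden_size]
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)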

RNN key code:

@tf_export("nn.rnn_cell.BasicRNNCell")
class BasicRNNCell(LayerRNNCell):
  """The most basic RNN cell.
  Args:
    num_units: int, The number of units in the RNN cell.
    activation: Nonlinearity to use.  Default: `tanh`.
    reuse: (optional) Python boolean describing whether to reuse variables
     in an existing scope.  If not `True`, and the existing scope already has
     the given variables, an error is raised.
    name: String, the name of the layer. Layers with the same name will
      share weights, but to avoid mistakes we require reuse=True in such
      cases.
    dtype: Default dtype of the layer (default of `None` means use the type
      of the first input). Required when `build` is called before `call`.
  """

  def __init__(self,
               num_units,
               activation=None,
               reuse=None,
               name=None,
               dtype=None):
    super(BasicRNNCell, self).__init__(_reuse=reuse, name=name, dtype=dtype)

    # Inputs must be 2-dimensional.
    self.input_spec = base_layer.InputSpec(ndim=2)

    self._num_units = num_units
    self._activation = activation or math_ops.tanh

  @property
  def state_size(self):
    return self._num_units

  @property
  def output_size(self):
    return self._num_units

  def build(self, inputs_shape):
    if inputs_shape[1].value is None:
      raise ValueError("Expected inputs.shape[-1] to be known, saw shape: %s"
                       % inputs_shape)

    input_depth = inputs_shape[1].value

    # Create W and B with shapes:
    # W: [input_size + Hidden_size, Hidden_size]
    # B: [Hidden_size]
    self._kernel = self.add_variable(
        _WEIGHTS_VARIABLE_NAME,
        shape=[input_depth + self._num_units, self._num_units])
    self._bias = self.add_variable(
        _BIAS_VARIABLE_NAME,
        shape=[self._num_units],
        initializer=init_ops.zeros_initializer(dtype=self.dtype))

    self.built = True

  # This function is called num_step (sentence length) times in a loop;
  # once the loop finishes, the whole layer has been computed.
  def call(self, inputs, state):
    """Most basic RNN: output = new_state = act(W * input + U * state + B)."""
    # output = Ht = tanh([Xt, Ht-1] * W + B)
    # At time step 0 the incoming state (the "previous" output H0) is all zeros.
    # inputs shape: [batch_size, emb_size]
    # state shape:  [batch_size, Hidden_size]
    # math_ops.matmul: matrix multiplication
    # array_ops.concat: concatenates the two matrices into [Xt, Ht-1],
    # whose shape is [batch_size, input_size + Hidden_size]

    # This computes [input, state] * [W, U] == [Xt, Ht-1] * W,
    # giving shape [batch_size, Hidden_size]
    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, state], 1), self._kernel)
    # B has shape [Hidden_size]; [Xt, Ht-1] * W has shape [batch_size, Hidden_size].
    # nn_ops.bias_add adds B to every row of the batch, so after this step
    # Ht = tanh([Xt, Ht-1] * W + B) still has shape [batch_size, Hidden_size].
    # This Ht is fed both to the next time step and to the next layer.
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)
    output = self._activation(gate_inputs)
    # The returned tensors have shape [batch_size, Hidden_size]:
    # one output is Ht for the next time step, the other is Ht for the next layer.
    return output, output
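To make the shapes above concrete, here is a minimal single-step usage sketch of this cell (sizes are hypothetical):

import tensorflow as tf

batch_size, embedding_size, hidden_size = 32, 128, 256  # hypothetical sizes

x_t = tf.placeholder(tf.float32, [batch_size, embedding_size])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=hidden_size)
state = cell.zero_state(batch_size, tf.float32)  # H0, all zeros: [batch_size, hidden_size]

# Both returned tensors have shape [batch_size, hidden_size];
# for BasicRNNCell they are the same tensor.
output, new_state = cell(x_t, state)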
LSTM key code:

@tf_export("nn.rnn_cell.BasicLSTMCell")
class BasicLSTMCell(LayerRNNCell):
  """Basic LSTM recurrent network cell.
  The implementation is based on: http://arxiv.org/abs/1409.2329.
  We add forget_bias (default: 1) to the biases of the forget gate in order to
  reduce the scale of forgetting in the beginning of the training.
  It does not allow cell clipping, a projection layer, and does not
  use peep-hole connections: it is the basic baseline.
  For advanced models, please use the full @{tf.nn.rnn_cell.LSTMCell}
  that follows.
  """

  def __init__(self,
               num_units,
               forget_bias=1.0,
               state_is_tuple=True,
               activation=None,
               reuse=None,
               name=None,
               dtype=None):
    """Initialize the basic LSTM cell.
    Args:
      num_units: int, The number of units in the LSTM cell.
      forget_bias: float, The bias added to forget gates (see above).
        Must set to `0.0` manually when restoring from CudnnLSTM-trained
        checkpoints.
      state_is_tuple: If True, accepted and returned states are 2-tuples of
        the `c_state` and `m_state`.  If False, they are concatenated
        along the column axis.  The latter behavior will soon be deprecated.
      activation: Activation function of the inner states.  Default: `tanh`.
      reuse: (optional) Python boolean describing whether to reuse variables
        in an existing scope.  If not `True`, and the existing scope already has
        the given variables, an error is raised.
      name: String, the name of the layer. Layers with the same name will
        share weights, but to avoid mistakes we require reuse=True in such
        cases.
      dtype: Default dtype of the layer (default of `None` means use the type
        of the first input). Required when `build` is called before `call`.
      When restoring from CudnnLSTM-trained checkpoints, must use
      `CudnnCompatibleLSTMCell` instead.
    """
    super(BasicLSTMCell, self).__init__(_reuse=reuse, name=name, dtype=dtype)
    if not state_is_tuple:
      logging.warn("%s: Using a concatenated state is slower and will soon be "
                   "deprecated.  Use state_is_tuple=True.", self)

    # Inputs must be 2-dimensional.
    self.input_spec = base_layer.InputSpec(ndim=2)

    self._num_units = num_units
    self._forget_bias = forget_bias
    self._state_is_tuple = state_is_tuple
    self._activation = activation or math_ops.tanh

  @property
  def state_size(self):
    # Size of the hidden state:
    return (LSTMStateTuple(self._num_units, self._num_units)
            if self._state_is_tuple else 2 * self._num_units)

  @property
  def output_size(self):
    # Size of the output: Hidden_size
    return self._num_units

  def build(self, inputs_shape):
    if inputs_shape[1].value is None:
      raise ValueError("Expected inputs.shape[-1] to be known, saw shape: %s"
                       % inputs_shape)

    # inputs has shape [batch_size, input_size].
    # For the first layer input_size equals embedding_size (the word-vector
    # dimension), so input_depth here is simply input_size.
    input_depth = inputs_shape[1].value
    # h_depth is Hidden_size, the dimension of the hidden state.
    h_depth = self._num_units

    # self._kernel == W, with shape [input_size + Hidden_size, 4 * Hidden_size].
    # The four weight/bias blocks are packed together so that i, j, f, o
    # (ft, it, ct', ot in the diagram) are computed with a single matmul.
    self._kernel = self.add_variable(
        _WEIGHTS_VARIABLE_NAME,
        shape=[input_depth + h_depth, 4 * self._num_units])
    # B has shape [4 * Hidden_size].
    self._bias = self.add_variable(
        _BIAS_VARIABLE_NAME,
        shape=[4 * self._num_units],
        initializer=init_ops.zeros_initializer(dtype=self.dtype))

    self.built = True

  def call(self, inputs, state):
    """Long short-term memory cell (LSTM).
    Args:
      inputs: `2-D` tensor with shape `[batch_size, input_size]`.
      state: An `LSTMStateTuple` of state tensors, each shaped
        `[batch_size, num_units]`, if `state_is_tuple` has been set to
        `True`.  Otherwise, a `Tensor` shaped
        `[batch_size, 2 * num_units]`.
    Returns:
      A pair containing the new hidden state, and the new state (either a
        `LSTMStateTuple` or a concatenated state, depending on
        `state_is_tuple`).
    """
    sigmoid = math_ops.sigmoid
    one = constant_op.constant(1, dtype=dtypes.int32)
    # Parameters of gates are concatenated into one multiply for efficiency.
    # At time step 0 of every layer, c and h are initialized to all zeros.
    if self._state_is_tuple:
      c, h = state
    else:
      c, h = array_ops.split(value=state, num_or_size_splits=2, axis=one)

    # Concatenate the current input Xt with the previous output Ht-1.
    # inputs shape: [batch_size, input_size]; in the first layer input_size is embedding_size.
    # The concatenation has shape [batch_size, input_size + Hidden_size] and
    # W has shape [input_size + Hidden_size, 4 * Hidden_size],
    # so the matmul result has shape [batch_size, 4 * Hidden_size].
    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, h], 1), self._kernel)
    # B has shape [4 * Hidden_size]; [Xt, Ht-1] * W has shape [batch_size, 4 * Hidden_size].
    # nn_ops.bias_add adds B to every row of the batch, so after this step we
    # have the packed i, j, f, o values, [Xt, Ht-1] * W + B, still with shape
    # [batch_size, 4 * Hidden_size].
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)

    # i = input_gate, j = new_input, f = forget_gate, o = output_gate
    # Split the result of the matmul into four parts, the values of i, j, f, o,
    # each with shape [batch_size, Hidden_size].
    i, j, f, o = array_ops.split(
        value=gate_inputs, num_or_size_splits=4, axis=one)

    forget_bias_tensor = constant_op.constant(self._forget_bias, dtype=f.dtype)

    # Note that using `add` and `multiply` instead of `+` and `*` gives a
    # performance improvement. So using those at the cost of readability.
    add = math_ops.add
    multiply = math_ops.multiply

    # forget_bias is added to the forget gate before deciding what to forget.
    # All operands have shape [batch_size, Hidden_size] and the multiplications
    # are element-wise, so the shape stays [batch_size, Hidden_size]:
    # new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * tanh(j)
    new_c = add(multiply(c, sigmoid(add(f, forget_bias_tensor))),
                multiply(sigmoid(i), self._activation(j)))
    # Element-wise product of two [batch_size, Hidden_size] tensors, so the
    # shape is again unchanged:
    # new_h = sigmoid(o) * tanh(new_c)
    new_h = multiply(self._activation(new_c), sigmoid(o))

    # Both new_c and new_h have shape [batch_size, Hidden_size] (their values differ).
    if self._state_is_tuple:
      new_state = LSTMStateTuple(new_c, new_h)
    else:
      new_state = array_ops.concat([new_c, new_h], 1)
    # new_h is the hidden output of this time step; new_state carries both H and C.
    # Calling this function num_step times (the maximum sequence length) computes the whole layer.
    # new_c and new_h feed the next time step: new_h is concatenated with Xt+1
    # into a tensor of shape [batch_size, input_size + Hidden_size].
    # If there is another layer above, this new_h becomes the Xt of that layer.
    return new_h, new_state
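And the matching single-step usage sketch for BasicLSTMCell (hypothetical sizes again), showing the returned new_h and LSTMStateTuple:

import tensorflow as tf

batch_size, embedding_size, hidden_size = 32, 128, 256  # hypothetical sizes

x_t = tf.placeholder(tf.float32, [batch_size, embedding_size])
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=hidden_size)
state = cell.zero_state(batch_size, tf.float32)  # LSTMStateTuple(c=zeros, h=zeros)

new_h, new_state = cell(x_t, state)
# new_h:       [batch_size, hidden_size]
# new_state.c: [batch_size, hidden_size]
# new_state.h: [batch_size, hidden_size] (the same tensor as new_h)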
