1. tf.concat
tf.concat concatenates tensors along a specified dimension, leaving the other dimensions unchanged. Since version 1.0 the function is used as follows:
```python
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]

# concatenate along dimension 0
tf.concat([t1, t2], 0)  # ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

# concatenate along dimension 1
tf.concat([t1, t2], 1)  # ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
```
For reference, when merging the outputs of parallel branches along the depth direction (as in inception_v3), the concatenation axis is 3, since the layout is [batch, height, width, depth].
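As a minimal runnable sketch (the branch names and shapes below are made up for illustration, not taken from inception_v3 itself), depth-wise concatenation of two feature maps looks like this:

```python
import tensorflow as tf

# Hypothetical feature maps in [batch, height, width, depth] layout.
branch_a = tf.zeros([8, 35, 35, 64])
branch_b = tf.zeros([8, 35, 35, 96])

# Concatenate along the depth axis (axis 3); all other dimensions must match.
merged = tf.concat([branch_a, branch_b], 3)
print(merged.shape)  # (8, 35, 35, 160)
```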
2. tf.stack
Usage: stack(values, axis=0, name="stack"):
"""Stacks a list of rank-R tensors into one rank-(R+1) tensor."""
```python
x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])

tf.stack([x, y, z])          # ==> [[1, 4], [2, 5], [3, 6]]
tf.stack([x, y, z], axis=0)  # ==> [[1, 4], [2, 5], [3, 6]]
tf.stack([x, y, z], axis=1)  # ==> [[1, 2, 3], [4, 5, 6]]
```
tf.stack turns a list of rank-R tensors into a single rank-(R+1) tensor. Note that tf.pack has been renamed to tf.stack.

3. tf.squeeze
Reduces the rank of a tensor by removing dimensions; only dimensions of size 1 can be removed.
If no dimensions are specified, every dimension of size 1 is removed.
```python
import tensorflow as tf

arr = tf.Variable(tf.truncated_normal([3, 4, 1, 6, 1], stddev=0.1))
sess = tf.Session()
sess.run(tf.global_variables_initializer())

sess.run(arr).shape                      # (3, 4, 1, 6, 1)
sess.run(tf.squeeze(arr, [2])).shape     # (3, 4, 6, 1)
sess.run(tf.squeeze(arr, [2, 4])).shape  # (3, 4, 6)
sess.run(tf.squeeze(arr)).shape          # (3, 4, 6)
```
4. tf.split
Its behaviour depends on whether the second argument is a scalar or a vector: a scalar splits the tensor into that many equal parts along axis, while a vector splits it into pieces whose sizes along axis are given by the vector's elements.
def split(value, num_or_size_splits, axis=0, num=None, name="split"):
"""Splits a tensor into sub tensors.
If `num_or_size_splits` is an integer type, `num_split`, then splits `value`
along dimension `axis` into `num_split` smaller tensors.
Requires that `num_split` evenly divides `value.shape[axis]`.
If `num_or_size_splits` is not an integer type, it is presumed to be a Tensor
`size_splits`, then splits `value` into `len(size_splits)` pieces. The shape
of the `i`-th piece has the same size as the `value` except along dimension
`axis` where the size is `size_splits[i]`.
For example:
```python
# 'value' is a tensor with shape [5, 30]
# Split 'value' into 3 tensors with sizes [4, 15, 11] along dimension 1
split0, split1, split2 = tf.split(value, [4, 15, 11], 1)
tf.shape(split0) # [5, 4]
tf.shape(split1) # [5, 15]
tf.shape(split2) # [5, 11]
# Split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(value, num_or_size_splits=3, axis=1)
tf.shape(split0) # [5, 10]
```
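The docstring example above is not runnable as-is because `value` is undefined; a minimal self-contained sketch of both calling styles (the variable names here are my own) could look like this:

```python
import tensorflow as tf

value = tf.zeros([5, 30])

# Scalar: split into 3 equal pieces of size 10 along axis 1.
a, b, c = tf.split(value, num_or_size_splits=3, axis=1)
print(a.shape, b.shape, c.shape)  # (5, 10) (5, 10) (5, 10)

# Vector: split into pieces of sizes 4, 15 and 11 along axis 1.
d, e, f = tf.split(value, [4, 15, 11], 1)
print(d.shape, e.shape, f.shape)  # (5, 4) (5, 15) (5, 11)
```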
5. Tensor slicing
tf.slice
Explanation: slice(input_, begin, size, name=None): extracts a slice from a tensor.
Suppose input is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]; then:
(1)tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
(2)tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3], [4, 4, 4]]]
(3)tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]], [[5, 5, 5]]]
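These three cases can be checked with a short session (a sketch using the same input as above):

```python
import tensorflow as tf

input = tf.constant([[[1, 1, 1], [2, 2, 2]],
                     [[3, 3, 3], [4, 4, 4]],
                     [[5, 5, 5], [6, 6, 6]]])

with tf.Session() as sess:
    # begin = [1, 0, 0], size = [1, 1, 3]: 1 block, 1 row, 3 columns.
    print(sess.run(tf.slice(input, [1, 0, 0], [1, 1, 3])))  # [[[3 3 3]]]
    print(sess.run(tf.slice(input, [1, 0, 0], [1, 2, 3])))  # [[[3 3 3] [4 4 4]]]
    print(sess.run(tf.slice(input, [1, 0, 0], [2, 1, 3])))  # [[[3 3 3]] [[5 5 5]]]
```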
tf.strided_slice
When reading the cifar10 example you will inevitably run into this function, for instance in a line like tf.cast(tf.strided_slice(record_bytes, [0], [label_bytes]), tf.int32). The official docstring is long and obscure, and the explanations found online are either poor or missing altogether. The prototype is:
def strided_slice(input_, begin, end, strides=None, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None):
"""Extracts a strided slice from a tensor."""
'input' = [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]
Laying the input out differently, it can be viewed as a 3-D tensor whose dimensions, from the outermost level inward, are dimensions 1, 2 and 3:
[[[1, 1, 1], [2, 2, 2]],
 [[3, 3, 3], [4, 4, 4]],
 [[5, 5, 5], [6, 6, 6]]]
Take the call tf.strided_slice(input, [0,0,0], [2,2,2], [1,2,1]) as an example: start = [0,0,0], end = [2,2,2], stride = [1,2,1]. In each dimension it extracts the half-open range [start, end) with the given stride; note that end is exclusive.
Dimension 1: start = 0, end = 2, stride = 1, so rows 0 and 1 are taken, giving the intermediate result
[[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
Dimension 2: start = 0, end = 2, stride = 2, so only row 0 is taken, giving the intermediate result
[[[1, 1, 1]], [[3, 3, 3]]]
Dimension 3: start = 0, end = 2, stride = 1, so elements 0 and 1 are taken, which gives the final output:
[[[1, 1]], [[3, 3]]]
i.e. a tensor of shape (2, 1, 2).
Similar code:
```python
import tensorflow as tf

data = [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]

# Take index 0 along every dimension: begin = [0, 0, 0], end = [1, 1, 1].
x = tf.strided_slice(data, [0, 0, 0], [1, 1, 1])

with tf.Session() as sess:
    print(sess.run(x))  # [[[1]]]
```
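To check the walkthrough above, the same call with an explicit stride can be run as well (a small sketch on the same data):

```python
import tensorflow as tf

data = [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]

# start = [0, 0, 0], end = [2, 2, 2], stride = [1, 2, 1], as in the walkthrough.
y = tf.strided_slice(data, [0, 0, 0], [2, 2, 2], [1, 2, 1])

with tf.Session() as sess:
    print(sess.run(y))  # tensor of shape (2, 1, 2): [[[1 1]], [[3 3]]]
```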