Data Type Conversion (Casting)
Operation | Description |
---|---|
tf.string_to_number(string_tensor, out_type=None, name=None) | Converts a string tensor to numbers |
tf.to_double(x, name='ToDouble') | Casts to 64-bit floating point (float64) |
tf.to_float(x, name='ToFloat') | Casts to 32-bit floating point (float32) |
tf.to_int32(x, name='ToInt32') | Casts to 32-bit integer (int32) |
tf.to_int64(x, name='ToInt64') | Casts to 64-bit integer (int64) |
tf.cast(x, dtype, name=None) | Casts x (or x.values) to dtype. # tensor 'a' is [1.8, 2.2], dtype=tf.float32; tf.cast(a, tf.int32) ==> [1, 2]  # dtype=tf.int32 |
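A minimal runnable sketch of the casting ops above, assuming the pre-1.0 graph-mode TensorFlow API that this table documents (tf.Session, tf.to_float, tf.string_to_number):

```python
import tensorflow as tf

a = tf.constant([1.8, 2.2], dtype=tf.float32)
to_int = tf.cast(a, tf.int32)                            # ==> [1, 2]
to_f32 = tf.to_float(tf.constant([3, 4]))                # ==> [3.0, 4.0]
parsed = tf.string_to_number(tf.constant(["1.5", "2"]))  # ==> [1.5, 2.0] (out_type defaults to float32)

with tf.Session() as sess:
    print(sess.run([to_int, to_f32, parsed]))
```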
Shape Operations (Shapes and Shaping)
Operation | Description |
---|---|
tf.shape(input, name=None) | Returns the shape of the data. # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]; shape(t) ==> [2, 2, 3] |
tf.size(input, name=None) | Returns the number of elements. # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]; size(t) ==> 12 |
tf.rank(input, name=None) | Returns the rank of the tensor. Note: this is not the matrix rank; the rank of a tensor is the number of indices needed to uniquely identify each element, also called its "order", "degree", or "ndims". # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]; # shape of tensor 't' is [2, 2, 3]; rank(t) ==> 3 |
tf.reshape(tensor, shape, name=None) | Changes the shape of a tensor. # tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]; # tensor 't' has shape [9]; reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]. A -1 in shape means that dimension is inferred so the total number of elements is preserved; e.g. for a tensor 't' of shape [3, 2, 3], -1 is inferred to be 9: reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], [4, 4, 4, 5, 5, 5, 6, 6, 6]] |
tf.expand_dims(input, dim, name=None) | Inserts a dimension of size 1 into a tensor. The operation requires -1 - input.dims() <= dim <= input.dims(). # 't' is a tensor of shape [2]; shape(expand_dims(t, 0)) ==> [1, 2]; shape(expand_dims(t, 1)) ==> [2, 1]; shape(expand_dims(t, -1)) ==> [2, 1] |
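A short sketch exercising the shape ops above on the same [2, 2, 3] tensor used in the examples, again assuming the graph-mode API from this table:

```python
import tensorflow as tf

# 't' has shape [2, 2, 3], matching the table examples.
t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]]])

with tf.Session() as sess:
    print(sess.run(tf.shape(t)))                      # [2 2 3]
    print(sess.run(tf.size(t)))                       # 12
    print(sess.run(tf.rank(t)))                       # 3
    print(sess.run(tf.reshape(t, [3, -1])))           # -1 is inferred to be 4 -> shape [3, 4]
    print(sess.run(tf.shape(tf.expand_dims(t, 0))))   # [1 2 2 3]
```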
Slicing and Joining
Operation | Description |
---|---|
tf.slice(input_, begin, size, name=None) | Slices a tensor. If size[i] is -1, all remaining elements along dimension i are included, i.e. size[i] = input.dim_size(i) - begin[i]. The operation requires 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]. # 'input' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]; tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]; tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3], [4, 4, 4]]]; tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]], [[5, 5, 5]]] |
tf.split(split_dim, num_split, value, name='split') | Splits a tensor into num_split tensors along one dimension. # 'value' is a tensor with shape [5, 30]; # Split 'value' into 3 tensors along dimension 1: split0, split1, split2 = tf.split(1, 3, value); tf.shape(split0) ==> [5, 10] |
tf.concat(concat_dim, values, name='concat') | Concatenates tensors along one dimension. t1 = [[1, 2, 3], [4, 5, 6]]; t2 = [[7, 8, 9], [10, 11, 12]]; tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]; tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]. To concatenate along a new axis: tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors]), which is equivalent to tf.pack(tensors, axis=axis) |
tf.pack(values, axis=0, name='pack') | Packs a list of rank-R tensors into one rank-(R+1) tensor. # 'x' is [1, 4], 'y' is [2, 5], 'z' is [3, 6]; pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # pack along the first dimension; pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]. Equivalent to tf.pack([x, y, z]) = np.asarray([x, y, z]) |
tf.reverse(tensor, dims, name=None) | Reverses the tensor along the given dimensions. dims is a list of bools whose size equals rank(tensor). # tensor 't' is [[[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]]; # tensor 't' shape is [1, 2, 3, 4]; # 'dims' is [False, False, False, True]; reverse(t, dims) ==> [[[[ 3, 2, 1, 0], [ 7, 6, 5, 4], [11, 10, 9, 8]], [[15, 14, 13, 12], [19, 18, 17, 16], [23, 22, 21, 20]]]] |
tf.transpose(a, perm=None, name='transpose') | Permutes the dimensions of a tensor according to the list perm. If perm is not given, it defaults to (n-1...0). # 'x' is [[1 2 3], [4 5 6]]; tf.transpose(x) ==> [[1 4], [2 5], [3 6]]; # Equivalently tf.transpose(x, perm=[1, 0]) ==> [[1 4], [2 5], [3 6]] |
tf.gather(params, indices, validate_indices=None, name=None) | Gathers the slices of params indicated by indices |
tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None) | Returns a one-hot tensor. indices = [0, 2, -1, 1]; depth = 3; on_value = 5.0; off_value = 0.0; axis = -1. Then the output is [4 x 3]: output = [5.0 0.0 0.0] // one_hot(0), [0.0 0.0 5.0] // one_hot(2), [0.0 0.0 0.0] // one_hot(-1), [0.0 5.0 0.0] // one_hot(1) |
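A sketch combining the slicing, joining, and one-hot ops above. It assumes the old dimension-first argument order documented in this table (tf.concat(concat_dim, values), tf.split(split_dim, num_split, value)) and the tf.pack op; later TensorFlow releases reversed those argument orders and renamed tf.pack to tf.stack.

```python
import tensorflow as tf

t1 = tf.constant([[1, 2, 3], [4, 5, 6]])
t2 = tf.constant([[7, 8, 9], [10, 11, 12]])

joined = tf.concat(0, [t1, t2])              # shape [4, 3]
cols = tf.split(1, 3, joined)                # three tensors of shape [4, 1]
sliced = tf.slice(joined, [1, 0], [2, 3])    # ==> [[4, 5, 6], [7, 8, 9]]

x, y, z = tf.constant([1, 4]), tf.constant([2, 5]), tf.constant([3, 6])
packed = tf.pack([x, y, z])                  # ==> [[1, 4], [2, 5], [3, 6]]

onehot = tf.one_hot([0, 2, -1, 1], depth=3,
                    on_value=5.0, off_value=0.0)  # the [4 x 3] example above

with tf.Session() as sess:
    print(sess.run([joined, sliced, packed, onehot]))
    print(sess.run(tf.shape(cols[0])))       # [4 1]
```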
Segmentation
Operation | Description |
---|---|
tf.segment_sum(data, segment_ids, name=None) | Computes the sum over each segment given by segment_ids, a tensor whose size equals the first dimension of data; the ids are integers, and the largest id must not exceed that size. c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]); tf.segment_sum(c, tf.constant([0, 0, 1])) ==> [[0 0 0 0] [5 6 7 8]]. The example above has two ids, [0, 1]; rows of data with the same id are summed and placed at that id in the result, and segment_ids must be non-decreasing |
tf.segment_prod(data, segment_ids, name=None) | Computes the product over each segment given by segment_ids |
tf.segment_min(data, segment_ids, name=None) | Computes the minimum over each segment given by segment_ids |
tf.segment_max(data, segment_ids, name=None) | Computes the maximum over each segment given by segment_ids |
tf.segment_mean(data, segment_ids, name=None) | Computes the mean over each segment given by segment_ids |
tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None) | Like tf.segment_sum, except the ids in segment_ids need not be sorted |
tf.sparse_segment_sum(data, indices, segment_ids, name=None) | Computes a sparse segment sum over the input. c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]); # Select two rows, one segment: tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0])) ==> [[0 0 0 0]]. The rows of data at indices [0, 1] are selected and then summed according to the grouping in segment_ids |
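A sketch of the segment ops above, reusing the constant c from the table rows (graph-mode API assumed):

```python
import tensorflow as tf

c = tf.constant([[1, 2, 3, 4],
                 [-1, -2, -3, -4],
                 [5, 6, 7, 8]])

# Sorted ids: rows 0 and 1 go to segment 0, row 2 to segment 1.
seg_sum = tf.segment_sum(c, tf.constant([0, 0, 1]))       # ==> [[0 0 0 0], [5 6 7 8]]

# Unsorted ids are allowed when the number of segments is given explicitly.
uns_sum = tf.unsorted_segment_sum(c, tf.constant([1, 0, 1]), 2)  # ==> [[-1 -2 -3 -4], [6 8 10 12]]

# Select rows 0 and 1, then sum them into a single segment.
sp_sum = tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))  # ==> [[0 0 0 0]]

with tf.Session() as sess:
    print(sess.run([seg_sum, uns_sum, sp_sum]))
```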