Reposted from this excellent post: http://blog.csdn.net/mao_xiao_feng/article/details/53453926
max pooling is the max-value pooling operation used in CNNs, and its usage is very similar to convolution.
For background on the convolution side, see the companion post "[TensorFlow] How does tf.nn.conv2d implement convolution?"
tf.nn.max_pool(value, ksize, strides, padding, name=None)
It takes four parameters, much like convolution:
The first parameter, value: the input to be pooled. A pooling layer usually follows a convolutional layer, so the input is typically a feature map, still with shape [batch, height, width, channels].
The second parameter, ksize: the size of the pooling window, a 4-element vector, usually [1, height, width, 1]. Since we normally do not want to pool across the batch or channels dimensions, those two entries are set to 1.
The third parameter, strides: as with convolution, the step of the window along each dimension, usually [1, stride, stride, 1].
The fourth parameter, padding: as with convolution, either 'VALID' or 'SAME'.
It returns a Tensor of the same dtype, whose shape is still of the form [batch, height, width, channels].
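As a quick sanity check on how ksize, strides, and padding determine the output shape, here is a small helper implementing the standard shape rules (a sketch; `pool_out_size` is a made-up name, not a TensorFlow API): 'VALID' gives floor((n - k)/s) + 1 and 'SAME' gives ceil(n/s) along each spatial dimension.

```python
def pool_out_size(n, k, s, padding):
    """Output length along one spatial dimension of size n,
    with window k and stride s, under TF-style padding rules."""
    if padding == 'VALID':
        return (n - k) // s + 1   # the window must fit entirely inside the input
    elif padding == 'SAME':
        return -(-n // s)         # ceil(n / s); the input is padded as needed
    raise ValueError(padding)

# 4x4 input, 2x2 window: stride 1 -> 3, stride 2 -> 2
print(pool_out_size(4, 2, 1, 'VALID'))  # 3
print(pool_out_size(4, 2, 2, 'VALID'))  # 2
```

These two values match the output sizes of the worked example below.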
Sample code:
Suppose we have a two-channel image like this:
First channel:
1 2 3 4
5 6 7 8
8 7 6 5
4 3 2 1
Second channel:
4 3 2 1
8 7 6 5
1 2 3 4
5 6 7 8
(Note: tf.reshape fills values in row-major order, so reshaping the (2, 4, 4) constant below to [1, 4, 4, 2] interleaves the two matrices along the channel axis rather than keeping them as separate channels; the printed "image" below shows the layout the pooling actually sees.)
Now run max pooling on it in code:
import tensorflow as tf

a = tf.constant([
    [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [8.0, 7.0, 6.0, 5.0],
     [4.0, 3.0, 2.0, 1.0]],
    [[4.0, 3.0, 2.0, 1.0],
     [8.0, 7.0, 6.0, 5.0],
     [1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
])
a = tf.reshape(a, [1, 4, 4, 2])

pooling = tf.nn.max_pool(a, [1, 2, 2, 1], [1, 1, 1, 1], padding='VALID')
with tf.Session() as sess:
    print("image:")
    image = sess.run(a)
    print(image)
    print("result:")
    result = sess.run(pooling)
    print(result)
With stride 1 and a 2×2 window, the output is:
image:
[[[[ 1.  2.]
   [ 3.  4.]
   [ 5.  6.]
   [ 7.  8.]]

  [[ 8.  7.]
   [ 6.  5.]
   [ 4.  3.]
   [ 2.  1.]]

  [[ 4.  3.]
   [ 2.  1.]
   [ 8.  7.]
   [ 6.  5.]]

  [[ 1.  2.]
   [ 3.  4.]
   [ 5.  6.]
   [ 7.  8.]]]]
result:
[[[[ 8.  7.]
   [ 6.  6.]
   [ 7.  8.]]

  [[ 8.  7.]
   [ 8.  7.]
   [ 8.  7.]]

  [[ 4.  4.]
   [ 8.  7.]
   [ 8.  8.]]]]
This matches the pooled map you get by sliding a 2×2 max window over the printed image by hand, which confirms that the program's result is correct.
We can also change the stride:
pooling = tf.nn.max_pool(a, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
and the final result becomes:
result:
[[[[ 8.  7.]
   [ 7.  8.]]

  [[ 4.  4.]
   [ 8.  8.]]]]
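Both TF results above can be cross-checked with a tiny pure-Python max pooling routine (a sketch under NHWC layout and 'VALID' padding assumptions; `max_pool_valid` is a hypothetical helper, not part of TensorFlow):

```python
def max_pool_valid(img, k, s):
    """Naive per-channel max pooling with 'VALID' padding.
    img is an H x W x C nested list; k is window size, s is stride."""
    h, w, c = len(img), len(img[0]), len(img[0][0])
    oh, ow = (h - k) // s + 1, (w - k) // s + 1
    return [[[max(img[i * s + di][j * s + dj][ch]
                  for di in range(k) for dj in range(k))
              for ch in range(c)]
             for j in range(ow)]
            for i in range(oh)]

# The 4x4x2 layout that tf.reshape(a, [1, 4, 4, 2]) produces above
img = [[[1, 2], [3, 4], [5, 6], [7, 8]],
       [[8, 7], [6, 5], [4, 3], [2, 1]],
       [[4, 3], [2, 1], [8, 7], [6, 5]],
       [[1, 2], [3, 4], [5, 6], [7, 8]]]

print(max_pool_valid(img, 2, 1))  # matches the 3x3 stride-1 result
print(max_pool_valid(img, 2, 2))  # matches the 2x2 stride-2 result: [[[8, 7], [7, 8]], [[4, 4], [8, 8]]]
```

The printed lists agree element for element with the two TF outputs above (up to int vs. float formatting).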
Below are my own test code and results:
import tensorflow as tf

# def max_pool(value, ksize, strides, padding, data_format="NHWC", name=None)

# 4-D all-ones tensor with an 8x8 spatial extent
value = tf.ones([1, 8, 8, 1], dtype=tf.float32)
oplist = []

ksize = [1, 2, 2, 1]
strides = [1, 1, 1, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 1'])

ksize = [1, 4, 4, 1]
strides = [1, 1, 1, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 2'])

ksize = [1, 6, 6, 1]
strides = [1, 1, 1, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 3'])

ksize = [1, 2, 2, 1]
strides = [1, 2, 2, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 4'])

ksize = [1, 2, 2, 1]
strides = [1, 2, 2, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='SAME')
oplist.append([reth, 'case 5'])

with tf.Session() as a_sess:
    a_sess.run(tf.global_variables_initializer())
    for aop in oplist:
        print("----------{}---------".format(aop[1]))
        print("shape =", aop[0].shape)
        print("content=", a_sess.run(aop[0]))
        print('---------------------\n\n')
The output is:
----------case 1---------
shape = (1, 7, 7, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 2---------
shape = (1, 5, 5, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 3---------
shape = (1, 3, 3, 1)
content= [[[[ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 4---------
shape = (1, 4, 4, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 5---------
shape = (1, 4, 4, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------
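The five shapes above follow directly from the usual pooling shape rules ('VALID': (n - k)//s + 1; 'SAME': ceil(n/s)). A quick standalone check for the 8×8 input:

```python
# (ksize, stride, padding, expected spatial size) for cases 1-5 above
cases = [(2, 1, 'VALID', 7),
         (4, 1, 'VALID', 5),
         (6, 1, 'VALID', 3),
         (2, 2, 'VALID', 4),
         (2, 2, 'SAME', 4)]

n = 8  # spatial size of the all-ones input
for k, s, pad, expected in cases:
    out = (n - k) // s + 1 if pad == 'VALID' else -(-n // s)  # -(-n//s) is ceil(n/s)
    print(k, s, pad, out, out == expected)
```

Note that cases 4 and 5 agree only because 8 is evenly divisible by the stride; for an odd input size, 'SAME' would produce one more output row and column than 'VALID'.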