TensorFlow Notes (5): MNIST Handwritten Digit Recognition, Part 2
Copyright notice: this is an original post by the author; please credit the source when reposting:
http://www.cnblogs.com/fydeblog/p/7455233.html
Preface
- This post uses TensorFlow to build and train a CNN (convolutional neural network) on the MNIST dataset, then evaluates it on the MNIST test set and reports the accuracy.
- This post assumes some background; please read the earlier TensorFlow notes first, at least "MNIST Handwritten Recognition, Part 1" and "A First Look at CNNs". Material already covered there will at most be mentioned in passing, not explained again. One more thing: I have put the jupyter notebook in this Baidu cloud link so you can download it and experiment; the password is 5dx9.
Implementation
First, import the modules we need:
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
```
Then load the MNIST dataset:
```python
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
```
If it runs and prints that the MNIST archive files were extracted, the import succeeded (the original post showed a screenshot of this output here).
If anything about loading the MNIST dataset is unclear, please see here. Next, we define two functions, one that creates weights and one that creates biases:
```python
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
```
Notes:
- The weights should be initialized with a small amount of noise (stddev=0.1) to break symmetry and avoid zero gradients. Since we use ReLU neurons, it is also good practice to initialize the biases with a small positive constant, to avoid neurons whose output is stuck at 0 (dead neurons). To avoid repeating the initialization while building the model, we wrap it in these two functions; a minimal sketch of what they produce follows.
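To make this concrete, here is a minimal sketch (assuming TensorFlow 1.x, like the rest of this post) that just evaluates the two initializers and prints what they produce:

```python
import tensorflow as tf

# Small noise around 0; truncated_normal redraws values beyond 2 standard
# deviations, so every entry here satisfies |value| < 0.2.
w = tf.truncated_normal([3, 3], stddev=0.1)
# Small positive constant, so ReLU units start in their active region.
b = tf.constant(0.1, shape=[3])

with tf.Session() as s:
    print(s.run(w))  # a 3x3 matrix of small values near 0
    print(s.run(b))  # [0.1 0.1 0.1]
```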
Next, build the conv2d and max_pool_2x2 functions:
```python
def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]
    # Must have strides[0] = strides[3] = 1
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # stride [1, x_movement, y_movement, 1]
    # ksize [1, pool_op_length, pool_op_width, 1]
    # Must have ksize[0] = ksize[3] = 1
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
```
Notes:
- conv2d takes the image x to be convolved and the convolution kernel W. Inside the function, strides sets the kernel's step size, as annotated above: the kernel moves one pixel at a time along both the x and y axes, so both strides are 1. padding='SAME' means the input is padded so that, with stride 1, the convolved output has the same height and width as the original image.
- max_pool_2x2 takes the convolved image x. ksize is the pooling window: since this is 2x2 max pooling, the window is 2 in both height and width, the strides along x and y are both 2, and SAME padding is used again. A quick shape check of both operations is sketched below.
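To verify the output sizes these settings produce, here is a quick shape check (again assuming TensorFlow 1.x; the 5x5 filter is just a random tensor for illustration):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
W = tf.truncated_normal([5, 5, 1, 32], stddev=0.1)  # illustrative 5x5 kernel

conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

print(conv.shape)  # (?, 28, 28, 32) -- stride-1 SAME convolution keeps 28x28
print(pool.shape)  # (?, 14, 14, 32) -- 2x2 pooling with stride 2 halves it
```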
Next we define the network inputs with placeholders: the image batch xs, the corresponding labels ys, and the dropout keep probability keep_prob:
```python
xs = tf.placeholder(tf.float32, [None, 784])  # 28x28
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)
```
Since we are going to convolve, and tf.nn.conv2d and tf.nn.max_pool both require their input image to be a 4-D tensor, we reshape xs so it meets the requirement:
```python
x_image = tf.reshape(xs, [-1, 28, 28, 1])  # [n_samples, 28, 28, 1]
```
Notes:
- x_image is a 4-D tensor with dimensions [batch, height, width, channels]. The batch size follows the first dimension of xs, height and width are both 28, and channels is 1 because the images are grayscale (it would be 3 for RGB). The -1 lets reshape infer the batch size, as the sketch below shows.
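A minimal sketch of how the -1 works (numpy's reshape has the same semantics as tf.reshape here):

```python
import numpy as np

flat = np.zeros((5, 784), dtype=np.float32)  # a fake batch of 5 flattened images
img = flat.reshape(-1, 28, 28, 1)            # -1 is inferred as the batch size, 5
print(img.shape)                             # (5, 28, 28, 1)
```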
Now we start building the convolutional neural network, beginning with the first convolutional layer and the first pooling layer:
```python
## conv1 layer ##
W_conv1 = weight_variable([5, 5, 1, 32])  # patch 5x5, in size 1, out size 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28x28x32
## pool1 layer ##
h_pool1 = max_pool_2x2(h_conv1)  # output size 14x14x32
```
Notes:
- The kernels are 5x5; the input has 1 channel and the output has 32, so there are 32 distinct kernels. W_conv1 and x_image go through conv2d, the bias is added, and ReLU is applied on the outside. ReLU works much better here than alternatives such as sigmoid: iteration is fast because its derivative is exactly 1 for positive inputs, whereas the sigmoid's derivative can become very small, approaching 0. During backpropagation, if that derivative is too small, the parameter updates become very slow. The sketch below puts numbers on this.
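A small numeric sketch of that claim: for x > 0 the ReLU gradient is constant at 1, while the sigmoid gradient σ(x)·(1−σ(x)) shrinks toward 0 as x grows:

```python
import numpy as np

for x in [1.0, 5.0, 10.0]:
    sig = 1.0 / (1.0 + np.exp(-x))
    relu_grad = 1.0               # d/dx max(0, x) for x > 0
    sig_grad = sig * (1.0 - sig)  # d/dx sigmoid(x)
    print(x, relu_grad, sig_grad)
# At x = 10 the sigmoid gradient is about 4.5e-5, so updates flowing
# through such a unit nearly stall; the ReLU gradient stays at 1.
```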
To obtain higher-level features we need to build a deeper network, so we add a second convolutional layer and a second pooling layer; the principle is the same as above:
```python
## conv2 layer ##
W_conv2 = weight_variable([5, 5, 32, 64])  # patch 5x5, in size 32, out size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14x14x64
## pool2 layer ##
h_pool2 = max_pool_2x2(h_conv2)  # output size 7x7x64
```
Good, the features are extracted; now we use fully connected layers to make the prediction. Before building them we need to reshape h_pool2, because a fully connected layer's input cannot be a 4-D tensor.
```python
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])  # [n_samples, 7, 7, 64] ->> [n_samples, 7*7*64]
```
Notes:
- This turns the 4-D tensor into a 2-D one: the first dimension is the number of samples and the second is the input features. The fully connected layer therefore has 7*7*64 = 3136 input neurons.
The fully connected layers begin by mapping the 7*7*64 features to 1024 hidden neurons:
```python
## fc1 layer ##
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# dropout
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
```
Notes:
- This works like a layer in a conventional neural network, with one difference from what we have seen before: dropout is applied at the end to keep the network from overfitting. A sketch of what dropout actually does follows.
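A minimal sketch of tf.nn.dropout's behavior in TensorFlow 1.x: with keep_prob=0.5, roughly half of the activations are zeroed at random and the survivors are scaled up by 1/keep_prob so the expected sum is unchanged:

```python
import tensorflow as tf

h = tf.ones([1, 8])
h_drop = tf.nn.dropout(h, keep_prob=0.5)

with tf.Session() as s:
    # e.g. [[2. 0. 2. 2. 0. 0. 2. 2.]] -- the zero pattern is random,
    # and kept units are scaled by 1/0.5 = 2.
    print(s.run(h_drop))
```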
Then we add another fully connected layer, mapping the 1024 neurons down to 10, and finish with a softmax layer that yields the probability of each class:
```python
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
```
Notes:
- Same principle as above, with a softmax added at the end; if softmax is unclear, see the earlier notes or the wiki in the link. A worked example is sketched below.
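A worked example of what softmax does: it turns a vector of logits into positive probabilities that sum to 1, and the predicted class is the argmax (numpy is used here just for illustration):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs)             # [0.659 0.242 0.099]
print(probs.sum())       # 1.0
print(np.argmax(probs))  # 0 -- the predicted class
```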
Next we compute the cross-entropy loss and define the train_step:
```python
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))  # loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
```
Notes:
- Unlike the earlier posts, this uses the AdamOptimizer: the computation here is heavy, and the GradientDescentOptimizer descends too slowly, so AdamOptimizer is used instead. A worked example of the cross-entropy term on a single sample is sketched below.
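To see what the loss computes, here is a worked example on one sample: with a one-hot label, only the log-probability of the true class survives the inner sum (numpy for illustration):

```python
import numpy as np

y = np.array([0.0, 1.0, 0.0])  # one-hot label: the true class is 1
p = np.array([0.2, 0.7, 0.1])  # a softmax output for this sample
loss = -np.sum(y * np.log(p))  # = -log(0.7)
print(loss)                    # 0.3567 -- smaller when p[1] is closer to 1
```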
```python
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
```
The above is routine by now and needs no further comment. Next, we build a function that measures accuracy on the test set; it will be used shortly:
```python
def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result
```
Notes:
- The function's inputs are the test images v_xs and the corresponding labels v_ys. The global prediction line makes prediction refer to the prediction defined earlier (strictly speaking, Python can read a module-level name without the declaration, but it makes the dependency explicit). Dropout is not applied at test time, so keep_prob is fed as 1. The rest works as in the previous note, and the function finally returns the accuracy; the comparison step is sketched numerically below.
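A tiny numeric sketch of the accuracy computation itself (numpy for illustration): compare the argmax of the predicted probabilities with the argmax of the one-hot labels, cast to float, and average:

```python
import numpy as np

y_pre = np.array([[0.1, 0.9], [0.8, 0.2]])           # predictions for 2 samples
v_ys  = np.array([[0.0, 1.0], [0.0, 1.0]])           # one-hot labels
correct = np.argmax(y_pre, 1) == np.argmax(v_ys, 1)  # [True, False]
print(correct.astype(np.float32).mean())             # 0.5
```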
All the preparation is done; now we train and test, evaluating once every 50 training steps. This will take a while, so be patient:
```python
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        print(compute_accuracy(mnist.test.images, mnist.test.labels))
```
The run prints the test accuracy every 50 steps (the original post showed a screenshot of the output here).
Reflections: it finally finished running. This program took about forty minutes, and my machine was on the verge of freezing the whole time; it really makes me appreciate having a GPU. The final accuracy was 97.37%. I think it can still go higher, since it had not fully converged; you can try more iterations. I don't want to run this kind of program on my own machine again and will run it on a GPU or a server instead, so be mentally prepared if you run it yourself.
The complete code is below (it can be run as-is):
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# number 1 to 10 data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]
    # Must have strides[0] = strides[3] = 1
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # stride [1, x_movement, y_movement, 1]
    # ksize [1, pool_op_length, pool_op_width, 1]
    # Must have ksize[0] = ksize[3] = 1
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 784])  # 28x28
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)
x_image = tf.reshape(xs, [-1, 28, 28, 1])
# print(x_image.shape)  # [n_samples, 28, 28, 1]

## conv1 layer ##
W_conv1 = weight_variable([5, 5, 1, 32])  # patch 5x5, in size 1, out size 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28x28x32
h_pool1 = max_pool_2x2(h_conv1)  # output size 14x14x32

## conv2 layer ##
W_conv2 = weight_variable([5, 5, 32, 64])  # patch 5x5, in size 32, out size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14x14x64
h_pool2 = max_pool_2x2(h_conv2)  # output size 7x7x64

## flat h_pool2 ##
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])  # [n_samples, 7, 7, 64] ->> [n_samples, 7*7*64]

## fc1 layer ##
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

## fc2 layer ##
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))  # loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        print(compute_accuracy(mnist.test.images, mnist.test.labels))
```
Conclusion
That concludes MNIST digit recognition. I hope the friends who read this blog got something out of it! Finally, as always: my abilities are limited, so if there are mistakes, please don't hesitate to point them out and we can learn together. Thank you!
