Deep Learning: MNIST Handwritten Digit Recognition


MNIST handwritten digit recognition

The MNIST dataset can be downloaded from the official site: http://yann.lecun.com/exdb/mnist/. The downloaded data is split into two parts: a training set of 55,000 examples (mnist.train) and a test set of 10,000 examples (mnist.test) (the TensorFlow loader also holds out 5,000 examples as mnist.validation). Each MNIST data unit has two parts: an image of a handwritten digit and a corresponding label. We will call the images "xs" and the labels "ys". Both the training set and the test set contain xs and ys; for example, the training-set images are mnist.train.images, and the training-set labels are mnist.train.labels.

The images are grayscale, each 28 pixels by 28 pixels. We flatten this array into a vector of length 28 x 28 = 784. So in the MNIST training set, mnist.train.images is a tensor of shape [55000, 784].
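As a quick illustration (a sketch with a made-up image, not part of the original post), the flattening is just a row-major reshape:

import numpy as np

# A hypothetical 28x28 grayscale image with pixel values in [0, 1]
image = np.random.rand(28, 28).astype(np.float32)

# Flatten row by row into the 784-dimensional vector format
# that each row of mnist.train.images uses
vector = image.reshape(784)
print(vector.shape)  # (784,)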

Each image in MNIST has a corresponding label, a digit from 0 to 9 indicating which digit is drawn in the image. The labels are one-hot encoded.
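For example (an illustrative sketch), the label for the digit 3 becomes a 10-dimensional vector with a 1 in position 3:

import numpy as np

label = 3
one_hot = np.eye(10)[label]
print(one_hot)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]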

Recognizing handwritten digits with a single (fully connected) layer

1. Define data placeholders: features [None, 784], targets [None, 10]

with tf.variable_scope("data"):
    x = tf.placeholder(tf.float32,[None,784])
    y_true = tf.placeholder(tf.float32,[None,10])

2. Build the model: randomly initialize the weights and bias, w [784, 10], b [10], and compute y_predict = tf.matmul(x, w) + b

with tf.variable_scope("model"):
    w = tf.Variable(tf.random_normal([784,10],mean=0.0,stddev=1.0))
    b = tf.Variable(tf.constant(0.0,shape=[10]))
    y_predict = tf.matmul(x,w)+b

3. Compute the loss: the mean loss over the samples

with tf.variable_scope("compute_loss"):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true,logits=y_predict))
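For intuition, this op is a numerically stable shorthand for applying softmax to the logits and then taking the cross-entropy against the one-hot labels. A conceptually equivalent (but less stable) formulation would be:

# Conceptual equivalent of softmax_cross_entropy_with_logits
# (ignores the numerical-stability tricks the fused op applies)
softmax = tf.nn.softmax(y_predict)
manual_loss = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(softmax), axis=1))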

4. Optimize with gradient descent: learning rate 0.1, 2000 steps, and from that obtain the accuracy

with tf.variable_scope("optimizer"):
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
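Here minimize() fuses two phases: computing the gradients and applying the update w = w - 0.1 * gradient. If you want to see the phases separately (an equivalent sketch using TF1's split API):

optimizer = tf.train.GradientDescentOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss)    # list of (gradient, variable) pairs
train_op = optimizer.apply_gradients(grads_and_vars)  # performs var -= 0.1 * gradient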

5. Evaluate the model: argmax() and reduce_mean

with tf.variable_scope("acc"):
    eq = tf.equal(tf.argmax(y_true, 1), tf.argmax(y_predict, 1))
    accuracy = tf.reduce_mean(tf.cast(eq,tf.float32))
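For instance, with a hypothetical batch of two samples whose true classes are 3 and 7, if the logits peak at classes 3 and 1, then eq is [True, False] and the accuracy is 0.5.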

Loading the MNIST dataset

import tensorflow as tf
# Use the ready-made data-reading helper that TensorFlow provides
from tensorflow.examples.tutorials.mnist import input_data

def full_connected():
    # Load the MNIST dataset (downloaded to data/mnist/input_data on first run)
    mnist = input_data.read_data_sets("data/mnist/input_data", one_hot=True)
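The post stops at the loading step. A minimal sketch of the rest of full_connected(), assembled from the graph pieces defined above (assumptions: a batch size of 50 and accuracy printed on each training batch, which matches the 0.02 granularity of the results below), might look like:

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(2000):
            # Fetch the next mini-batch of images and one-hot labels
            x_batch, y_batch = mnist.train.next_batch(50)
            # Run one gradient-descent step, then measure batch accuracy
            sess.run(train_op, feed_dict={x: x_batch, y_true: y_batch})
            acc = sess.run(accuracy, feed_dict={x: x_batch, y_true: y_batch})
            print("accuracy:", acc)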


Running results

accuracy: 0.08
accuracy: 0.08
accuracy: 0.1
accuracy: 0.1
accuracy: 0.1
accuracy: 0.1
accuracy: 0.1
accuracy: 0.1
accuracy: 0.14
accuracy: 0.14
accuracy: 0.16
accuracy: 0.16
accuracy: 0.18
accuracy: 0.2
accuracy: 0.2
accuracy: 0.2
accuracy: 0.24
accuracy: 0.24
accuracy: 0.24
accuracy: 0.26
accuracy: 0.26
accuracy: 0.26
accuracy: 0.28
accuracy: 0.28
accuracy: 0.3
accuracy: 0.3
accuracy: 0.32
accuracy: 0.32
accuracy: 0.32
accuracy: 0.36
accuracy: 0.4
accuracy: 0.4
accuracy: 0.4
accuracy: 0.42
accuracy: 0.44
accuracy: 0.44
accuracy: 0.44
accuracy: 0.44
accuracy: 0.44
accuracy: 0.46
accuracy: 0.46
accuracy: 0.46
accuracy: 0.46
accuracy: 0.46
accuracy: 0.48
accuracy: 0.48
accuracy: 0.48
accuracy: 0.48
accuracy: 0.48
accuracy: 0.48
accuracy: 0.52
accuracy: 0.52
accuracy: 0.54
accuracy: 0.54
accuracy: 0.54
accuracy: 0.54
accuracy: 0.56
accuracy: 0.56
accuracy: 0.56
accuracy: 0.58
accuracy: 0.6
accuracy: 0.6
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.62
accuracy: 0.64
accuracy: 0.66
accuracy: 0.66
accuracy: 0.66
accuracy: 0.66
accuracy: 0.66
accuracy: 0.66
accuracy: 0.68
accuracy: 0.7
accuracy: 0.7
accuracy: 0.7
accuracy: 0.7
accuracy: 0.72
accuracy: 0.74
accuracy: 0.76
accuracy: 0.78
accuracy: 0.78
accuracy: 0.8
accuracy: 0.8
accuracy: 0.82
accuracy: 0.82
accuracy: 0.82
accuracy: 0.84
accuracy: 0.84
accuracy: 0.84
accuracy: 0.84
Process finished with exit code 0

If the use of the following expression as the loss function is unclear:

tf.nn.softmax_cross_entropy_with_logits

see this post: https://www.cnblogs.com/TimVerion/p/11237087.html

