Building DNN, CNN, and RNN handwritten-digit recognizers for MNIST with TensorFlow and Keras


The MNIST handwritten digit dataset

  MNIST is a classic handwritten digit recognition dataset compiled from NIST data. The handwritten digits range from 0 to 9, and there are 70,000 image samples in total (60,000 for training and 10,000 for testing). The dataset can be downloaded free of charge from the MNIST website as four .gz compressed files whose contents are binary.

File name                     Size     Purpose
train-images-idx3-ubyte.gz    9.45MB   training images
train-labels-idx1-ubyte.gz    0.03MB   training labels
t10k-images-idx3-ubyte.gz     1.57MB   test images
t10k-labels-idx1-ubyte.gz     4.4KB    test labels
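
These files use the simple idx binary layout: a magic number and the dimension sizes stored as big-endian 32-bit integers, followed by the raw pixel or label bytes. As a minimal sketch (assuming train-images-idx3-ubyte.gz has been downloaded into the working directory), the image file can be parsed with numpy alone:

import gzip
import numpy as np

# Parse the idx3-ubyte image file: a 16-byte header (magic, count, rows, cols)
# as big-endian int32, then the raw pixel bytes.
with gzip.open('train-images-idx3-ubyte.gz', 'rb') as f:
    magic, num, rows, cols = np.frombuffer(f.read(16), dtype='>i4')
    images = np.frombuffer(f.read(), dtype=np.uint8).reshape(num, rows, cols)

print(images.shape)    # (60000, 28, 28)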

Downloading the MNIST dataset

Method 1: download from the official site (four .gz files; the raw pixels are bytes in 0~255, which the TensorFlow input_data reader rescales to 0~1)

Method 2: download from Google (a single .npz file; pixel values range over 0~255)

Method 3: fetch the data from TensorFlow or Keras code

from tensorflow.examples.tutorials.mnist import input_data
# tensorflow (before 1.7)
# Read MNIST from ./mnist/; the data is downloaded automatically if absent
mnist = input_data.read_data_sets("./mnist/", one_hot=True)

# tensorflow (1.7 and later)
import tensorflow as tf 
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data(path='mnist.npz')

# Fetching via Keras
from keras.datasets import mnist
(train_x, train_y), (test_x, test_y) = mnist.load_data()

# Reading the .npz file directly with numpy
import numpy as np
f = np.load(path)     # path points at the downloaded mnist.npz
x_train, y_train = f['x_train'], f['y_train']
x_test, y_test = f['x_test'], f['y_test']
f.close()

  If you fetch MNIST via the code above, the download may well fail unless you can get around the network firewall, so I recommend downloading the files manually first and then importing them in code.
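
For the Keras loader there is a workaround worth knowing (an assumption based on how Keras caches downloads under ~/.keras/datasets; verify the cache directory on your machine): copy a manually downloaded mnist.npz into that directory, and mnist.load_data() will find it locally instead of going to the network.

import os
import shutil

# Hypothetical setup step: mnist.npz was already downloaded by hand
cache_dir = os.path.expanduser('~/.keras/datasets')
os.makedirs(cache_dir, exist_ok=True)
shutil.copy('mnist.npz', cache_dir)          # load_data() should now hit this local copy

from keras.datasets import mnist
(train_x, train_y), (test_x, test_y) = mnist.load_data()   # no download needed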

MNIST images

  The training set contains 60,000 samples and the test set contains 10,000. Each MNIST image is made up of 28 x 28 (= 784) pixels, and each pixel is represented by a single grayscale value.

  The Python code below downloads MNIST, takes a peek at how the dataset is split internally, and shows what the handwritten digits look like.

import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
# Read MNIST from ./mnist; the data is downloaded automatically if absent
mnist = input_data.read_data_sets('./mnist', one_hot=True)

print(mnist.train.images.shape)       # training images    (55000, 784)
print(mnist.train.labels.shape)       # training labels    (55000, 10)
print(mnist.test.images.shape)        # test images        (10000, 784)
print(mnist.test.labels.shape)        # test labels        (10000, 10)
print(mnist.validation.images.shape)  # validation images  (5000, 784)
print(mnist.validation.labels.shape)  # validation labels  (5000, 10)
print(mnist.train.labels[1])          # [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]

# Turn the flat array back into image form
image = mnist.train.images[1].reshape(28, 28)
fig = plt.figure("image preview")
plt.imshow(image, cmap='gray')
plt.axis('off')                       # hide the axes
plt.show()

   While drawing the digits, we also fetch their labels.

from tensorflow.examples.tutorials.mnist import input_data
import math
import matplotlib.pyplot as plt
import numpy as np

mnist = input_data.read_data_sets('./mnist', one_hot=True)

# Draw a single MNIST digit
def drawdigit(position, image, title):
    plt.subplot(*position)                      # unpack the position tuple into subplot args
    plt.imshow(image, cmap='gray_r')
    plt.axis('off')
    plt.title(title)

# Fetch one batch and draw its batch_size digits as subplots on one canvas
def batchDraw(batch_size):
    images, labels = mnist.train.next_batch(batch_size)
    row_num = math.ceil(batch_size ** 0.5)      # round up to the nearest integer
    column_num = row_num
    plt.figure(figsize=(row_num, column_num))   # figure size in inches (width, height)
    for i in range(row_num):
        for j in range(column_num):
            index = i * column_num + j
            if index < batch_size:
                position = (row_num, column_num, index+1)
                image = images[index].reshape(28, 28)
                # index of the largest value in the one-hot label
                title = 'actual:%d' % (np.argmax(labels[index]))
                drawdigit(position, image, title)


if __name__ == '__main__':
    batchDraw(16)
    plt.show()

Code notes:

mnist = input_data.read_data_sets("./mnist/", one_hot=True, reshape=False)

  A color image consists of three RGB channel arrays, while a grayscale image is just one such array; each pixel holds a value between 0 and 255, and every MNIST digit has 28*28 = 784 pixels. In the code above, with reshape=True (the default) the MNIST images have shape (?, 784); with reshape=False they have shape (?, 28, 28, 1).
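
A quick sketch of the difference (assuming the dataset already sits in ./mnist/):

from tensorflow.examples.tutorials.mnist import input_data

flat = input_data.read_data_sets('./mnist/', one_hot=True)                  # reshape=True is the default
img = input_data.read_data_sets('./mnist/', one_hot=True, reshape=False)

print(flat.train.images.shape)   # (55000, 784)        -- one flat vector per digit
print(img.train.images.shape)    # (55000, 28, 28, 1)  -- height x width x channel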

Keras

DNN network

from keras.models import Model
from keras.layers import Input, Dense, Dropout
from keras import regularizers
from keras.optimizers import Adam

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnist/", one_hot=True)
x_train = mnist.train.images         # training images (55000, 784)
y_train = mnist.train.labels         # training labels
x_test = mnist.test.images
y_test = mnist.test.labels

# DNN network structure
inputs = Input(shape=(784,))
h1 = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01))(inputs)     # L2-regularized weight matrix
h1 = Dropout(0.2)(h1)
h2 = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01))(h1)         # L2-regularized weight matrix
h2 = Dropout(0.2)(h2)
h3 = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01))(h2)         # L2-regularized weight matrix
h3 = Dropout(0.2)(h3)
outputs = Dense(10, activation='softmax', kernel_regularizer=regularizers.l2(0.01))(h3) # L2-regularized weight matrix
model = Model(inputs=inputs, outputs=outputs)

# Compile the model
opt = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-08)        # epsilon is the fuzz factor
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])     # cross-entropy loss

# Start training
model.fit(x=x_train, y=y_train, validation_split=0.1, batch_size=128, epochs=4)
model.save('k_DNN.h5')
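Note that x_test and y_test are loaded but never used above (the CNN script below shares this quirk). As a quick follow-up sketch, the trained network can be scored on them with model.evaluate:

# Score the trained DNN on the held-out test set (exact numbers will vary between runs)
test_loss, test_acc = model.evaluate(x=x_test, y=y_test, batch_size=128)
print('test loss:', test_loss, ', test accuracy:', test_acc)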

CNN network

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Reshape, Dense
from keras import regularizers
from keras.optimizers import Adam
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("./mnist/", one_hot=True, reshape=False)

x_train = mnist.train.images         # training images (55000, 28, 28, 1)
y_train = mnist.train.labels         # training labels
x_test = mnist.test.images
y_test = mnist.test.labels

# Network structure
inputs = Input(shape=(28, 28, 1))
h1 = Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(inputs)
h1 = MaxPooling2D(pool_size=2, strides=2, padding='valid')(h1)   # 28x28 -> 14x14

h1 = Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(h1)
h1 = MaxPooling2D()(h1)                                          # 14x14 -> 7x7

h1 = Conv2D(filters=16, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(h1)
h1 = Reshape((16 * 7 * 7,))(h1)     # flatten: h1.shape is (?, 16*7*7)

outputs = Dense(10, activation="softmax", kernel_regularizer=regularizers.l2(0.01))(h1)  # L2-regularized weight matrix
model = Model(inputs=inputs, outputs=outputs)
model.summary()

# Compile the model
opt = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])

# Start training
model.fit(x=x_train, y=y_train, validation_split=0.1, epochs=5)

model.save('k_CNN.h5')

RNN network

from keras.models import Model
from keras.layers import Input, LSTM, Dense
from keras import regularizers
from keras.optimizers import Adam

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./mnist/", one_hot=True)
x_train = mnist.train.images            # (55000, 784)
x_train = x_train.reshape(-1, 28, 28)   # one 28-pixel image row per timestep
y_train = mnist.train.labels

# RNN network structure
inputs = Input(shape=(28, 28))
h1 = LSTM(64, activation='relu', return_sequences=True, dropout=0.2)(inputs)
h2 = LSTM(64, activation='relu', dropout=0.2)(h1)
outputs = Dense(10, activation='softmax', kernel_regularizer=regularizers.l2(0.01))(h2)  # L2-regularized weight matrix
model = Model(inputs=inputs, outputs=outputs)

# Compile the model
opt = Adam(lr=0.003, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(x=x_train, y=y_train, validation_split=0.1, batch_size=128, epochs=5)

model.save('k_RNN.h5')

 

TensorFlow

DNN network

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("./mnist", one_hot=True)
# train image shape: (55000, 784)
# train label shape: (55000, 10)
# val image shape: (5000, 784)
# test image shape: (10000, 784)
epochs = 2
output_size = 10
input_size = 784
hidden1_size = 512
hidden2_size = 256
batch_size = 1000
learning_rate_base = 0.005
unit_list = [784, 512, 256, 10]
batch_num = mnist.train.labels.shape[0] // batch_size


# Fully connected layer
def dense(x, w, b, keepprob):
    linear = tf.matmul(x, w) + b
    activation = tf.nn.relu(linear)
    y = tf.nn.dropout(activation, keepprob)
    return y


def DNNModel(image, w, b, keepprob):
    dense1 = dense(image, w[0], b[0], keepprob)
    dense2 = dense(dense1, w[1], b[1], keepprob)
    output = tf.matmul(dense2, w[2]) + b[2]
    return output


# Generate the network weights
def gen_weights(unit_list):
    w = []
    b = []
    # one weight/bias pair per layer
    for i in range(len(unit_list)-1):
        sub_w = tf.Variable(tf.random_normal(shape=[unit_list[i], unit_list[i+1]]))
        sub_b = tf.Variable(tf.random_normal(shape=[unit_list[i+1]]))
        w.append(sub_w)
        b.append(sub_b)
    return w, b

x = tf.placeholder(tf.float32, [None, 784])
y_true = tf.placeholder(tf.float32, [None, 10])
keepprob = tf.placeholder(tf.float32)
global_step = tf.Variable(0)

w, b = gen_weights(unit_list)
y_pre = DNNModel(x, w, b, keepprob)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_pre, labels=y_true))
tf.summary.scalar("loss", loss)                 # 收集標量
opt = tf.train.AdamOptimizer(0.001).minimize(loss, global_step=global_step)
predict = tf.equal(tf.argmax(y_pre, axis=1), tf.argmax(y_true, axis=1))       # 返回每行或者每列最大值的索引,判斷是否相等
acc = tf.reduce_mean(tf.cast(predict, tf.float32))
tf.summary.scalar("acc", acc)                   # 收集標量
merged = tf.summary.merge_all()                 # 和並變量
saver = tf.train.Saver()                        # 保存和加載模型
init = tf.global_variables_initializer()        # 初始化全局變量
with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter("./logs/tensorboard", tf.get_default_graph())      # TensorBoard event file
    for i in range(batch_num * epochs):
        x_train, y_train = mnist.train.next_batch(batch_size)
        summary, _ = sess.run([merged, opt], feed_dict={x:x_train, y_true:y_train, keepprob: 0.75})
        writer.add_summary(summary, i)              # write each iteration's summaries to the event file
        # evaluate recognition accuracy on the validation set
        if i % 50 == 0:
            feeddict = {x: mnist.validation.images, y_true: mnist.validation.labels, keepprob: 1.}      # validation set
            valloss, accuracy = sess.run([loss, acc], feed_dict=feeddict)
            print(i, 'th batch val loss:', valloss, ', accuracy:', accuracy)

    saver.save(sess, './checkpoints/tfdnn.ckpt')        # save the model
    print('test set accuracy:', sess.run(acc, feed_dict={x: mnist.test.images, y_true: mnist.test.labels, keepprob: 1.}))

writer.close()
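
Since the loop writes the loss and accuracy summaries to ./logs/tensorboard, the training curves can be inspected afterwards (assuming TensorBoard is installed) with:

tensorboard --logdir=./logs/tensorboard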

CNN network

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

epochs = 10
batch_size = 100
mnist = input_data.read_data_sets("mnist/", one_hot=True, reshape=False)
batch_nums = mnist.train.labels.shape[0] // batch_size

# Convolution helper
def conv2d(x, w, b):
    # x has shape (?, 28, 28, 1)
    # filter = [filter_height, filter_width, in_channels, out_channels]
    # strides = [batch, height, width, channels]; the first and last entries must be 1
    return tf.nn.conv2d(x, filter=w, strides=[1, 1, 1, 1], padding='SAME') + b
def pool(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Define the network structure
def cnn_net(x, keepprob):
    # x (loaded with reshape=False) has shape (?, 28, 28, 1)
    w1 = tf.Variable(tf.random_normal([5, 5, 1, 64]))
    b1 = tf.Variable(tf.random_normal([64]))
    w2 = tf.Variable(tf.random_normal([5, 5, 64, 32]))
    b2 = tf.Variable(tf.random_normal([32]))
    w3 = tf.Variable(tf.random_normal([7 * 7 * 32, 10]))
    b3 = tf.Variable(tf.random_normal([10]))
    hidden1 = pool(conv2d(x, w1, b1))
    hidden1 = tf.nn.dropout(hidden1, keepprob)
    hidden2 = pool(conv2d(hidden1, w2, b2))
    hidden2 = tf.reshape(hidden2, [-1, 7 * 7 * 32])
    hidden2 = tf.nn.dropout(hidden2, keepprob)
    output = tf.matmul(hidden2, w3) + b3
    return output


# Placeholders
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y_true = tf.placeholder(tf.float32, [None, 10])
keepprob = tf.placeholder(tf.float32)

# Decay the learning rate step by step as training proceeds; this op returns the decayed rate.
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(0.01, global_step, 100, 0.96, staircase=True)

# Loss used for training
logits = cnn_net(x, keepprob)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y_true))
opt = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)

# Define the evaluation ops
predict = tf.equal(tf.argmax(logits, 1), tf.argmax(y_true, 1))      # per-example correctness
accuracy = tf.reduce_mean(tf.cast(predict, tf.float32))             # accuracy

init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
    sess.run(init)
    for k in range(epochs):
        for i in range(batch_nums):
            train_x, train_y = mnist.train.next_batch(batch_size)
            sess.run(opt, {x: train_x, y_true: train_y, keepprob: 0.75})
            # evaluate recognition accuracy on the validation set
            if i % 50 == 0:
                acc = sess.run(accuracy, {x: mnist.validation.images[:1000], y_true: mnist.validation.labels[:1000], keepprob: 1.})
                print(k, 'epochs, ', i, 'iters, ', ', acc :', acc)

RNN network

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

epochs = 10
batch_size = 1000
mnist = input_data.read_data_sets("mnist/", one_hot=True)
batch_nums = mnist.train.labels.shape[0] // batch_size

# Define the network structure
def RNN_Model(x, batch_size, keepprob):
    # Build a separate cell per layer: reusing one cell object for both layers
    # would tie their weights (and raises an error in later TF 1.x releases).
    def make_cell():
        rnn_cell = tf.nn.rnn_cell.LSTMCell(28)
        return tf.nn.rnn_cell.DropoutWrapper(rnn_cell, output_keep_prob=keepprob)
    # Stack two cells into one multi-layer RNN cell
    multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell([make_cell() for _ in range(2)])
    initial_state = multi_rnn_cell.zero_state(batch_size, tf.float32)
    # Run the RNN specified by the cell, fully dynamically unrolled over the inputs
    outputs, states = tf.nn.dynamic_rnn(cell=multi_rnn_cell, inputs=x, dtype=tf.float32, initial_state=initial_state)
    # outputs has shape [batch_size, max_time, 28]

    w = tf.Variable(tf.random_normal([28, 10]))
    b = tf.Variable(tf.random_normal([10]))
    output = tf.matmul(outputs[:, -1, :], w) + b    # classify from the last timestep's output
    return output, states


# Placeholders
x = tf.placeholder(tf.float32, [None, 28, 28])
y_true = tf.placeholder(tf.float32, [None, 10])
keepprob = tf.placeholder(tf.float32)
global_step = tf.Variable(0)
# Decay the learning rate step by step as training proceeds; this op returns the decayed rate.
learning_rate = tf.train.exponential_decay(0.01, global_step, 10, 0.96, staircase=True)

# Loss used for training
y_pred, states = RNN_Model(x, batch_size, keepprob)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_pred, labels=y_true))
opt = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)        # minimize the loss
predict = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y_true, 1))        # per-example correctness
acc = tf.reduce_mean(tf.cast(predict, tf.float32))                # accuracy
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
    sess.run(init)
    for k in range(epochs):
        for i in range(batch_nums):
            train_x, train_y = mnist.train.next_batch(batch_size)
            sess.run(opt, {x: train_x.reshape((-1, 28, 28)), y_true: train_y, keepprob: 0.8})
            # evaluate recognition accuracy on the validation set
            if i % 50 == 0:
                # initial_state was built for batch_size examples, so the
                # validation batch must be the same size
                val_x, val_y = mnist.validation.next_batch(batch_size)
                val_loss, accuracy = sess.run([loss, acc], {x: val_x.reshape((-1, 28, 28)), y_true: val_y, keepprob: 1.})
                print('val_loss is :', val_loss, ', accuracy is :', accuracy)

 

Loading a saved model

  Training a deep network takes a long time, and we cannot afford to retrain from scratch every time we need a prediction. The remedy is to save the model, that is, to persist the trained parameters.

import numpy as np
from keras.models import load_model
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("./mnist/", one_hot=True, reshape=False)  # (?, 28,28,1)
x_test = mnist.test.images            # (10000, 28,28,1)
y_test = mnist.test.labels            # (10000, 10)
print(y_test[1])                    # [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]

model = load_model('k_CNN.h5')        # load the saved model

# Evaluate the model
evl = model.evaluate(x=x_test, y=y_test)
evl_name = model.metrics_names
for i in range(len(evl)):
    print(evl_name[i], ':\t', evl[i])
    # loss :     0.19366768299341203
    # acc :     0.9691

test = x_test[1].reshape(1, 28, 28, 1)
y_predict = model.predict(test)        # (1, 10)
print(y_predict)
# [[1.6e-06 6.0e-09 9.9e-01 5.8e-10 4.0e-07 2.5e-08 1.72e-06 1.2e-09 2.1e-07 8.5e-08]]
y_true = 'actual:%d' % (np.argmax(y_test[1]))          # actual:2
pre = 'predict:%d' % (np.argmax(y_predict))            # predict:2
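The TensorFlow DNN earlier saved its parameters to ./checkpoints/tfdnn.ckpt rather than to an .h5 file. Restoring it works through tf.train.Saver, as in the sketch below; it assumes the graph-building code that defined x, y_true, keepprob and acc in the TensorFlow DNN script has been run again in the current process, since a checkpoint stores only variable values, not the graph itself.

import tensorflow as tf

# Sketch: restore the TensorFlow DNN checkpoint saved earlier.
# Requires the graph (x, y_true, keepprob, acc, ...) to have been rebuilt first.
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, './checkpoints/tfdnn.ckpt')     # load the trained weights
    test_acc = sess.run(acc, feed_dict={x: mnist.test.images,
                                        y_true: mnist.test.labels,
                                        keepprob: 1.})
    print('test set accuracy:', test_acc)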

 


