[Keras Case Study] Multilayer Perceptron for Handwritten Digit Classification (mnist_mlp)


from __future__ import print_function
# Import numpy, a widely used scientific computing library with optimized matrix operations
import numpy as np
np.random.seed(1337)  # fix the random seed so that results are reproducible

# Import the MNIST dataset, a standard handwritten-digit database
from keras.datasets import mnist
# Import the Sequential model
from keras.models import Sequential
# Import the fully connected layer Dense, the Activation layer and the Dropout layer
from keras.layers.core import Dense, Dropout, Activation
# Import the RMSprop optimizer
from keras.optimizers import RMSprop
# Import the numpy utilities, mainly for to_categorical, which converts class vectors
from keras.utils import np_utils
# Set the batch size
batch_size = 128
# Set the number of classes
nb_classes = 10
# Set the number of training epochs
nb_epoch = 20
# The MNIST dataset in Keras is already split into 60,000 training samples and 10,000 test samples; load it as follows
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# X_train is originally a 60000x28x28 three-dimensional array; flatten it into a 60000x784 two-dimensional array
X_train = X_train.reshape(60000, 784)
# X_test is originally a 10000x28x28 three-dimensional array; flatten it into a 10000x784 two-dimensional array
X_test = X_test.reshape(10000, 784)
# Convert X_train and X_test to float32
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# Normalize the pixel values from [0, 255] to [0, 1]
X_train /= 255
X_test /= 255
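
As a quick sanity check (a sketch, not in the original post), the scaled pixel values should now lie in [0, 1]:

# Sketch: verify the value range after scaling
print(X_train.min(), X_train.max())  # expected output: 0.0 1.0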
# Print the sizes of the training and test sets
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
60000 train samples
10000 test samples
'''
Map the class vectors (integer vectors with values from 0 to nb_classes-1) to binary class matrices,
i.e. re-encode the labels as one-hot vectors
'''
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
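
For intuition, here is a small sketch (not part of the original script) of what the encoding produces for a tiny label vector:

# Sketch: encode the labels 0, 1, 2 into nb_classes=10 one-hot rows
demo = np_utils.to_categorical(np.array([0, 1, 2]), nb_classes)
# demo is a 3x10 matrix with a single 1 per row:
# [[ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
#  [ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]
#  [ 0.  0.  1.  0.  0.  0.  0.  0.  0.  0.]]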
# Build the Sequential model
model = Sequential()
'''
The model needs to know the shape of its input data.
Therefore, the first layer of a Sequential model must receive an argument describing the input shape;
the following layers can then infer their intermediate shapes automatically,
so this argument does not have to be specified for every layer.
'''

# The input layer has 784 neurons
# The first hidden layer has 512 neurons with ReLU activation and a dropout rate of 0.2
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))

# The second hidden layer has 512 neurons with ReLU activation and a dropout rate of 0.2
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))

# The output layer has 10 neurons with softmax activation, producing the classification result
model.add(Dense(10))
model.add(Activation('softmax'))
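
As a side note, the same architecture can be written more compactly by passing the activation name directly to Dense; a sketch of an equivalent definition (using a separate variable model_alt so as not to clobber model):

# Sketch: equivalent compact definition of the same architecture
model_alt = Sequential()
model_alt.add(Dense(512, activation='relu', input_shape=(784,)))
model_alt.add(Dropout(0.2))
model_alt.add(Dense(512, activation='relu'))
model_alt.add(Dropout(0.2))
model_alt.add(Dense(10, activation='softmax'))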

# Print a summary of the model
# Total number of parameters: 784*512+512 + 512*512+512 + 512*10+10 = 669,706
model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
dense_4 (Dense)                  (None, 512)           401920      dense_input_2[0][0]              
____________________________________________________________________________________________________
activation_4 (Activation)        (None, 512)           0           dense_4[0][0]                    
____________________________________________________________________________________________________
dropout_3 (Dropout)              (None, 512)           0           activation_4[0][0]               
____________________________________________________________________________________________________
dense_5 (Dense)                  (None, 512)           262656      dropout_3[0][0]                  
____________________________________________________________________________________________________
activation_5 (Activation)        (None, 512)           0           dense_5[0][0]                    
____________________________________________________________________________________________________
dropout_4 (Dropout)              (None, 512)           0           activation_5[0][0]               
____________________________________________________________________________________________________
dense_6 (Dense)                  (None, 10)            5130        dropout_4[0][0]                  
____________________________________________________________________________________________________
activation_6 (Activation)        (None, 10)            0           dense_6[0][0]                    
====================================================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
____________________________________________________________________________________________________
'''
Configure the learning process of the model.
compile takes three arguments:
1. optimizer: either the name of a predefined optimizer, such as rmsprop or adagrad,
   or an instance of the Optimizer class, such as RMSprop() here
2. loss: the objective function the model tries to minimize; either a predefined
   loss function such as categorical_crossentropy or mse, or a custom loss function
3. metrics: the list of metrics; for classification problems this is usually set to metrics=['accuracy']
'''
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
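
Since the optimizer can also be given by its predefined name (in which case its default hyperparameters are used), the following commented-out sketch would be equivalent:

# Sketch: equivalent compile call using the predefined optimizer name
# model.compile(loss='categorical_crossentropy',
#               optimizer='rmsprop',
#               metrics=['accuracy'])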

'''
Train the model.
batch_size: the number of samples in each gradient-descent batch
nb_epoch: the number of training epochs (nb stands for "number of")
verbose: logging mode; 0 = no logging to stdout, 1 = progress bar, 2 = one line per epoch
validation_data: the validation set
fit returns a History object whose History.history attribute records how the loss and the
other metrics evolve over the epochs; if a validation set is given, the same metrics
on the validation set are recorded as well
'''
history = model.fit(X_train, Y_train,
                    batch_size=batch_size,
                    nb_epoch=nb_epoch,
                    verbose=1,
                    validation_data=(X_test, Y_test))
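
Once training has finished, the recorded curves can be read back from the History object; a minimal sketch:

# Sketch: inspect the per-epoch metrics recorded by fit
print(history.history.keys())           # e.g. ['acc', 'loss', 'val_acc', 'val_loss']
print(history.history['val_acc'][-1])   # validation accuracy after the final epoch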

# Compute the model's loss on the given input data, batch by batch
score = model.evaluate(X_test, Y_test, verbose=0)
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 3s - loss: 0.2468 - acc: 0.9245 - val_loss: 0.1062 - val_acc: 0.9662
Epoch 2/20
60000/60000 [==============================] - 3s - loss: 0.1027 - acc: 0.9687 - val_loss: 0.0885 - val_acc: 0.9744
Epoch 3/20
60000/60000 [==============================] - 3s - loss: 0.0755 - acc: 0.9772 - val_loss: 0.0798 - val_acc: 0.9763
Epoch 4/20
60000/60000 [==============================] - 3s - loss: 0.0617 - acc: 0.9810 - val_loss: 0.1023 - val_acc: 0.9692
Epoch 5/20
60000/60000 [==============================] - 3s - loss: 0.0512 - acc: 0.9847 - val_loss: 0.0832 - val_acc: 0.9791
Epoch 6/20
60000/60000 [==============================] - 3s - loss: 0.0447 - acc: 0.9866 - val_loss: 0.0778 - val_acc: 0.9796
Epoch 7/20
60000/60000 [==============================] - 3s - loss: 0.0392 - acc: 0.9883 - val_loss: 0.0822 - val_acc: 0.9798
Epoch 8/20
60000/60000 [==============================] - 3s - loss: 0.0336 - acc: 0.9899 - val_loss: 0.0784 - val_acc: 0.9820
Epoch 9/20
60000/60000 [==============================] - 3s - loss: 0.0336 - acc: 0.9904 - val_loss: 0.0937 - val_acc: 0.9809
Epoch 10/20
60000/60000 [==============================] - 3s - loss: 0.0293 - acc: 0.9917 - val_loss: 0.0802 - val_acc: 0.9829
Epoch 11/20
60000/60000 [==============================] - 3s - loss: 0.0260 - acc: 0.9924 - val_loss: 0.0966 - val_acc: 0.9821
Epoch 12/20
60000/60000 [==============================] - 3s - loss: 0.0240 - acc: 0.9932 - val_loss: 0.0984 - val_acc: 0.9836
Epoch 13/20
60000/60000 [==============================] - 3s - loss: 0.0230 - acc: 0.9939 - val_loss: 0.1032 - val_acc: 0.9822
Epoch 14/20
60000/60000 [==============================] - 3s - loss: 0.0236 - acc: 0.9933 - val_loss: 0.1002 - val_acc: 0.9843
Epoch 15/20
60000/60000 [==============================] - 3s - loss: 0.0184 - acc: 0.9945 - val_loss: 0.1111 - val_acc: 0.9811
Epoch 16/20
60000/60000 [==============================] - 3s - loss: 0.0201 - acc: 0.9944 - val_loss: 0.0982 - val_acc: 0.9837
Epoch 17/20
60000/60000 [==============================] - 3s - loss: 0.0186 - acc: 0.9949 - val_loss: 0.1012 - val_acc: 0.9841
Epoch 18/20
60000/60000 [==============================] - 3s - loss: 0.0179 - acc: 0.9951 - val_loss: 0.1132 - val_acc: 0.9824
Epoch 19/20
60000/60000 [==============================] - 3s - loss: 0.0189 - acc: 0.9950 - val_loss: 0.1081 - val_acc: 0.9842
Epoch 20/20
60000/60000 [==============================] - 3s - loss: 0.0168 - acc: 0.9956 - val_loss: 0.1109 - val_acc: 0.9837
# Print the performance of the trained model on the test set
print('Test score:', score[0])
print('Test accuracy:', score[1])
Test score: 0.110892460335
Test accuracy: 0.9837
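
Finally, the trained model can be used for inference on individual images; a minimal sketch (variable names here are illustrative):

# Sketch: predict the classes of the first 5 test images
probs = model.predict(X_test[:5])      # softmax probabilities, shape (5, 10)
predicted = np.argmax(probs, axis=1)   # most likely class for each sample
print('predicted:', predicted)
print('actual:   ', y_test[:5])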

