Recognizing the MNIST handwritten digit dataset is widely regarded as the "hello world" of deep learning, so it is something anyone getting started should master. For beginners, the Keras framework is a good choice because it lets you get a working model quickly.
The complete code is as follows:
# 1. Import libraries and modules
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D
from keras.layers import Dense, Flatten
from keras.utils import to_categorical

# 2. Load the data
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# 3. Preprocess the data
img_x, img_y = 28, 28
x_train = x_train.reshape(x_train.shape[0], img_x, img_y, 1)
x_test = x_test.reshape(x_test.shape[0], img_x, img_y, 1)
# Normalize pixel values to the range [0, 1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# One-hot encode the labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# 4. Define the model architecture
model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), activation='relu', input_shape=(img_x, img_y, 1)))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(1000, activation='relu'))
model.add(Dense(10, activation='softmax'))

# 5. Compile: specify the loss function and optimizer
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# 6. Train
model.fit(x_train, y_train, batch_size=128, epochs=10)

# 7. Evaluate the model
score = model.evaluate(x_test, y_test)
print('acc', score[1])
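One practical note: the script above uses the standalone keras package. If your environment has TensorFlow 2.x installed, Keras is bundled as tf.keras, and the standalone imports may not be available. A minimal sketch of the equivalent imports under that assumption (the rest of the script stays the same):

# Equivalent imports for TensorFlow 2.x, where Keras ships as tf.keras
# (assumes a standard TensorFlow 2.x installation; everything else is unchanged)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist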
Running the script prints the training progress for each epoch followed by the final evaluation. The test accuracy reaches about 99%, which shows just how strong neural networks are at image recognition.
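Once training has finished, the model can also be used directly for prediction. Below is a minimal sketch (assuming model and x_test from the script above are still in memory) that classifies the first image in the test set:

# Minimal prediction sketch: classify the first test image
# (assumes `model` and `x_test` from the script above are still defined)
import numpy as np

probs = model.predict(x_test[:1])              # shape (1, 10): one probability per digit class
predicted_digit = np.argmax(probs, axis=1)[0]  # index of the highest-probability class
print('predicted digit:', predicted_digit)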