1. Extracting loss and acc
Sometimes we want to inspect the loss and accuracy (acc) recorded during training; this helps us choose a suitable number of epochs and check whether the model has converged.
Model.fit() returns a History object. Its History.history attribute is a dictionary that holds, for every epoch, the loss and acc on the training set and on the validation set.
# Train the model
history = model.fit(train_images, train_labels, batch_size=50, epochs=5,
                    validation_split=0.1, verbose=1)

history.history.keys()                  # inspect the dictionary keys
loss = history.history['loss']          # training-set loss
acc = history.history['acc']            # training-set accuracy
val_loss = history.history['val_loss']  # validation-set loss
val_acc = history.history['val_acc']    # validation-set accuracy
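Note that History.history stores one value per epoch. If you really do need the loss of every individual batch, a custom Keras callback can collect it during fit(). Below is a minimal sketch; the class name BatchLossLogger and the way it is attached are illustrative assumptions, not part of the original post.

import tensorflow as tf

class BatchLossLogger(tf.keras.callbacks.Callback):
    """Hypothetical helper: records the training loss after every batch."""
    def __init__(self):
        super().__init__()
        self.batch_losses = []

    def on_train_batch_end(self, batch, logs=None):
        # 'logs' holds the metrics of the batch that just finished
        self.batch_losses.append(logs.get('loss'))

# Usage sketch: pass the callback to fit(), then read batch_losses afterwards
# logger = BatchLossLogger()
# model.fit(train_images, train_labels, batch_size=50, epochs=5,
#           validation_split=0.1, callbacks=[logger])
# print(logger.batch_losses[:10])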
2. Visualizing with matplotlib
The visualization here uses the matplotlib package; TensorBoard-based visualization is not covered for now. The full usage is shown below.
import tensorflow as tf
import matplotlib.pyplot as plt

# Load the dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()

# Normalize the images to speed up training
# (labels are integer class indices and must not be scaled)
train_images = train_images / 255.0

# Build the model with the functional API
net_input = tf.keras.Input(shape=(28, 28))
fl = tf.keras.layers.Flatten()(net_input)              # flatten the input
l1 = tf.keras.layers.Dense(32, activation="relu")(fl)
l2 = tf.keras.layers.Dropout(0.5)(l1)
net_output = tf.keras.layers.Dense(10, activation="softmax")(l2)

# Create the Model object
model = tf.keras.Model(inputs=net_input, outputs=net_output)

# Inspect the model structure
model.summary()

# Compile the model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=['acc'])

# Train the model
history = model.fit(train_images, train_labels, batch_size=50, epochs=5,
                    validation_split=0.1, verbose=1)

history.history.keys()                  # inspect the dictionary keys
loss = history.history['loss']          # training-set loss
acc = history.history['acc']            # training-set accuracy
val_loss = history.history['val_loss']  # validation-set loss
val_acc = history.history['val_acc']    # validation-set accuracy

# Visualize on a 2x2 grid of subplots
plt.figure()
plt.subplot(221)
plt.plot(loss)
plt.title('loss')
plt.subplot(222)
plt.plot(acc)
plt.title('acc')
plt.subplot(223)
plt.plot(val_loss)
plt.title('val_loss')
plt.subplot(224)
plt.plot(val_acc)
plt.title('val_acc')
plt.show()
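Two optional additions, sketched here as assumptions rather than part of the original script: the test split is loaded above but never used, so model.evaluate can report the final test-set loss and accuracy, and overlaying the training and validation loss curves on one axis makes divergence (overfitting) easier to spot.

# Evaluate on the held-out test set; the test images need the same 1/255 scaling
test_loss, test_acc = model.evaluate(test_images / 255.0, test_labels, verbose=0)
print(f"test loss: {test_loss:.4f}, test acc: {test_acc:.4f}")

# Overlay training and validation loss on the same axes for easier comparison
plt.figure()
plt.plot(loss, label='loss')
plt.plot(val_loss, label='val_loss')
plt.legend()
plt.show()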
Output: