Computing precision, recall and F1 on fashion_mnist
1. Definitions
First, a few concepts need to be made clear:
Suppose the predictions of one test run over a five-class problem (classes A–E) are tallied into a confusion matrix, where the diagonal cell for class A is labelled TP1 and the off-diagonal cells are labelled FP1, FP2, and so on.
The metrics for class A are then computed as follows:
- Precision for class A: TP1 / (TP1 + FP5 + FP9 + FP13 + FP17), i.e. the proportion of samples predicted as A that really are A
- Recall for class A: TP1 / (TP1 + FP1 + FP2 + FP3 + FP4), i.e. the proportion of all samples that really are A which the model manages to predict as A
- F1 score for class A: (precision × recall × 2) / (precision + recall)
In practice, once a model has been trained, we run it over every sample in the test set, tally the predictions into such a confusion matrix, and the per-class precision, recall and F1 then follow directly from the formulas above, as the sketch below shows.
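To make the calculation concrete, here is a minimal NumPy sketch for a single class; the 5×5 confusion matrix and its counts are made up purely for illustration (rows = true class, columns = predicted class):

import numpy as np

# Hypothetical 5-class confusion matrix: rows = true class (A–E), columns = predicted class.
cm = np.array([
    [50,  2,  1,  0,  3],   # true A
    [ 4, 45,  3,  1,  2],   # true B
    [ 0,  5, 48,  2,  0],   # true C
    [ 1,  0,  2, 51,  1],   # true D
    [ 2,  1,  0,  3, 49],   # true E
])

a = 0                                    # index of class A
precision = cm[a, a] / cm[:, a].sum()    # among everything predicted as A, how much really is A
recall = cm[a, a] / cm[a, :].sum()       # among everything that really is A, how much was found
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)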
2. Using fashion_mnist
tensorflow, prettytable and numpy need to be installed with pip beforehand.
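For example, all three can be installed with a single command (any reasonably recent versions should work):

pip install tensorflow prettytable numpy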
from tensorflow import keras
import numpy as np
import prettytable
# Download the dataset
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Human-readable class label names
class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Boot']
# Normalize pixel values to [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0
# Build a simple DNN: flatten the 28x28 image, one hidden ReLU layer, softmax output layer
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
# Loss function, optimizer and evaluation metric
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss=keras.losses.sparse_categorical_crossentropy,
    metrics=['accuracy']
)
# Train the model
model.fit(train_images, train_labels, epochs=5)
# Run the trained model over the test set and tally a confusion matrix
predictions = model.predict(test_images)
conf_matrix = np.zeros((10, 10), dtype=int)   # rows = true label, columns = predicted label
for i in range(len(test_images)):
    conf_matrix[test_labels[i]][np.argmax(predictions[i])] += 1
result_table = prettytable.PrettyTable()
result_table.field_names = ['Type', 'Precision', 'Recall', 'F1']
for i in range(10):
    precision = conf_matrix[i][i] / conf_matrix[:, i].sum()   # column sum: everything predicted as class i
    recall = conf_matrix[i][i] / conf_matrix[i].sum()         # row sum: everything that really is class i
    f1 = 2 * precision * recall / (precision + recall)
    result_table.add_row([class_names[i], round(precision, 3), round(recall, 3), round(f1, 3)])
print(result_table)
The script prints a table with one row per class, showing that class's precision, recall and F1 on the 10,000 test images.
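As a quick cross-check of the manual calculation, the same per-class metrics can be printed with scikit-learn's classification_report (assuming scikit-learn is installed; this snippet continues from the script above and reuses predictions, test_labels and class_names):

from sklearn.metrics import classification_report

# Predicted class index for every test sample, then a per-class precision/recall/F1 report.
y_pred = np.argmax(predictions, axis=1)
print(classification_report(test_labels, y_pred, target_names=class_names, digits=3))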