Case 2: News classification (a multiclass problem)
In this section we build a network that classifies Reuters newswires into 46 mutually exclusive topics, i.e. a 46-way classification problem.
1. Loading the dataset
from keras.datasets import reuters

(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
The data is restricted to the 10,000 most frequently occurring words, giving 8,982 training samples and 2,246 test samples.
len(train_data)
8982
len(test_data)
2246
train_data[10]
2. Decoding the indices back to newswire text
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that the indices are offset by 3,
# because 0, 1 and 2 are reserved for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
train_labels[10]   # each label is an integer between 0 and 45
3. Encoding the data
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1
    return results

# Vectorize the training data
x_train = vectorize_sequences(train_data)
# Vectorize the test data
x_test = vectorize_sequences(test_data)
# Vectorize the labels (one-hot encoding)
def to_one_hot(labels, dimension=46):
    results = np.zeros((len(labels), dimension))
    for i, label in enumerate(labels):
        results[i, label] = 1
    return results

one_hot_train_labels = to_one_hot(train_labels)
one_hot_test_labels = to_one_hot(test_labels)

# Keras provides a built-in way to do the same thing
from keras.utils.np_utils import to_categorical

one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
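As a quick sanity check (a sketch, not part of the original listing), the manual encoding and to_categorical should produce the same array, differing at most in dtype:

assert np.array_equal(to_one_hot(train_labels), to_categorical(train_labels))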
4. Defining the model
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
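To get a feel for the sizes involved you can print the model summary; the parameter counts noted in the comments below are plain arithmetic based on the layer sizes above, and the exact table layout depends on your Keras version.

model.summary()
# Dense(64): 10000 * 64 + 64 = 640,064 parameters
# Dense(64): 64 * 64 + 64    = 4,160 parameters
# Dense(46): 64 * 46 + 46    = 2,990 parameters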
5. Compiling the model
For this problem the best loss function is categorical_crossentropy, which measures the distance between two probability distributions: here, the distance between the probability distribution output by the network and the true distribution of the labels.
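As an illustration (a minimal NumPy sketch, not part of the original listing, with a made-up predicted distribution), categorical crossentropy for a single sample is the negative log-probability the model assigns to the true class when the target is one-hot:

import numpy as np

y_true = np.zeros(46)
y_true[3] = 1.0                    # one-hot target: the true class is 3
y_pred = np.full(46, 0.01)
y_pred[3] = 0.55                   # hypothetical softmax output, sums to 1

cross_entropy = -np.sum(y_true * np.log(y_pred))
print(cross_entropy)               # -log(0.55), about 0.598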
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
6. Setting aside a validation set
Set apart 1,000 samples from the training data to use as a validation set.
x_val = x_train[:1000]
partial_x_train = x_train[1000:]

y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
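A quick shape check (a sketch, assuming the cells above have been run) confirms the split: 7,982 training samples remain after holding out 1,000 for validation.

print(partial_x_train.shape)   # (7982, 10000)
print(x_val.shape)             # (1000, 10000)
print(partial_y_train.shape)   # (7982, 46)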
7. Training the model
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
8. Plotting the training and validation loss
import matplotlib.pyplot as plt

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
9. Plotting the training and validation accuracy
plt.clf()   # Clear the figure

acc = history.history['acc']
val_acc = history.history['val_acc']

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
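Instead of eyeballing the plot, you can read the best epoch off the history object directly (a small sketch, not in the original; the exact value depends on the run, but for this setup it is typically around 9):

best_epoch = int(np.argmin(history.history['val_loss'])) + 1   # epochs are 1-indexed
print(best_epoch)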
10. Retraining a model from scratch
The plots show that the network begins to overfit after roughly nine epochs, so we train a fresh model for nine epochs and then evaluate it on the test set. The intermediate layers again have 64 hidden units each.
# Train a new model from scratch
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(partial_x_train,
          partial_y_train,
          epochs=9,
          batch_size=512,
          validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
[0.981157986054119, 0.790739091745149]
This approach reaches an accuracy of about 79%. To judge whether that is good, compare it against the accuracy of a purely random classifier:
import copy

test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
0.19011576135351738
A completely random classifier only reaches an accuracy of about 19%, so the model's ~79% looks quite good by comparison.
# Generate predictions on new data
predictions = model.predict(x_test)
predictions[0].shape   # each prediction is a vector of length 46
np.sum(predictions[0])      # the 46 entries form a probability distribution and sum to 1
np.argmax(predictions[0])   # the class with the largest predicted probability
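As a sketch (not in the original listing), you can recover the overall test accuracy from these predictions by taking the argmax of each row and comparing against the integer labels; the value should be close to results[1] from model.evaluate above.

predicted_classes = np.argmax(predictions, axis=1)
test_acc = float(np.sum(predicted_classes == np.array(test_labels))) / len(test_labels)
print(test_acc)   # roughly 0.79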
11. A different way to handle the labels and the loss
# Cast the labels as integer tensors instead of one-hot vectors
y_train = np.array(train_labels)
y_test = np.array(test_labels)

# sparse_categorical_crossentropy works with integer labels directly
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
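This loss is mathematically identical to categorical_crossentropy; only the label interface changes. A minimal sketch of using it (the *_sparse variable names are introduced here for illustration and are not in the original), reusing the same validation split:

y_val_sparse = y_train[:1000]
partial_y_train_sparse = y_train[1000:]

model.fit(partial_x_train,
          partial_y_train_sparse,
          epochs=9,
          batch_size=512,
          validation_data=(x_val, y_val_sparse))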
12. The importance of sufficiently large intermediate layers
The final output is 46-dimensional, but in the code below the intermediate layer has only 4 hidden units, a dimensionality far smaller than 46.
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(partial_x_train,
          partial_y_train,
          epochs=20,
          batch_size=128,
          validation_data=(x_val, y_val))
Epoch 20/20 7982/7982 [==============================] - 2s 274us/step - loss: 0.4369 - acc: 0.8779 - val_loss: 1.7934 - val_acc: 0.7160
Validation accuracy now peaks at about 71%, an 8-point drop from before. The drop is mainly caused by trying to compress a large amount of information (enough information to recover the separating hyperplanes of 46 classes) into an intermediate space that is too low-dimensional.
13. Experiments
1. Intermediate layer with 32 units
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(partial_x_train,
          partial_y_train,
          epochs=20,
          batch_size=128,
          validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
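A further variant worth trying along the same lines (a sketch, not one of the original experiments): drop down to a single 64-unit hidden layer and compare the resulting test accuracy.

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(partial_x_train,
          partial_y_train,
          epochs=9,
          batch_size=512,
          validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results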