Keras: Saving the Best Model During Training


Reposted from: https://anifacc.github.io/deeplearning/machinelearning/python/2017/08/30/dlwp-ch14-keep-best-model-checkpoint/ (thanks to the original author for sharing).

Deep learning models usually take a long time to train, and if a run is interrupted partway through, rerunning everything from scratch wastes a lot of time. In this exercise we use a Keras checkpoint callback to monitor the model during training and save the good models as they appear. If training is interrupted, we can load the most recently saved file and continue from there, so the work already done is not lost.

So how do we checkpoint? Let's work through the exercises below.

  • Data: the Pima Indians diabetes dataset
  • Network topology: 8-12-8-1

1. Checkpointing each improvement

If the network's validation performance improves during training, save the model weights from that epoch to a new file.

Code:

# -*- coding: utf-8 -*-
# Checkpoint NN model improvements
# (uses the Keras 1.x / Python 2 API from the original post: init=, nb_epoch=, urllib.urlopen)
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy as np
import urllib

# load the Pima Indians diabetes dataset
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
raw_data = urllib.urlopen(url)
dataset = np.loadtxt(raw_data, delimiter=",")
X = dataset[:, 0:8]
y = dataset[:, 8]

seed = 42
np.random.seed(seed)

# create model: 8-12-8-1 topology
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

# compile
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# checkpoint: each time validation accuracy improves, save the weights to a new file
filepath = "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
callbacks_list = [checkpoint]

# fit
model.fit(X, y, validation_split=0.33, nb_epoch=150, batch_size=10,
          callbacks=callbacks_list, verbose=0)

Partial output:

Epoch 00139: val_acc did not improve
Epoch 00140: val_acc improved from 0.70472 to 0.71654, saving model to weights-improvement-140-0.72.hdf5
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc did not improve
Epoch 00145: val_acc did not improve
Epoch 00146: val_acc did not improve
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc did not improve
Epoch 00149: val_acc did not improve

In the working directory we will find a number of hdf5 files, one saved automatically each time the validation performance improved.
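When it is time to reload, we can pick out the strongest of these checkpoints by the validation accuracy encoded in the filename. A minimal sketch (my addition, assuming the weights-improvement-{epoch}-{val_acc}.hdf5 naming pattern from the script above):

import glob

# list every checkpoint written by the callback above
checkpoints = glob.glob("weights-improvement-*.hdf5")
# the last dash-separated field (minus the .hdf5 suffix) is the validation accuracy
best = max(checkpoints, key=lambda name: float(name[:-len(".hdf5")].split("-")[-1]))
print("Best checkpoint:", best)
# model.load_weights(best)  # reload it as shown in section 3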


2. Checkpointing the best model only

Keep only the model that performs best on the validation set over the whole training run.

Code:

# -*- coding: utf-8 -*-
# Checkpoint the weights for the best model on validation accuracy
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy as np
import urllib

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
raw_data = urllib.urlopen(url)
dataset = np.loadtxt(raw_data, delimiter=",")
X = dataset[:, 0:8]
y = dataset[:, 8]

seed = 42
np.random.seed(seed)

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

# compile
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# checkpoint: a single file, overwritten each time validation accuracy improves
filepath = 'weights.best.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
callbacks_list = [checkpoint]

# fit
model.fit(X, y, validation_split=0.33, nb_epoch=150, batch_size=10,
          callbacks=callbacks_list, verbose=0)

Partial output:

Epoch 00044: val_acc did not improve
Epoch 00045: val_acc improved from 0.69685 to 0.69685, saving model to weights.best.hdf5
Epoch 00046: val_acc did not improve
Epoch 00047: val_acc did not improve
Epoch 00048: val_acc did not improve
Epoch 00049: val_acc improved from 0.69685 to 0.70472, saving model to weights.best.hdf5
...
Epoch 00140: val_acc improved from 0.70472 to 0.71654, saving model to weights.best.hdf5
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc did not improve
Epoch 00145: val_acc did not improve
Epoch 00146: val_acc did not improve
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc did not improve
Epoch 00149: val_acc did not improve

The file weights.best.hdf5 now holds the model weights from epoch 140, the best epoch of this run.


3. Loading a saved model

Above we saved the best model found during training; if training was interrupted, we can load that model directly instead of starting over.

Code:

# -*- coding: utf-8 -*-
# Load and use weights from a checkpoint
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy as np
import urllib

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
raw_data = urllib.urlopen(url)
dataset = np.loadtxt(raw_data, delimiter=",")
X = dataset[:, 0:8]
y = dataset[:, 8]

seed = 42
np.random.seed(seed)

# create the same model architecture as before
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

# load the saved model weights
model.load_weights('weights.best.hdf5')

# compile
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print('Created model and loaded weights from hdf5 file')

# evaluate the restored model on the whole dataset
scores = model.evaluate(X, y, verbose=0)
print("{0}: {1:.2f}%".format(model.metrics_names[1], scores[1]*100))

Output:

Created model and loaded weights from hdf5 file
acc: 74.74%
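
The restored weights can also be used to continue an interrupted training run, which is the situation described in the introduction. A minimal sketch of that idea (my addition, reusing model, X and y from the script above; the checkpoint callback is re-attached so further improvements keep being saved):

from keras.callbacks import ModelCheckpoint

# resume training from the restored weights; note that only the weights were
# restored, so the Adam optimizer starts again with a fresh internal state
checkpoint = ModelCheckpoint('weights.best.hdf5', monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
model.fit(X, y, validation_split=0.33, nb_epoch=150, batch_size=10,
          callbacks=[checkpoint], verbose=0)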

4. Summary

This exercise showed how to save the best-performing model weights during neural network training so that they are ready for later use. If something goes wrong partway through a run, the saved checkpoint spares us from starting over, saving time and effort.

