Keras Cats vs. Dogs, Part 4: Data Augmentation + a Dropout Layer, 83% Accuracy


Copyright notice: this is an original post by the author. You are welcome to repost it, but please credit the source. Contact: 460356155@qq.com

When training a deep-learning model on a small dataset, overfitting can be reduced by augmenting the training data and adding a Dropout layer.

Augment the training data with random transformations:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1. / 255,            # scale pixel values to [0, 1]
    rotation_range=40,           # random rotations up to 40 degrees
    width_shift_range=0.2,       # random horizontal shifts (fraction of width)
    height_shift_range=0.2,      # random vertical shifts (fraction of height)
    shear_range=0.2,             # random shearing transformations
    zoom_range=0.2,              # random zoom
    horizontal_flip=True)        # randomly flip images horizontally
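The generators used for training and validation are referenced later but not shown in this excerpt. A minimal sketch of how they could be created, assuming the directory variables (train_dir, validation_dir) from the earlier posts in this series; batch_size = 32 is an assumed value that matches the 62 steps per epoch seen in the training log below:

test_datagen = ImageDataGenerator(rescale=1. / 255)  # no augmentation for validation/test data

batch_size = 32  # assumed value

train_generator = train_datagen.flow_from_directory(
    train_dir,                # assumed path to the training images
    target_size=(150, 150),   # must match the model's input_shape
    batch_size=batch_size,
    class_mode='binary')      # binary labels: cat / dog

validation_generator = test_datagen.flow_from_directory(
    validation_dir,           # assumed path to the validation images
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')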

Add a Dropout layer to the model:

from keras import layers, models

model = models.Sequential()

# Output feature map: 150-3+1 = 148x148; parameters: 32*3*3*3+32 = 896
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))  # output: 148/2 = 74x74

# Output feature map: 74-3+1 = 72x72; parameters: 64*3*3*32+64 = 18496
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 72/2 = 36x36

# Output feature map: 36-3+1 = 34x34; parameters: 128*3*3*64+128 = 73856
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 34/2 = 17x17

# Output feature map: 17-3+1 = 15x15; parameters: 128*3*3*128+128 = 147584
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 15/2 = 7x7 (floor)

# Flatten to a 1-D vector: 7*7*128 = 6272
model.add(layers.Flatten())

# Dropout layer: randomly drop 50% of the units during training
model.add(layers.Dropout(0.5))

# Parameters: 6272*512+512 = 3211776
model.add(layers.Dense(512, activation='relu'))

# Parameters: 512*1+1 = 513
model.add(layers.Dense(1, activation='sigmoid'))
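The compile step does not appear in this excerpt. A minimal sketch, assuming binary cross-entropy loss (the model ends in a single sigmoid unit) and an RMSprop optimizer; the learning rate of 1e-4 is an assumed value, not taken from the original post:

from keras import optimizers

# Binary classification head -> binary cross-entropy.
# metrics=['acc'] matches the acc/val_acc columns in the training log.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),  # assumed learning rate
              metrics=['acc'])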

 

Train for 100 epochs:

history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batch_size)
Epoch 1/100
62/62 [==============================] - 42s 674ms/step - loss: 0.6945 - acc: 0.5055 - val_loss: 0.6876 - val_acc: 0.5030
Epoch 2/100
62/62 [==============================] - 42s 680ms/step - loss: 0.6879 - acc: 0.5383 - val_loss: 0.6764 - val_acc: 0.6116
Epoch 3/100
62/62 [==============================] - 42s 682ms/step - loss: 0.6810 - acc: 0.5544 - val_loss: 0.6566 - val_acc: 0.6054
Epoch 4/100
62/62 [==============================] - 42s 685ms/step - loss: 0.6683 - acc: 0.5902 - val_loss: 0.6454 - val_acc: 0.5950
Epoch 5/100
......
Epoch 95/100
62/62 [==============================] - 41s 661ms/step - loss: 0.4641 - acc: 0.7792 - val_loss: 0.4375 - val_acc: 0.8027
Epoch 96/100
62/62 [==============================] - 42s 670ms/step - loss: 0.4336 - acc: 0.8075 - val_loss: 0.4010 - val_acc: 0.8233
Epoch 97/100
62/62 [==============================] - 42s 680ms/step - loss: 0.4359 - acc: 0.7933 - val_loss: 0.4201 - val_acc: 0.8185
Epoch 98/100
62/62 [==============================] - 43s 689ms/step - loss: 0.4559 - acc: 0.7878 - val_loss: 0.4266 - val_acc: 0.8130
Epoch 99/100
62/62 [==============================] - 42s 676ms/step - loss: 0.4267 - acc: 0.7994 - val_loss: 0.4054 - val_acc: 0.8161
Epoch 100/100
62/62 [==============================] - 43s 690ms/step - loss: 0.4413 - acc: 0.8009 - val_loss: 0.4077 - val_acc: 0.8192

Training curves (figure from the original post not reproduced here):
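A minimal sketch of how the accuracy and loss curves can be drawn from the history object with matplotlib; the history keys match the acc/val_acc/loss/val_loss columns in the log above:

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()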

Evaluate the model on the test set:

test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

test_loss, test_acc = model.evaluate_generator(test_generator, steps=test_generator.samples // batch_size)
print('test acc:', test_acc)
Found 400 images belonging to 2 classes.
test acc: 0.83

Confusion matrix (figure from the original post not reproduced here):
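A minimal sketch of how the confusion matrix could be computed with scikit-learn. Note that the generator must be created with shuffle=False so the prediction order lines up with generator.classes; the test_generator above uses the default shuffling, so a separate generator is created here:

import numpy as np
from sklearn.metrics import confusion_matrix

# Re-create the test generator without shuffling so that the order of
# predictions matches eval_generator.classes.
eval_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False)

# Predicted probabilities -> binary labels (threshold at 0.5).
probs = model.predict_generator(
    eval_generator,
    steps=int(np.ceil(eval_generator.samples / batch_size)))
preds = (probs > 0.5).astype(int).ravel()

print(confusion_matrix(eval_generator.classes, preds))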

 

