In the previous chapter's Keras handwritten-digit (MNIST) recognition example, the loss function we used was 'mse', i.e. mean squared error. How do we know whether the resulting model is overfitting? We can tell by comparing the accuracies reported on the training data and the testing data.
Source code and a screenshot of the run are below:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2019/9/9 13:23
# @Author  : BaoBao
# @Mail    : baobaotql@163.com
# @File    : test5.py
# @Software: PyCharm

import numpy as np
from keras.models import Sequential                # sequential model
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
from keras.datasets import mnist

def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()   # load the data
    number = 10000
    x_train = x_train[0:number]
    y_train = y_train[0:number]
    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)
    x_train = x_train.astype('float32')            # astype converts the data type
    x_test = x_test.astype('float32')
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)
    x_train = x_train / 255                        # normalize pixel values to the 0-1 range
    x_test = x_test / 255
    return (x_train, y_train), (x_test, y_test)

(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

model.compile(loss='mse', optimizer=SGD(lr=0.1), metrics=['accuracy'])
#model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

# train the model
model.fit(x_train, y_train, batch_size=100, epochs=20)

# evaluate and print the accuracy on training and testing data
result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTRAIN ACC :', result[1])
result = model.evaluate(x_test, y_test, batch_size=10000)
# print('\nTest loss:', result[0])
# print('\nAccuracy:', result[1])
print('\nTEST ACC :', result[1])
Run screenshot:
From the run results in the screenshot, the accuracy on the training data is 0.1127 and the accuracy on the testing data is 0.1134.
Although the accuracy is low, the training and testing accuracies are almost identical, so this is not an overfitting problem; the issue lies in how the model itself is set up.
Consider changing the loss function: replace the original 'mse' with 'categorical_crossentropy' and observe the training results again.
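Before looking at the full listing, here is a tiny numeric illustration of why this switch helps (my own sketch, not part of the original code): early in training a 10-class softmax output is close to uniform, and in that regime MSE produces a much weaker error signal than cross-entropy.

import numpy as np

y_true = np.zeros(10)
y_true[3] = 1.0                         # one-hot label for class 3
y_pred = np.full(10, 0.1)               # near-uniform softmax output at the start of training

mse = np.mean((y_true - y_pred) ** 2)   # ~0.09 -> tiny loss, tiny gradients
cce = -np.sum(y_true * np.log(y_pred))  # ~2.30 -> much larger error signal
print(mse, cce)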
Source code (only the loss was changed):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2019/9/9 13:23
# @Author  : BaoBao
# @Mail    : baobaotql@163.com
# @File    : test5.py
# @Software: PyCharm

import numpy as np
from keras.models import Sequential                # sequential model
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
from keras.datasets import mnist

def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()   # load the data
    number = 10000
    x_train = x_train[0:number]
    y_train = y_train[0:number]
    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)
    x_train = x_train.astype('float32')            # astype converts the data type
    x_test = x_test.astype('float32')
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)
    x_train = x_train / 255                        # normalize pixel values to the 0-1 range
    x_test = x_test / 255
    return (x_train, y_train), (x_test, y_test)

(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

#model.compile(loss='mse', optimizer=SGD(lr=0.1), metrics=['accuracy'])
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

# train the model
model.fit(x_train, y_train, batch_size=100, epochs=20)

# evaluate and print the accuracy on training and testing data
result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTRAIN ACC :', result[1])
result = model.evaluate(x_test, y_test, batch_size=10000)
# print('\nTest loss:', result[0])
# print('\nAccuracy:', result[1])
print('\nTEST ACC :', result[1])
Run screenshot:
deep layer
Consider making the hidden layers deeper:
for _ in range(10):
    model.add(Dense(units=689, activation='sigmoid'))
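For context, this is roughly where the loop sits in the model definition; a sketch assuming the loop simply adds ten more hidden layers between the first hidden layer and the output layer (the original post only shows the loop itself):

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
for _ in range(10):                                   # ten additional 689-unit sigmoid layers
    model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])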
The result is still not great... (with this many sigmoid layers the gradient shrinks as it propagates back through the network, so plain SGD learns very slowly).
normalize
The images are currently normalized: each pixel is represented by a value between 0 and 1. What happens if we skip the normalization, i.e. remove the division by 255?
# comment out the two normalization lines
# x_train = x_train / 255
# x_test = x_test / 255
You will find that training fails again. A detail as small as whether or not you normalize the inputs has a decisive effect on the result: with raw pixel values in 0-255 the sigmoid units saturate and learning barely progresses.
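A quick way to check what the network actually receives in the two cases (my own sanity check, not from the original post):

# with the division by 255 the inputs lie in [0, 1];
# without it they lie in [0, 255], which pushes the sigmoid units into saturation
print(x_train.min(), x_train.max())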
optimizer
Now modify the optimizer: change SGD to Adam and run again. You will find that with Adam the model ends up converging to roughly the same accuracy, but it climbs there much faster.
I won't paste the full source again; only the optimizer line was changed:
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
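The string 'adam' uses Keras' default hyperparameters; if you want to control the learning rate explicitly you can pass the Adam object that is already imported at the top of the script (a sketch; 0.001 is the default step size):

model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001), metrics=['accuracy'])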
You will be pleasantly surprised: wow, the training accuracy reaches 100%, and the test accuracy is also quite good!
Run screenshot:
Random noise
Add noise to the test data and see how much the accuracy drops. I won't paste the complete code again:
x_test = np.random.normal(x_test)   # each pixel becomes a Gaussian sample centred on its original value (std 1)
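np.random.normal(x_test) draws, for every pixel, a Gaussian sample whose mean is the original pixel value and whose standard deviation is 1; an equivalent and perhaps more explicit way to write it (my own sketch) is:

x_test = x_test + np.random.normal(0.0, 1.0, x_test.shape)   # add zero-mean, unit-variance noise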
You can see that the result on the training data is fine, but the test result is not: overfitting shows up!
Run screenshot:
dropout
# dropout: add a Dropout layer after every hidden layer
model.add(Dense(input_dim=28*28, units=689, activation='relu'))
model.add(Dropout(0.7))
model.add(Dense(units=689, activation='relu'))
model.add(Dropout(0.7))
model.add(Dense(units=689, activation='relu'))
model.add(Dropout(0.7))
model.add(Dense(units=10, activation='softmax'))
Keep in mind that after adding dropout the accuracy on the training data gets worse, while the test accuracy improves (Keras disables dropout automatically at evaluation time, so the full network is used when testing).