Week 4 Programming
Goal: build a deep neural network that recognizes cats.
Core ideas:
- forward propagation
- backward propagation
Pay attention to the initial values from which forward and backward propagation start (written out below).
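Concretely, in the code below forward propagation starts from A[0] = X, and backward propagation starts from the gradient of the cross-entropy cost with respect to the output activation, dA[L-1] = -(Y/Y_p) + (1-Y)/(1-Y_p), where Y_p = A[L-1].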
Dataset: the same dataset as in the first programming assignment.
Code workflow:
- initialize the parameters (W, b) according to the network structure
- write out the unit functions (linear, sigmoid, relu, sigmoid_backward, relu_backward)
- forward and backward propagation, outputting the gradients
- a single gradient-descent step that updates the parameters
- a prediction function
- an integrated function that ties the model together
The code, with detailed explanations and reasoning interspersed:
Line 16: random initialization; the (n1, n0) matrix W1 is stored in the dictionary as parameters['W1'].
Note: the random initialization is scaled by * np.sqrt(2/layers_dims[l-1]), not by the *0.01 that Andrew Ng uses.
The reason is that this is a network with several hidden layers: with *0.01, after a few rounds of propagation the values in the later, deeper layers are driven to 0.
The detailed reasons are covered in these four articles:
https://blog.csdn.net/shwan_ma/article/details/76257967
https://www.cnblogs.com/makefile/p/init-weight.html?utm_source=itdadao&utm_medium=referral
https://blog.csdn.net/marsggbo/article/details/77771497
https://blog.csdn.net/u013082989/article/details/53770851
(Andrew Ng's advice tripped me up here: starting with *0.01 the cost never changed, and it took me a long time to track down why.)
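As a quick illustration of the point above, here is a minimal sketch (not part of the assignment code; the layer sizes are made up) that pushes a few random examples through some relu layers with both scalings and prints the spread of the activations:

import numpy as np

np.random.seed(0)
dims = [100, 80, 60, 40, 20]           # hypothetical layer sizes, just for illustration
A_small = np.random.randn(dims[0], 5)  # 5 fake input examples
A_he = A_small.copy()
for l in range(1, len(dims)):
    W_small = np.random.randn(dims[l], dims[l-1]) * 0.01                 # Ng's *0.01 scaling
    W_he = np.random.randn(dims[l], dims[l-1]) * np.sqrt(2 / dims[l-1])  # He-style scaling
    A_small = np.maximum(0, np.dot(W_small, A_small))                    # relu(W A)
    A_he = np.maximum(0, np.dot(W_he, A_he))
    print(l, A_small.std(), A_he.std())  # *0.01 column shrinks toward 0, He column stays O(1)

With *0.01 the activations collapse toward zero after a few layers, so the gradients (which are proportional to them) vanish and the cost stops moving; the np.sqrt(2/n_prev) scaling keeps them at a usable size.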
Lines 21-38: the unit functions are listed together here because I want to keep this model standardized. You can add any unit function you like at this point, for example tanh; just remember that when you add a tanh function you must also add a corresponding tanh_backward function. Every forward step needs its backward counterpart (see the sketch below).
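For instance, a tanh pair could look like this (a sketch of my own, not part of the code below):

def tanh(Z):
    A = np.tanh(Z)
    return A

def tanh_backward(dA, Z):
    A = np.tanh(Z)
    dZ = dA * (1 - A ** 2)  # d/dZ tanh(Z) = 1 - tanh(Z)**2
    return dZ

You would also need a matching if activations[l-1]=='tanh' branch in the forward loop of propagate and an if activations[-l]=='tanh' branch in the backward loop.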
Lines 41-71: the propagate function. The code looks complicated but is actually easy to follow. Forward propagation is a single path straight to the cost function: at layer l, first compute Z[l], then compute A[l] with the chosen activation function.
For backward propagation, look back at the core ideas above. Depending on the layer's activation function, pass dA[l] and Z[l] into the corresponding backward unit function to get dZ[l], then compute dW[l] and db[l] step by step. Suppose L=5: the output activation's gradient is dA4, so backpropagation starts from dA[L-1].
The propagate function also returns Y_p, because I did not want to write forward propagation a second time in the prediction function, so Y_p was added as an extra output.
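Written out, the per-layer formulas that the two loops implement are (g is the layer's activation, m the number of examples):

\[ Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = g^{[l]}(Z^{[l]}) \]
\[ J = -\frac{1}{m} \sum \big( Y \log \hat{Y} + (1-Y) \log(1-\hat{Y}) \big) \]
\[ dZ^{[l]} = dA^{[l]} * g^{[l]\prime}(Z^{[l]}), \qquad dW^{[l]} = \tfrac{1}{m}\, dZ^{[l]} A^{[l-1]T}, \qquad db^{[l]} = \tfrac{1}{m} \textstyle\sum_{\text{cols}} dZ^{[l]}, \qquad dA^{[l-1]} = W^{[l]T} dZ^{[l]} \]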
Lines 122, 123: the two hyperparameter inputs. Line 122 takes layers_dims, i.e. the structure of the network you want to build:
how many hidden layers you want and how many nodes each of them has, assembled into a list.
Line 123 takes the activation function of each layer. Since my unit functions only include relu and sigmoid, only those two keywords can be passed in.
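For example, the run at the bottom of this post uses:

layers_dims = [12288, 20, 7, 5, 1]                 # input layer, three hidden layers, output layer
activations = ['relu', 'relu', 'relu', 'sigmoid']  # one activation per layer 1..4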
PS: assembling the configuration out of two parallel lists is fragile; it is easy to pass mismatched parameters by accident. A better approach would be to pass in a single dictionary, for example as sketched below.
Of course, the code in the middle that consumes layers_dims and activations would then have to change a little as well; I was lazy and did not bother.
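Something along these lines (a hypothetical sketch only; net_config and its keys are made-up names, not used anywhere in the code below):

net_config = {
    'layer1': {'units': 20, 'activation': 'relu'},
    'layer2': {'units': 7,  'activation': 'relu'},
    'layer3': {'units': 5,  'activation': 'relu'},
    'layer4': {'units': 1,  'activation': 'sigmoid'},
}

Keeping each layer's size and activation together in one entry makes it much harder for the two to get out of sync.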
The final analysis step simply prints out the images that were misclassified.
import numpy as np
import h5py
import matplotlib.pyplot as plt
import lr_utils
import testCases_v2

# the three lines below set the default figure format
plt.rcParams['figure.figsize']=(5.0,4.0)
plt.rcParams['image.interpolation']='nearest'
plt.rcParams['image.cmap']='gray'

np.random.seed(1)  # generate predictable random values

def initialize_parameters(layers_dims):
    np.random.seed(3)
    parameters={}
    for l in range(1,len(layers_dims)):
        # He-style scaling np.sqrt(2/n_prev) instead of *0.01 (see the note above)
        parameters['W'+str(l)]=np.random.randn(layers_dims[l],layers_dims[l-1])*np.sqrt(2/layers_dims[l-1])
        parameters['b'+str(l)]=np.zeros(shape=(layers_dims[l],1))
    return parameters

# unit functions
def linear(A,W,b):
    Z=np.dot(W,A)+b
    return Z

def sigmoid(Z):
    A=1/(1+np.exp(-Z))
    return A

def relu(Z):
    A=np.maximum(0,Z)
    return A


def sigmoid_backward(dA,Z):
    s=1/(1+np.exp(-Z))
    dZ=dA*s*(1-s)
    return dZ

def relu_backward(dA,Z):
    dZ=dA.copy()
    dZ[Z<=0]=0
    return dZ

# forward and backward propagation
def propagate(X,Y,parameters,layers_dims,activations):
    m=X.shape[1]
    L=len(layers_dims)

    # forward propagation
    caches,grads={'A0':X},{}
    for l in range(1,L):
        caches['Z'+str(l)]=linear(caches['A'+str(l-1)],parameters['W'+str(l)],parameters['b'+str(l)])
        if activations[l-1]=='sigmoid':
            caches['A'+str(l)]=sigmoid(caches['Z'+str(l)])
        if activations[l-1]=='relu':
            caches['A'+str(l)]=relu(caches['Z'+str(l)])
    Y_p=caches['A'+str(L-1)]

    # cost function
    cost=-np.sum(Y*np.log(Y_p)+(1-Y)*np.log(1-Y_p))/m
    cost=np.squeeze(cost)

    # backward propagation
    grads['dA'+str(L-1)]=-(Y/Y_p)+(1-Y)/(1-Y_p)
    for l in range(1,L):
        if activations[-l]=='sigmoid':
            grads['dZ'+str(L-l)]=sigmoid_backward(grads['dA'+str(L-l)],caches['Z'+str(L-l)])
        if activations[-l]=='relu':
            grads['dZ'+str(L-l)]=relu_backward(grads['dA'+str(L-l)],caches['Z'+str(L-l)])
        grads['dW'+str(L-l)]=np.dot(grads['dZ'+str(L-l)],(caches['A'+str(L-l-1)]).T)/m
        grads['db'+str(L-l)]=np.sum(grads['dZ'+str(L-l)],axis=1,keepdims=True)/m
        grads['dA'+str(L-l-1)]=np.dot((parameters['W'+str(L-l)]).T,grads['dZ'+str(L-l)])

    return grads,Y_p,cost

# update parameters
def update_parameters(parameters,grads,learning_rate,layers_dims):
    L=len(layers_dims)
    for l in range(1,L):
        parameters['W'+str(l)]=parameters['W'+str(l)]-learning_rate*grads['dW'+str(l)]
        parameters['b'+str(l)]=parameters['b'+str(l)]-learning_rate*grads['db'+str(l)]
    return parameters

# prediction function
def predict(X,Y,parameters,layers_dims,activations):
    m=X.shape[1]
    grads,Y_p,cost=propagate(X,Y,parameters,layers_dims,activations)
    Y_p=np.round(Y_p)  # threshold the probabilities at 0.5
    print('accuracy: '+str(float(np.sum((Y_p==Y))/m)))
    return Y_p

# assemble the model
def model3(X,Y,num_iterations,learning_rate,layers_dims,activations,print_cost=False,isplot=True):
    np.random.seed(1)
    parameters=initialize_parameters(layers_dims)
    L=len(layers_dims)
    costs=[]
    for i in range(num_iterations):
        grads,Y_p,cost=propagate(X,Y,parameters,layers_dims,activations)
        parameters=update_parameters(parameters,grads,learning_rate,layers_dims)
        if i%100 == 0:
            costs.append(cost)
            if print_cost:
                print('after iteration of %d cost:%f'%(i,cost))
    if isplot:
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.title('Learning rate ='+str(learning_rate))
        plt.show()
    return parameters

train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = lr_utils.load_dataset()

train_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T

train_x = train_x_flatten / 255
train_y = train_set_y
test_x = test_x_flatten / 255
test_y = test_set_y

layers_dims = [12288,20,7,5,1]
activations=['relu','relu','relu','sigmoid']
parameters=model3(train_x,train_y,num_iterations=2500,learning_rate=0.0075,layers_dims=layers_dims,activations=activations,print_cost=True,isplot=False)
Y_p=predict(test_x,test_y,parameters,layers_dims,activations)

# analysis
def print_mislabeled_images(classes,X,y,p):
    a=p+y  # p + y == 1 exactly where the prediction and the label disagree
    mislabeled_indices=np.asarray(np.where(a==1))
    plt.rcParams['figure.figsize'] = (40.0, 40.0)
    num_images = len(mislabeled_indices[0])
    for i in range(num_images):
        index = mislabeled_indices[1][i]

        plt.subplot(2, num_images, i + 1)
        plt.imshow(X[:,index].reshape(64,64,3), interpolation='nearest')
        plt.axis('off')
        plt.title("Prediction: " + classes[int(p[0,index])].decode("utf-8") + " \n Class: " + classes[y[0,index]].decode("utf-8"))

print_mislabeled_images(classes,test_x,test_y,Y_p)