Course 4 (Convolutional Neural Networks), Week 2 (Deep convolutional models: case studies) —— 2. Programming assignments: Keras Tutorial - The Happy House (not graded)


Keras tutorial - the Happy House

Welcome to the first assignment of week 2. In this assignment, you will:

  1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
  2. See how you can in a couple of hours build a deep learning algorithm.

Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.

In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!

 

【code】
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

%matplotlib inline

  

Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).
 

1 - The Happy House

For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.

Figure 1: The Happy House

 
 

As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.

You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.

 

Run the following code to normalize the dataset and learn about its shapes.

 
【code】
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))

【result】

number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)

Details of the "Happy" dataset:

  • Images are of shape (64,64,3)
  • Training: 600 pictures
  • Test: 150 pictures

It is now time to solve the "Happy" Challenge.

  

2 - Building a model in Keras

Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.

Here is an example of a model in Keras:

# GRADED FUNCTION: HappyModel

def HappyModel(input_shape):
    """
    Implementation of the HappyModel.
    
    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """
    
    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well.
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    
    ### Input ###
    # Input layer notes:
    #  (1) In 'channels_first' mode, the input is a 4D tensor of shape (samples, channels, rows, cols)
    #  (2) In 'channels_last' mode, the input is a 4D tensor of shape (samples, rows, cols, channels)
    X_input = Input(shape=input_shape)

    ### Zero-Padding: pads the border of X_input with zeroes ###
    # ZeroPadding2D notes:
    # padding: a tuple of integers giving the number of zeros added at the start and end of the axes being padded
    X = ZeroPadding2D(padding=(3, 3))(X_input)

    ### CONV -> BN -> RELU Block applied to X ###
    # Conv2D notes:
    #  (1) filters: the number of convolution kernels (i.e. the output depth)
    #  (2) kernel_size: a single integer or a list/tuple of two integers giving the height and width of the kernel; a single integer means the same length in each spatial dimension
    #  (3) strides: a single integer or a list/tuple of two integers giving the stride of the convolution; a single integer means the same stride in each spatial dimension
    X = Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), name='conv0')(X)

    ### BatchNormalization: renormalizes the previous layer's activations over each batch, so that the output has mean close to 0 and standard deviation close to 1 ###
    # BatchNormalization notes:
    # axis: integer, the axis to normalize, usually the feature axis (here I take this to be the channels axis).
    # For example, after a 2D convolution with data_format="channels_first" you would usually set axis=1; with data_format="channels_last" you would usually set axis=3
    X = BatchNormalization(axis=3, name='bn0')(X)

    ### Activation layer ###
    # Activation notes:
    # activation: the activation function to use, given as the name of a predefined activation or as a TensorFlow/Theano function
    X = Activation('relu')(X)

    ### MAXPOOL layer ###
    # MaxPooling2D notes:
    # pool_size: an integer or a tuple of two integers, the factors by which to downscale in the (vertical, horizontal) directions; (2, 2) halves the image in both dimensions, and a single integer means the same factor in both dimensions.
    X = MaxPooling2D(pool_size=(2, 2), name='max_pool')(X)  # pool_size=(2, 2) halves the image along both rows and cols

    ### Flatten layer ###
    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)  # Flatten "squashes" the input, i.e. turns a multi-dimensional input into a 1D vector

    ### Dense layer ###
    # Dense notes:
    # Dense is the ordinary fully-connected layer; it computes output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function, kernel is the layer's weight matrix, and bias is its bias vector (added only when use_bias=True).
    #  (1) units: a positive integer, the output dimension of the layer
    #  (2) activation: the activation function, given as a predefined activation name or an element-wise Theano function; if unspecified, no activation is applied (i.e. the linear activation a(x) = x)
    X = Dense(units=1, activation='sigmoid', name='fc')(X)  # 1 is the output dimension of this layer

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    
    ### END CODE HERE ###
    
    return model

 

Note that Keras uses a different convention for variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).
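To make the convention concrete, here is a minimal sketch (not part of the assignment; TinyModel and its layer sizes are made up for illustration) of the same pattern: each layer call consumes the current X and returns a new tensor that is stored back into X, while X_input is kept aside so Model() can connect the graph's input to its output.

【code】
from keras.layers import Input, Dense, Activation
from keras.models import Model

def TinyModel(input_shape):
    X_input = Input(shape=input_shape)    # kept separate: needed again at the end
    X = Dense(8, name='fc1')(X_input)     # X now refers to the output of fc1
    X = Activation('relu')(X)             # X is reassigned to the relu output
    X = Dense(1, activation='sigmoid', name='fc2')(X)  # reassigned once more
    return Model(inputs=X_input, outputs=X, name='TinyModel')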

Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout().

Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying them to.
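As a concrete check of those shapes (hand-computed for the example model above, using floor((n + 2p - f)/s) + 1 for each spatial dimension of a convolution):

【code】
# Shape bookkeeping for the example model above, starting from a (64, 64, 3) image:
#   Input:                        (64, 64, 3)
#   ZeroPadding2D((3, 3)):        (70, 70, 3)    # 64 + 2*3
#   Conv2D(32, (3, 3), stride 1): (68, 68, 32)   # (70 - 3)/1 + 1 = 68
#   BatchNormalization / ReLU:    (68, 68, 32)   # shapes unchanged
#   MaxPooling2D((2, 2)):         (34, 34, 32)   # 68 / 2
#   Flatten:                      (36992,)       # 34 * 34 * 32
#   Dense(1):                     (1,)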

  

 
 【code】
# GRADED FUNCTION: HappyModel

def HappyModel(input_shape):
    """
    Implementation of the HappyModel.
    
    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """
    
    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well.
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    
    X_input = Input(input_shape)

    X = Conv2D(filters=32, kernel_size=(3, 3), strides = (1, 1), padding='same', name = 'conv0')(X_input)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X) 

    X = MaxPooling2D(pool_size=(2, 2), name='max_pool0')(X)
    
    X = Flatten()(X)
    X = Dense(units=1, activation='sigmoid', name='fc')(X)
     
    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    
    ### END CODE HERE ###
    
    return model

  

You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

  1. Create the model by calling the function above
  2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
  3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  4. Test the model on test data by calling model.evaluate(x = ..., y = ...)

If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.
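As a minimal sketch of how the four calls chain together (the optimizer, epochs and batch size here are illustrative placeholders, not the graded answers):

【code】
happyModel = HappyModel((64, 64, 3))                             # 1. create
happyModel.compile(optimizer='adam', loss='binary_crossentropy',
                   metrics=['accuracy'])                         # 2. compile
happyModel.fit(x=X_train, y=Y_train, epochs=10, batch_size=16)   # 3. train
preds = happyModel.evaluate(x=X_test, y=Y_test)                  # 4. test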

 

Exercise: Implement step 1, i.e. create the model.

【code】

### START CODE HERE ### (1 line)
happyModel = HappyModel((64, 64, 3))
### END CODE HERE ###

 

Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.  

【code】

### START CODE HERE ### (1 line)
happyModel.compile(optimizer = "sgd", loss = "binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###

  

Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.   

【code】

### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs = 20, batch_size = 16)
### END CODE HERE ###

【result】

Epoch 1/20
600/600 [==============================] - 10s - loss: 1.7182 - acc: 0.6833    
Epoch 2/20
600/600 [==============================] - 9s - loss: 0.1429 - acc: 0.9450     
Epoch 3/20
600/600 [==============================] - 10s - loss: 0.1196 - acc: 0.9533    
Epoch 4/20
600/600 [==============================] - 10s - loss: 0.0764 - acc: 0.9750    
Epoch 5/20
600/600 [==============================] - 10s - loss: 0.0666 - acc: 0.9833    
Epoch 6/20
600/600 [==============================] - 10s - loss: 0.0672 - acc: 0.9750    
Epoch 7/20
600/600 [==============================] - 10s - loss: 0.0634 - acc: 0.9817    
Epoch 8/20
600/600 [==============================] - 10s - loss: 0.0391 - acc: 0.9867    
Epoch 9/20
600/600 [==============================] - 10s - loss: 0.0464 - acc: 0.9883    
Epoch 10/20
600/600 [==============================] - 10s - loss: 0.0749 - acc: 0.9700    
Epoch 11/20
600/600 [==============================] - 10s - loss: 0.0410 - acc: 0.9867    
Epoch 12/20
600/600 [==============================] - 10s - loss: 0.0640 - acc: 0.9783    
Epoch 13/20
600/600 [==============================] - 10s - loss: 0.0571 - acc: 0.9867    
Epoch 14/20
600/600 [==============================] - 10s - loss: 0.0475 - acc: 0.9883    
Epoch 15/20
600/600 [==============================] - 10s - loss: 0.0307 - acc: 0.9900    
Epoch 16/20
600/600 [==============================] - 10s - loss: 0.0198 - acc: 0.9900    
Epoch 17/20
600/600 [==============================] - 10s - loss: 0.0411 - acc: 0.9867    
Epoch 18/20
600/600 [==============================] - 10s - loss: 0.0240 - acc: 0.9883    
Epoch 19/20
600/600 [==============================] - 10s - loss: 0.0262 - acc: 0.9917    
Epoch 20/20
600/600 [==============================] - 10s - loss: 0.0247 - acc: 0.9933    
Out[30]:
<keras.callbacks.History at 0x7f540de4ef60>

  

Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
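For example (a sketch, not a notebook cell): calling fit() again continues from the current weights; to restart from scratch you would rebuild and recompile the model first.

【code】
# Continue training from the weights learned so far (20 more epochs):
happyModel.fit(x=X_train, y=Y_train, epochs=20, batch_size=16)

# Start over with freshly initialized weights instead:
happyModel = HappyModel((64, 64, 3))
happyModel.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
happyModel.fit(x=X_train, y=Y_train, epochs=20, batch_size=16)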

 

Exercise: Implement step 4, i.e. test/evaluate the model.

【code】

### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x = X_test, y = Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))

【result】

150/150 [==============================] - 1s     

Loss = 0.167431052128
Test Accuracy = 0.94666667064

 

If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.

To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.


 

If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:

  • Try using blocks of CONV->BATCHNORM->RELU such as the following (see the sketch after this list):
    X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
  • You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
  • Change your optimizer. We find Adam works well.
  • If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
  • Run on more epochs, until you see the train accuracy plateauing.
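Putting several of these suggestions together, here is one possible (untested) variant, assuming the imports at the top of this notebook; HappyModelDeeper and its layer names are made up for illustration:

【code】
def HappyModelDeeper(input_shape):
    X_input = Input(input_shape)

    # Block 1: CONV -> BATCHNORM -> RELU, then MAXPOOL to halve height/width
    X = Conv2D(32, (3, 3), strides=(1, 1), padding='same', name='conv1')(X_input)
    X = BatchNormalization(axis=3, name='bn1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='pool1')(X)   # (64, 64, 32) -> (32, 32, 32)

    # Block 2: same pattern, keeping the channel count fairly large
    X = Conv2D(32, (3, 3), strides=(1, 1), padding='same', name='conv2')(X)
    X = BatchNormalization(axis=3, name='bn2')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='pool2')(X)   # (32, 32, 32) -> (16, 16, 32)

    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    return Model(inputs=X_input, outputs=X, name='HappyModelDeeper')

# Adam optimizer, more epochs, and a smaller batch size, as suggested above:
model2 = HappyModelDeeper((64, 64, 3))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model2.fit(x=X_train, y=Y_train, epochs=40, batch_size=12)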

Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.

Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
 
 
 

3 - Conclusion

Congratulations, you have solved the Happy House challenge!

Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.

 

What we would like you to remember from this assignment:

  • Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
  • Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.
 

4 - Test with your own image (Optional)

Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:

1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!

The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!

 
Test 1: a smiling picture
【code】
### START CODE HERE ###
img_path = 'images/happy_myself.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))

【result】

 

Test 2: an unhappy picture

【code】

### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))

 

【result】

 

Test 3: a crying picture (the result is inaccurate, probably because the training set contains no such pictures; after all, no one entering the Happy House stands crying loudly in front of the camera)

【code】

### START CODE HERE ###
img_path = 'images/cry_myself.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))

【result】  

 

 

Test 4: a picture of someone happy to the point of tears (a fairly extreme test; the result can arguably be counted as correct)

【code】

### START CODE HERE ###
img_path = 'images/cry_or_happy.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))

【result】

 

 

 

5 - Other useful functions in Keras (Optional)

Two other basic features of Keras that you'll find useful are:

  • model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs
  • plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.

Run the following code.

【code】

happyModel.summary()

【result】

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_7 (InputLayer)         (None, 64, 64, 3)         0         
_________________________________________________________________
conv0 (Conv2D)               (None, 64, 64, 32)        896       
_________________________________________________________________
bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
_________________________________________________________________
activation_25 (Activation)   (None, 64, 64, 32)        0         
_________________________________________________________________
max_pool0 (MaxPooling2D)     (None, 32, 32, 32)        0         
_________________________________________________________________
flatten_5 (Flatten)          (None, 32768)             0         
_________________________________________________________________
fc (Dense)                   (None, 1)                 32769     
=================================================================
Total params: 33,793
Trainable params: 33,729
Non-trainable params: 64
_________________________________________________________________

 

【code】 

plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))

 

【result】

 
