Building a Simple Convolutional Neural Network (CNN)


A convolutional neural network (Convolutional Neural Network, CNN) is a feed-forward neural network whose artificial neurons respond to units within a local receptive field; it performs particularly well on large-scale image processing. CNNs are very similar to ordinary neural networks: both consist of neurons with learnable weights and biases. Each neuron receives some inputs and computes a dot product, the network's output is a score for each class, and most of the usual neural-network techniques still apply.

A convolutional neural network usually contains the following kinds of layers:

  • Convolutional layer: each convolutional layer consists of several convolution units (filters), whose parameters are optimized by backpropagation. The convolution extracts different features of the input: the first convolutional layer may only capture low-level features such as edges, lines, and corners, while deeper layers iteratively build more complex features from the low-level ones.
  • Rectified Linear Units layer (ReLU layer): this layer uses the rectified linear unit f(x) = max(0, x) as its activation function.
  • Pooling layer: the features produced by a convolutional layer are usually high-dimensional, so the feature map is divided into regions and the maximum (or average) of each region is taken, yielding new, lower-dimensional features (see the small NumPy sketch after this list).
  • Dropout: when training ConvNets we usually drop a random fraction of the units' activations, which helps prevent overfitting to some extent.
  • Fully-connected layer: combines all the local features into global features and computes the final score for each class.
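To make the ReLU and max-pooling steps concrete, here is a tiny NumPy illustration (my own toy example, not part of the original post) that applies ReLU and 2x2 max pooling to a 4x4 feature map:

import numpy as np

# Toy 4x4 feature map, e.g. the output of one convolution filter
feature_map = np.array([[ 1., -2.,  3.,  0.],
                        [-1.,  5., -4.,  2.],
                        [ 0.,  1.,  2., -3.],
                        [ 4., -1.,  0.,  1.]])

# ReLU: negative activations are clipped to zero
relu_out = np.maximum(feature_map, 0)

# 2x2 max pooling with stride 2: keep the largest value in each 2x2 block,
# shrinking the 4x4 map to 2x2
pooled = relu_out.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[5. 3.]
                #  [4. 2.]]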

Now for the code. Today I will use a ConvNet for a very, very simple image-classification task: classifying images from the CIFAR-10 dataset, which contains airplanes, cats, dogs, and other objects.

First, we download the dataset (it can also be downloaded directly from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz ):

from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Local filename for the downloaded archive
tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
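The archive extracts into cifar-10-batches-py/, which contains five pickled training batches (data_batch_1 through data_batch_5) plus a test batch. The helper functions used later in this post read these files; a minimal sketch of what that loading looks like (my own illustration, assuming the standard CIFAR-10 python format) is:

import pickle
import numpy as np

def load_cifar10_batch(folder_path, batch_id):
    # Each batch file is a pickled dict with a (10000, 3072) 'data' array
    # and a 'labels' list of 10000 integers in 0..9
    with open('{}/data_batch_{}'.format(folder_path, batch_id), mode='rb') as f:
        batch = pickle.load(f, encoding='latin1')
    # Rows are stored channel-major (R, G, B planes); reshape and transpose
    # them into (num_samples, 32, 32, 3) images
    features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
    labels = batch['labels']
    return features, labels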

Once the data is loaded, we need to preprocess the images. The pixel values currently lie between 0 and 255; we rescale them to the range 0 to 1, which makes the later computations easier.

def normalize(x):
    """
    Normalize a list of sample image data to the range 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    a = 0
    b = 1
    grayscale_min = 0
    grayscale_max = 255
    return a + (((x - grayscale_min) * (b - a)) / (grayscale_max - grayscale_min))
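A quick sanity check of this function (my own example, not from the original post):

import numpy as np

print(normalize(np.array([0, 127.5, 255])))   # prints [0.  0.5 1. ]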

Since the CIFAR-10 dataset contains ten different classes of images, we now one-hot encode the labels.

import numpy as np

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    d = {0: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         1: [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         2: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
         3: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
         4: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
         5: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
         6: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
         7: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
         8: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
         9: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]}

    map_list = []
    for item in x:
        map_list.append(d[item])
    target = np.array(map_list)

    return target
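For example (my own illustration):

print(one_hot_encode([0, 3, 9]))
# [[1 0 0 0 0 0 0 0 0 0]
#  [0 0 0 1 0 0 0 0 0 0]
#  [0 0 0 0 0 0 0 0 0 1]]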

 

Now we can start building the ConvNet. First we need placeholders to hold the training images, the one-hot encoded labels, and the keep probability used for dropout.

import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # The first dimension is None so batches of any size can be fed in
    x = tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], 'x')
    return x


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    y = tf.placeholder(tf.float32, [None, n_classes], 'y')
    return y


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    keep_prob = tf.placeholder(tf.float32, None, 'keep_prob')
    return keep_prob

Next, we build the core of the ConvNet: a convolutional layer followed by a pooling layer (here we use max pooling).

def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    ## Weights and bias: filter shape is (height, width, input_depth, output_depth)
    weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1],
                                              x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
    ## Apply convolution
    conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
    ## Add bias
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    ## Apply ReLU
    conv_layer = tf.nn.relu(conv_layer)

    return tf.nn.max_pool(conv_layer,
                          ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                          strides=[1, pool_strides[0], pool_strides[1], 1],
                          padding='SAME')
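As a quick shape check (my own illustration, using the same arguments as the first layer of the network built later): with SAME padding, the stride-2 convolution on a (batch, 32, 32, 3) input gives (batch, 16, 16, 32), and the stride-2 max pool then gives (batch, 8, 8, 32).

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
layer = conv2d_maxpool(x, 32, (5, 5), (2, 2), (4, 4), (2, 2))
print(layer.get_shape().as_list())   # [None, 8, 8, 32]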

Next we implement a flatten layer, which turns x_tensor from a 4-D tensor into a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size).

def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # Get the shape of the tensor
    shape = x_tensor.get_shape().as_list()
    # Compute the flattened image size
    dim = np.prod(shape[1:])
    # Reshape the tensor, keeping the batch dimension
    return tf.reshape(x_tensor, [-1, dim])
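A quick shape illustration (mine, not from the post): flattening a (batch, 8, 8, 32) tensor gives (batch, 2048), since 8 × 8 × 32 = 2048.

t = tf.placeholder(tf.float32, [None, 8, 8, 32])
print(flatten(t).get_shape().as_list())   # [None, 2048]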

As the final step of the network, we need a fully connected layer plus an output layer that produces a batch × 10 tensor of class scores (logits, one per class). Note that the output layer applies no activation: the softmax is applied later, inside the loss.

def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weight = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[-1], num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros([num_outputs]))

    fc = tf.reshape(x_tensor, [-1, weight.get_shape().as_list()[0]])
    fc = tf.add(tf.matmul(fc, weight), bias)
    fc = tf.nn.relu(fc)

    return fc

def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weight_out = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[-1], num_outputs], stddev=0.1))
    bias_out = tf.Variable(tf.zeros([num_outputs]))

    out = tf.reshape(x_tensor, [-1, weight_out.get_shape().as_list()[0]])
    # No ReLU here: the raw logits are passed to the softmax inside the loss
    out = tf.add(tf.matmul(out, weight_out), bias_out)

    return out

With all the basic building blocks in place, we can now assemble the network.

def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # Three convolution + max-pooling blocks
    # (batch, 32, 32, 3) -> (batch, 8, 8, 32)
    conv1 = conv2d_maxpool(x, 32, (5, 5), (2, 2), (4, 4), (2, 2))

    # (batch, 8, 8, 32) -> (batch, 2, 2, 128)
    conv2 = conv2d_maxpool(conv1, 128, (5, 5), (2, 2), (2, 2), (2, 2))

    # (batch, 2, 2, 128) -> (batch, 1, 1, 256)
    conv3 = conv2d_maxpool(conv2, 256, (5, 5), (2, 2), (2, 2), (2, 2))

    #   flatten(x_tensor): (batch, 1, 1, 256) -> (batch, 256)
    flatten_layer = flatten(conv3)

    #   fully_conn(x_tensor, num_outputs)
    fc = fully_conn(flatten_layer, 1024)

    # Apply dropout to the fully connected layer using the keep_prob placeholder
    fc = tf.nn.dropout(fc, keep_prob)

    #   output(x_tensor, num_outputs): num_outputs is the number of classes
    output_layer = output(fc, 10)

    return output_layer


##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

 

Once the network is built, we can feed in the data and train the model.

Here I set the hyper-parameters fairly casually:

epochs = 30
batch_size = 256
keep_probability = 0.5

 

We also need a way to monitor training: at each step we print the loss on the current training batch and the accuracy on the validation set.

def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # Loss on the current training batch (dropout disabled: keep_prob = 1)
    loss = session.run(cost, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.
    })

    # Accuracy on the validation set (valid_features / valid_labels are the
    # preprocessed validation data, defined elsewhere)
    valid_acc = session.run(accuracy, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.
    })

    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
        loss,
        valid_acc))
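The training loop below relies on two helpers that are not shown in this post: helper.load_preprocess_training_batch, which yields preprocessed training batches from disk, and train_neural_network, which runs a single optimization step. A minimal sketch of the latter, consistent with how it is called (a hypothetical reconstruction, not the author's code), might look like this:

def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    # Run one optimization step on a single batch, with dropout enabled
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability
    })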

Model training:

save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
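The logits and accuracy tensors were given names earlier precisely so that the saved model can be restored later. A minimal restore sketch (my own example, reusing the save path above and the validation set as stand-in evaluation data):

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Rebuild the graph from the .meta file and restore the trained weights
    loader = tf.train.import_meta_graph(save_model_path + '.meta')
    loader.restore(sess, save_model_path)

    # Fetch tensors back by the names given when the graph was built
    loaded_x = loaded_graph.get_tensor_by_name('x:0')
    loaded_y = loaded_graph.get_tensor_by_name('y:0')
    loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
    loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

    acc = sess.run(loaded_acc, feed_dict={
        loaded_x: valid_features,
        loaded_y: valid_labels,
        loaded_keep_prob: 1.0
    })
    print('Restored model accuracy: {:.6f}'.format(acc))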

Here are the validation accuracies from the end of my training run:

Epoch 29, CIFAR-10 Batch 4:  Loss:     0.0139 Validation Accuracy: 0.625600
Epoch 29, CIFAR-10 Batch 5:  Loss:     0.0090 Validation Accuracy: 0.631000
Epoch 30, CIFAR-10 Batch 1:  Loss:     0.0138 Validation Accuracy: 0.638800
Epoch 30, CIFAR-10 Batch 2:  Loss:     0.0192 Validation Accuracy: 0.627400
Epoch 30, CIFAR-10 Batch 3:  Loss:     0.0055 Validation Accuracy: 0.633400
Epoch 30, CIFAR-10 Batch 4:  Loss:     0.0114 Validation Accuracy: 0.641800
Epoch 30, CIFAR-10 Batch 5:  Loss:     0.0050 Validation Accuracy: 0.647400

Not bad: above 60%, whereas random guessing would only get about 10%.

Of course, the model's performance can be improved further, for example by choosing better hyper-parameters or adding other tricks.

http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130

The link above collects results people have achieved on this dataset; the best is already at 96.53%, so take a look at how the top entries do it.

 

