3D Convolutions: Code Implementation

3D Convolutions: Understanding + Use Case

Based on an image convolutional neural network kernel, this article covers 3D convolutions and their implementation on the 3D MNIST dataset.

 

What is a convolution?

 

Mathematically, a convolution is an integral that expresses the amount of overlap of one function g as it is shifted over another function f.

 

Intuitively, a convolution acts like a mixer, blending one function with another to reduce the data space while preserving the information.
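This overlap-as-one-function-slides idea can be seen in a minimal discrete example with NumPy's `np.convolve` (a small illustration, not part of the original kernel):

```python
import numpy as np

# f is a signal, g a small kernel; np.convolve slides the (flipped)
# kernel across the signal and sums the overlap at each position.
f = np.array([1, 2, 3, 4])
g = np.array([1, 1])  # a simple 2-tap kernel

full = np.convolve(f, g, mode="full")    # every partial overlap
valid = np.convolve(f, g, mode="valid")  # only complete overlaps

print(full)   # [1 3 5 7 4]
print(valid)  # [3 5 7]
```

Note how "valid" mode shrinks the output, the same behaviour the convolutional layers below exhibit.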

 

In terms of neural networks and deep learning:

 

Convolutions are filters (matrices/vectors) with trainable parameters, used to extract low-dimensional features from the input data.

 

They preserve the spatial or positional relationships between input data points.

 

Convolutional neural networks exploit the spatially local correlation in data by enforcing local connectivity patterns between neurons of adjacent layers.

 

Intuitively, a convolution is the idea of applying a sliding window (a filter with trainable weights) over the input and producing a weighted sum (of the weights and the input) as the output. This weighted sum forms the feature space that serves as the input to the next layer.

 

For example, in a face recognition problem, the first few convolutional layers learn the key points of the input image, the next convolutional layers learn edges and shapes, and the final convolutional layers learn the face. In this example, the input space is first reduced to a lower-dimensional space (representing information about points/pixels), that space is then reduced to another space containing edges/shapes, and this is finally reduced to classify the faces in the image. Convolutions can be applied in N dimensions.

Types of convolutions:

 

Let's discuss the different types of convolutions:

 

1D Convolutions

 

The simplest convolution is the 1D convolution, typically used on sequence datasets (though it can be used in other cases as well). It extracts local 1D subsequences from the input sequence and identifies local patterns within the convolution window; applying a 1D convolution filter to a sequence yields new features. Another common use of 1D convolutions is in the NLP domain, where every sentence is represented as a sequence of words.
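The sliding-window behaviour described above can be sketched in plain NumPy (a minimal illustration, using the cross-correlation form that deep-learning layers actually compute, without kernel flipping):

```python
import numpy as np

def conv1d(seq, kernel):
    """Slide a 1-D filter over a sequence; each output element is the
    weighted sum of the window currently under the kernel."""
    k = len(kernel)
    return np.array([np.dot(seq[i:i + k], kernel)
                     for i in range(len(seq) - k + 1)])

seq = np.array([0., 1., 2., 3., 4., 5.])
kernel = np.array([-1., 0., 1.])  # responds to a local increase

print(conv1d(seq, kernel))  # [2. 2. 2. 2.]
```

The constant output reflects the constant slope of the input sequence; a trainable layer learns such kernels from data instead of fixing them by hand.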

2D Convolutions

On image datasets, 2D convolution filters are the ones mostly used in CNN architectures. The main idea of a 2D convolution is that the filter moves in two directions (x, y) to compute low-dimensional features from the image data. The output shape is also a 2D matrix.
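A minimal NumPy sketch of this two-direction sliding (loops kept explicit for clarity; real layers use optimized kernels):

```python
import numpy as np

def conv2d(img, kernel):
    """Slide a 2-D filter in both x and y; the output is itself
    a 2-D matrix of weighted sums."""
    kh, kw = kernel.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3))  # a 3x3 summing filter

out = conv2d(img, kernel)
print(out.shape)  # (2, 2)
print(out)        # [[45. 54.] [81. 90.]]
```

A 4x4 input with a 3x3 kernel yields a 2x2 output: each spatial dimension shrinks by (kernel size - 1), just as in the Conv3D layers below.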

 

3D Convolutions

A 3D convolution applies a 3D filter to the dataset; the filter moves in three directions (x, y, z) to compute low-level feature representations. The output shape is a 3D volume, such as a cube or cuboid. This helps with object detection in videos, 3D medical images, and similar data. 3D convolutions are not limited to 3D inputs; they can also be applied to 2D inputs such as images.
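The shrinking of each spatial dimension under such a "valid" 3D convolution can be checked with a quick sketch (plain NumPy; sizes chosen to match the 16x16x16 digits used later):

```python
import numpy as np

# A 3-D filter moves in x, y and z; with 'valid' sliding, each
# spatial dimension shrinks by (kernel_size - 1), and the output
# is again a 3-D volume.
volume = np.random.rand(16, 16, 16)  # e.g. one voxelized digit
k = 3                                # a 3x3x3 kernel

out_shape = tuple(d - k + 1 for d in volume.shape)
print(out_shape)  # (14, 14, 14)
```

This matches the model below, where the first Conv3D layer maps a 16x16x16 volume to 14x14x14.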

 


 

In addition, there are other types of convolutions:

Dilated (Atrous) Convolutions

Dilated (or atrous) convolutions define a spacing between the values in the kernel. Because of this spacing, the receptive field of the kernel increases: for example, a 3x3 kernel with a dilation rate of 2 has the same field of view as a 5x5 kernel. The complexity stays the same, but different features are generated in this case.
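The field-of-view claim can be checked with the standard effective-kernel-size formula (a small illustrative helper, not part of the original kernel):

```python
def effective_kernel(k, dilation):
    """Effective receptive field of a dilated kernel: gaps of
    (dilation - 1) are inserted between the k taps."""
    return k + (k - 1) * (dilation - 1)

# A 3x3 kernel with dilation rate 2 sees a 5x5 region,
# while its cost stays that of a 3x3 kernel.
print(effective_kernel(3, 1))  # 3
print(effective_kernel(3, 2))  # 5
print(effective_kernel(3, 3))  # 7
```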

Now, let's build a 3D convolutional neural network architecture on the 3D MNIST dataset. First, import the key libraries.

 

 

from keras.layers import Conv3D, MaxPool3D, Flatten, Dense
from keras.layers import Dropout, Input, BatchNormalization
from sklearn.metrics import confusion_matrix, accuracy_score
from plotly.offline import iplot, init_notebook_mode
from keras.losses import categorical_crossentropy
from keras.optimizers import Adadelta
import plotly.graph_objs as go
from matplotlib.pyplot import cm
from keras.models import Model
import numpy as np
import keras
import h5py

 

init_notebook_mode(connected=True)

%matplotlib inline

 

Using TensorFlow backend.

The 3D MNIST data is given in .h5 format; load the full dataset into training and test sets.

with h5py.File('../input/full_dataset_vectors.h5', 'r') as dataset:
    x_train = dataset["X_train"][:]
    x_test = dataset["X_test"][:]
    y_train = dataset["y_train"][:]
    y_test = dataset["y_test"][:]

 

Check the dataset dimensions:

print("x_train shape: ", x_train.shape)
print("y_train shape: ", y_train.shape)
print("x_test shape:  ", x_test.shape)
print("y_test shape:  ", y_test.shape)

x_train shape:  (10000, 4096)
y_train shape:  (10000,)
x_test shape:   (2000, 4096)
y_test shape:   (2000,)

 

This dataset is flat 1D data; the original x, y, z points are shared in a separate data file. Let's draw one digit in 3D space and rotate it to see the effect.
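One way to render such a digit is a 3D scatter of its occupied voxels (a matplotlib sketch; a random volume stands in for the real data here, where you would instead reshape a row of x_train to (16, 16, 16)):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt

# Stand-in volume; with the real data use x_train[i].reshape(16, 16, 16)
rng = np.random.default_rng(0)
volume = (rng.random((16, 16, 16)) > 0.97).astype(float)

x, y, z = np.nonzero(volume)  # coordinates of occupied voxels

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(x, y, z, c=z, cmap="Oranges")  # color voxels by height
ax.set_title("3D digit voxels (stand-in data)")
fig.savefig("digit3d.png")
```

In the notebook, an interactive plotly figure (imported above via `go` and `iplot`) would let you rotate the digit; this static matplotlib version just shows the voxel-scatter idea.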

 

 

Now, let's implement a 3D convolutional neural network on this dataset. To use 2D convolutions, we first convert every image into a 3D shape: width, height, channels; the channels represent slices of the red, green and blue layers. In a similar manner, we convert the input dataset into a 4D shape in order to use 3D convolutions: length, width, height, channels (r/g/b).

## introduce the channel dimension in the input dataset
xtrain = np.ndarray((x_train.shape[0], 4096, 3))
xtest = np.ndarray((x_test.shape[0], 4096, 3))

## iterate over train and test, adding the rgb dimension
def add_rgb_dimention(array):
    scaler_map = cm.ScalarMappable(cmap="Oranges")
    array = scaler_map.to_rgba(array)[:, : -1]
    return array

for i in range(x_train.shape[0]):
    xtrain[i] = add_rgb_dimention(x_train[i])
for i in range(x_test.shape[0]):
    xtest[i] = add_rgb_dimention(x_test[i])

## convert to 1 + 4D space (1st argument represents the number of rows in the dataset)
xtrain = xtrain.reshape(x_train.shape[0], 16, 16, 16, 3)
xtest = xtest.reshape(x_test.shape[0], 16, 16, 16, 3)

## convert the target variable into one-hot
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

y_train.shape

(10000, 10)

Let's create the model architecture. The architecture is described below:

 

Input and Output layers:

 

One input layer with dimensions 16, 16, 16, 3

Output layer with dimension 10

Convolutions:

 

Apply 4 convolutional layers with an increasing number of filters (standard sizes: 8, 16, 32, 64) and a fixed kernel size of (3, 3, 3)

Apply 2 max pooling layers, one after the 2nd convolutional layer and one after the 4th convolutional layer.

MLP architecture:

 

Batch normalization on the convolutional outputs

Two dense layers, each followed by dropout to avoid overfitting

## input layer
input_layer = Input((16, 16, 16, 3))

## convolutional layers
conv_layer1 = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu')(input_layer)
conv_layer2 = Conv3D(filters=16, kernel_size=(3, 3, 3), activation='relu')(conv_layer1)

## add max pooling to obtain the most informative features
pooling_layer1 = MaxPool3D(pool_size=(2, 2, 2))(conv_layer2)

conv_layer3 = Conv3D(filters=32, kernel_size=(3, 3, 3), activation='relu')(pooling_layer1)
conv_layer4 = Conv3D(filters=64, kernel_size=(3, 3, 3), activation='relu')(conv_layer3)
pooling_layer2 = MaxPool3D(pool_size=(2, 2, 2))(conv_layer4)

## perform batch normalization on the convolution outputs before feeding them to the MLP architecture
pooling_layer2 = BatchNormalization()(pooling_layer2)
flatten_layer = Flatten()(pooling_layer2)

## create an MLP architecture with dense layers : 2048 -> 512 -> 10
## add dropouts to avoid overfitting / perform regularization
dense_layer1 = Dense(units=2048, activation='relu')(flatten_layer)
dense_layer1 = Dropout(0.4)(dense_layer1)
dense_layer2 = Dense(units=512, activation='relu')(dense_layer1)
dense_layer2 = Dropout(0.4)(dense_layer2)
output_layer = Dense(units=10, activation='softmax')(dense_layer2)

## define the model with input layer and output layer
model = Model(inputs=input_layer, outputs=output_layer)

Compile the model and start training.

 

model.compile(loss=categorical_crossentropy, optimizer=Adadelta(lr=0.1), metrics=['acc'])

model.fit(x=xtrain, y=y_train, batch_size=128, epochs=50, validation_split=0.2)

Train on 8000 samples, validate on 2000 samples

Epoch 1/50

8000/8000 [==============================] - 8s 1ms/step - loss: 2.1643 - acc: 0.2400 - val_loss: 4.1364 - val_acc: 0.1595

Epoch 2/50

8000/8000 [==============================] - 3s 389us/step - loss: 1.7002 - acc: 0.4255 - val_loss: 2.6611 - val_acc: 0.2830

Epoch 3/50

8000/8000 [==============================] - 3s 389us/step - loss: 1.3900 - acc: 0.5319 - val_loss: 1.7843 - val_acc: 0.4425

Epoch 4/50

8000/8000 [==============================] - 3s 390us/step - loss: 1.2224 - acc: 0.5872 - val_loss: 2.4387 - val_acc: 0.3545

Epoch 5/50

8000/8000 [==============================] - 3s 393us/step - loss: 1.1250 - acc: 0.6149 - val_loss: 1.6011 - val_acc: 0.4820

Epoch 6/50

8000/8000 [==============================] - 3s 386us/step - loss: 1.0584 - acc: 0.6379 - val_loss: 1.9631 - val_acc: 0.3940

Epoch 7/50

8000/8000 [==============================] - 3s 385us/step - loss: 1.0012 - acc: 0.6509 - val_loss: 2.7977 - val_acc: 0.3435

Epoch 8/50

8000/8000 [==============================] - 3s 385us/step - loss: 0.9556 - acc: 0.6706 - val_loss: 1.3028 - val_acc: 0.5515

Epoch 9/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.9101 - acc: 0.6893 - val_loss: 1.3699 - val_acc: 0.5525

Epoch 10/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.8759 - acc: 0.7000 - val_loss: 1.5005 - val_acc: 0.5080

Epoch 11/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.8387 - acc: 0.7126 - val_loss: 1.4767 - val_acc: 0.5215

Epoch 12/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.8098 - acc: 0.7246 - val_loss: 1.6518 - val_acc: 0.5250

Epoch 13/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.7806 - acc: 0.7324 - val_loss: 1.2170 - val_acc: 0.5900

Epoch 14/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.7584 - acc: 0.7442 - val_loss: 1.3042 - val_acc: 0.5840

Epoch 15/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.7239 - acc: 0.7542 - val_loss: 1.0767 - val_acc: 0.6480

Epoch 16/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.6997 - acc: 0.7602 - val_loss: 1.1681 - val_acc: 0.6200

Epoch 17/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.6756 - acc: 0.7702 - val_loss: 1.1535 - val_acc: 0.6295

Epoch 18/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.6450 - acc: 0.7759 - val_loss: 1.3781 - val_acc: 0.5975

Epoch 19/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.6229 - acc: 0.7927 - val_loss: 1.2891 - val_acc: 0.6145

Epoch 20/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.6027 - acc: 0.7996 - val_loss: 1.2839 - val_acc: 0.6060

Epoch 21/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.5727 - acc: 0.8088 - val_loss: 1.7544 - val_acc: 0.5350

Epoch 22/50

8000/8000 [==============================] - 3s 387us/step - loss: 0.5555 - acc: 0.8151 - val_loss: 1.3720 - val_acc: 0.5965

Epoch 23/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.5308 - acc: 0.8246 - val_loss: 1.2582 - val_acc: 0.6400

Epoch 24/50

8000/8000 [==============================] - 3s 394us/step - loss: 0.5077 - acc: 0.8286 - val_loss: 1.3886 - val_acc: 0.6085

Epoch 25/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.4869 - acc: 0.8400 - val_loss: 1.2946 - val_acc: 0.6315

Epoch 26/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.4634 - acc: 0.8512 - val_loss: 1.3686 - val_acc: 0.6220

Epoch 27/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.4487 - acc: 0.8529 - val_loss: 1.8458 - val_acc: 0.5635

Epoch 28/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.4297 - acc: 0.8616 - val_loss: 1.7958 - val_acc: 0.5485

Epoch 29/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.4067 - acc: 0.8669 - val_loss: 1.2551 - val_acc: 0.6475

Epoch 30/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.3832 - acc: 0.8762 - val_loss: 1.4216 - val_acc: 0.6190

Epoch 31/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.3730 - acc: 0.8790 - val_loss: 1.3635 - val_acc: 0.6335

Epoch 32/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.3535 - acc: 0.8840 - val_loss: 1.6396 - val_acc: 0.6040

Epoch 33/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.3298 - acc: 0.8970 - val_loss: 1.5481 - val_acc: 0.6355

Epoch 34/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.3281 - acc: 0.8912 - val_loss: 1.7711 - val_acc: 0.5945

Epoch 35/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.3013 - acc: 0.9031 - val_loss: 1.7350 - val_acc: 0.5885

Epoch 36/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.2862 - acc: 0.9096 - val_loss: 2.2285 - val_acc: 0.5195

Epoch 37/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.2735 - acc: 0.9150 - val_loss: 1.8348 - val_acc: 0.5965

Epoch 38/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.2565 - acc: 0.9201 - val_loss: 1.5115 - val_acc: 0.6410

Epoch 39/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.2498 - acc: 0.9205 - val_loss: 1.6900 - val_acc: 0.6300

Epoch 40/50

8000/8000 [==============================] - 3s 387us/step - loss: 0.2228 - acc: 0.9335 - val_loss: 1.6331 - val_acc: 0.6475

Epoch 41/50

8000/8000 [==============================] - 3s 387us/step - loss: 0.2137 - acc: 0.9320 - val_loss: 1.6562 - val_acc: 0.6305

Epoch 42/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.2053 - acc: 0.9399 - val_loss: 1.7376 - val_acc: 0.6190

Epoch 43/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1885 - acc: 0.9436 - val_loss: 1.8600 - val_acc: 0.6155

Epoch 44/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.1756 - acc: 0.9481 - val_loss: 1.9500 - val_acc: 0.6335

Epoch 45/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1688 - acc: 0.9496 - val_loss: 2.2368 - val_acc: 0.5805

Epoch 46/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.1582 - acc: 0.9540 - val_loss: 2.0403 - val_acc: 0.6175

Epoch 47/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1462 - acc: 0.9603 - val_loss: 1.8678 - val_acc: 0.6270

Epoch 48/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1376 - acc: 0.9624 - val_loss: 2.4479 - val_acc: 0.5640

Epoch 49/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.1304 - acc: 0.9641 - val_loss: 2.5482 - val_acc: 0.5750

Epoch 50/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.1260 - acc: 0.9634 - val_loss: 2.0320 - val_acc: 0.6220

<keras.callbacks.History at 0x7fd2bcb420b8>

During training, we can observe that the validation accuracy fluctuates, which suggests the network can be improved further. Let's predict with the current model and measure its accuracy.

pred = model.predict(xtest)

pred = np.argmax(pred, axis=1)

pred

array([7, 6, 1, ..., 3, 4, 4])
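The `accuracy_score` and `confusion_matrix` imported earlier can score these predictions. Since `y_test` was one-hot encoded above, it must first be converted back to class indices; the pattern is `accuracy_score(np.argmax(y_test, axis=1), pred)`, illustrated here as a self-contained sketch with small stand-in arrays:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Stand-in for y_test (one-hot) and pred (class indices from np.argmax)
y_true_onehot = np.eye(10)[[7, 6, 1, 3]]
y_pred = np.array([7, 6, 1, 4])

# Convert one-hot rows back to class indices before scoring
y_true = np.argmax(y_true_onehot, axis=1)

acc = accuracy_score(y_true, y_pred)
cm_matrix = confusion_matrix(y_true, y_pred)
print(acc)  # 0.75
```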

At this point the model is not very accurate, but it can be improved further through architecture improvements and hyperparameter tuning.

 

 

Reference:

https://www.kaggle.com/shivamb/3d-convolutions-understanding-use-case?scriptVersionId=9626233

