Medical Image Segmentation with U-Net in TensorFlow (Skin Dataset)

Tags: deep learning, computer vision notes, medical image segmentation, U-Net, skin dataset


Experiment Environment

Python, TensorFlow, Keras, Jupyter
NVIDIA V100 GPU

The Skin Dataset

(Figure: sample images from the skin dataset)
Download link: https://pan.baidu.com/s/1cD8WEB3yjWVvNhqhnJqGPg (extraction code: xgf5)



I. The U-Net Model

(Figure: U-Net architecture diagram from the original paper)
As the figure above shows, U-Net is a classic fully convolutional network (it contains no fully connected layers). In the original paper, the network's input is a 572×572 image whose borders have been extended by mirroring. The left half of the network (the red box in the figure) is a series of downsampling stages built from convolutions and max pooling, which the paper calls the contracting path. It consists of 4 blocks, each containing two valid convolutions (every valid convolution trims away border pixels) and one max-pooling downsampling step. After each downsampling step the number of feature maps doubles, which produces the size progression shown in the figure and ends in a 32×32 feature map.
The right half of the network (the green box) is called the expansive path in the paper. It also consists of 4 blocks. Each block begins with an up-convolution that doubles the spatial size of the feature map while halving the number of channels, and the result is concatenated with the feature map from the symmetric block of the contracting path. Because the valid convolutions leave the contracting-path feature maps larger than their expansive-path counterparts, U-Net crops the contracting-path feature maps down to the matching size before concatenating. The convolutions in the expansive path are again valid convolutions, so the final feature maps measure 388×388. A final 1×1 convolution maps each 64-component feature vector to the desired number of classes; since this is a binary segmentation task, the network in the paper has two output feature maps.
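
To make this size bookkeeping concrete, here is a small plain-Python sketch (not part of the original code) that traces the paper's spatial dimensions through the contracting path; each valid 3×3 convolution trims 2 pixels from each of the height and width, and each 2×2 max pooling halves them:

# Trace spatial sizes through the original U-Net's contracting path
# (valid 3x3 convolutions, 2x2 max pooling).
size = 572
for block in range(4):
    size = size - 2 - 2   # two valid 3x3 convs, each trims 2 pixels
    print("block %d: %dx%d after convs" % (block + 1, size, size))
    size = size // 2      # 2x2 max pooling halves the resolution
size = size - 2 - 2       # bottleneck convs: 32 -> 28
print("bottleneck: %dx%d" % (size, size))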


II. Experiment

1. Loading the skin dataset

from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Conv2D, Input, MaxPooling2D, Dropout, concatenate, UpSampling2D
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint
import os
from tensorflow.keras.preprocessing.image import array_to_img

import matplotlib.pyplot as plt

# Load the training images
train = np.load('/shuyc_tmp/models/tensorflow/uNet/skin_dataset/data_train.npy')
# Load the training masks
mask = np.load('/shuyc_tmp/models/tensorflow/uNet/skin_dataset/mask_train.npy')
# Load the test images
test = np.load('/shuyc_tmp/models/tensorflow/uNet/skin_dataset/data_test.npy')
# Normalize images and masks to [0, 1]
train = train.astype('float32')
train = train / 255.
mask = mask / mask.max()

# Reshape into the (N, 256, 256, C) tensors the network expects
test = test.reshape((test.shape[0], 256, 256, 3))
test = test.astype('float32')
test = test / 255.
train = train.reshape(train.shape[0], 256, 256, 3)
mask = mask.reshape(mask.shape[0], 256, 256, 1)

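As a quick sanity check on the arrays loaded above, it can help to display one training image next to its mask. A minimal sketch using the matplotlib import from earlier (idx is an arbitrary example index):

# Show one training image and its corresponding mask side by side
idx = 0
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.imshow(train[idx])
plt.title('image')
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(mask[idx].squeeze(), cmap='gray')
plt.title('mask')
plt.axis('off')
plt.show()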

2. Defining the U-Net model

# Define the U-Net model
def Unet():
    # Contracting path
    # Input: 256x256 three-channel images
    inputs = Input(shape=[256, 256, 3])
    # Block 1: two 3x3 ReLU convolutions followed by one max-pooling (downsampling) step.
    # Note: this implementation uses 'same' padding, so unlike the valid convolutions
    # in the original paper, each convolution preserves the spatial dimensions.
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    # Max pooling
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    # Block 2: two 3x3 ReLU convolutions followed by one max-pooling (downsampling) step
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    # Block 3: two 3x3 ReLU convolutions followed by one max-pooling (downsampling) step
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    # Block 4: two 3x3 ReLU convolutions followed by one max-pooling (downsampling) step
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    # Dropout: randomly discard activations to reduce overfitting
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    # Dropout: randomly discard activations to reduce overfitting
    drop5 = Dropout(0.5)(conv5)

    # Expansive path
    # Upsample by 2, then a 2x2 convolution that halves the channel count
    up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(drop5))
    # Copy and concatenate with the feature map from the symmetric contracting-path block
    merge6 = concatenate([drop4, up6], axis=3)
    # Two 3x3 ReLU convolutions
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    
    # Upsampling
    up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)

    # Upsampling
    up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)

    # Upsampling
    up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    # Final 1x1 convolution producing a single-channel sigmoid probability map
    conv10 = Conv2D(1, 1, activation='sigmoid')(conv9)
    model = Model(inputs=inputs, outputs=conv10)
    
    # Adam optimizer, binary cross-entropy loss, accuracy as the metric
    model.compile(optimizer=Adam(learning_rate=1e-4),
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
    return model
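
Accuracy can be misleading for segmentation when the lesion covers only a small fraction of each image, so a Dice coefficient metric is a common addition. The following is a minimal sketch (not part of the original code; dice_coef is a name introduced here) of a Keras-compatible Dice metric that could be passed to model.compile alongside accuracy:

from tensorflow.keras import backend as K

# Dice coefficient: 2 * intersection / (|A| + |B|), smoothed to avoid division by zero
def dice_coef(y_true, y_pred, smooth=1.0):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

# e.g. metrics=['accuracy', dice_coef] in model.compile above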

3. Training

# Start training
unet = Unet()
# Save the best model so far (lowest training loss) to uNet_Skin.hdf5
model_checkpoint = ModelCheckpoint('./uNet_Skin.hdf5', monitor='loss', verbose=1, save_best_only=True)
# Train with a 20% validation split
history = unet.fit(train, mask, batch_size=4, epochs=30, verbose=1, 
                   validation_split=0.2, shuffle=True, callbacks=[model_checkpoint])

(Figure: training progress output)
Plot how the accuracy changes during training:

# Plot training and validation accuracy per epoch
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['accuracy', 'val_accuracy'], loc='upper left')
plt.show()

(Figure: training and validation accuracy curves)
Plot how the loss changes during training:

# Plot training and validation loss per epoch
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['loss', 'val_loss'], loc='upper left')
plt.show()

(Figure: training and validation loss curves)

4. Prediction
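
Because ModelCheckpoint keeps the best weights in ./uNet_Skin.hdf5, you can optionally restore them before running inference instead of using whatever the final epoch left in memory. A minimal sketch, reusing the unet object from the training step:

# Restore the best checkpoint saved during training (optional)
unet.load_weights('./uNet_Skin.hdf5')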

# Predict masks for the test set
predict_imgs = unet.predict(test, batch_size=1, verbose=1)

# Rescale probabilities to pixel values
predict_imgs = predict_imgs * 255

# Clamp pixel values to [0, 255]
predict_imgs = np.clip(predict_imgs, 0, 255)

# Save as predict.npy
if not os.path.exists('./results'):
    os.makedirs('./results')
np.save('./results/predict.npy', predict_imgs)

# Save each prediction as an image file
def save_img():
    imgs = np.load('./results/predict.npy')
    # Make sure the output directory exists before writing
    if not os.path.exists('./data_out'):
        os.makedirs('./data_out')
    for i in range(imgs.shape[0]):
        img = array_to_img(imgs[i])
        img.save("./data_out/%d.jpg" % (i))

save_img()
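
The saved predictions are grayscale probability maps rescaled to [0, 255]. For a hard segmentation mask you could threshold them at probability 0.5, i.e. pixel value 127 after rescaling. A minimal sketch, assuming predict_imgs from above (the predict_binary.npy file name is just an example):

# Binarize the probability maps into hard 0/255 masks (0.5 threshold)
binary_masks = np.where(predict_imgs > 127, 255, 0).astype('uint8')
np.save('./results/predict_binary.npy', binary_masks)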

5. Visualizing the results

Compare the test images' ground-truth masks with the predicted masks:

import matplotlib.pyplot as plt
import numpy as np
import cv2 as cv

plt.figure(num=1, figsize=(10, 6))

for i in range(3):
    for j in range(5):
        if i == 0:
            # Row 1: original test images (convert BGR to RGB for matplotlib)
            img_data = cv.imread('./SKIN_data/test/images/' + str(j) + '.jpg')
            img_data = cv.cvtColor(img_data, cv.COLOR_BGR2RGB)
            plt.subplot(3, 5, (i * 5) + (j + 1)), plt.imshow(img_data)
            plt.xticks(())
            plt.yticks(())
            plt.title(str(j) + '.jpg')
            if j == 0:
                plt.ylabel('test_img')
        if i == 1:
            # Row 2: ground-truth masks
            ground_truth = cv.imread('./SKIN_data/test/labels/' + str(j) + '.jpg')
            ground_truth = cv.cvtColor(ground_truth, cv.COLOR_BGR2RGB)
            plt.subplot(3, 5, (i * 5) + (j + 1)), plt.imshow(ground_truth)
            plt.xticks(())
            plt.yticks(())
            if j == 0:
                plt.ylabel('ground_truth_mask')
        if i == 2:
            # Row 3: predicted masks
            predict_img = cv.imread('./data_out/' + str(j) + '.jpg')
            predict_img = cv.cvtColor(predict_img, cv.COLOR_BGR2RGB)
            plt.subplot(3, 5, (i * 5) + (j + 1)), plt.imshow(predict_img)
            plt.xticks(())
            plt.yticks(())
            if j == 0:
                plt.ylabel('predicted')

plt.show()

The results are shown below:
(Figure: test images, ground-truth masks, and predicted masks)


III. Summary

Evaluation metrics such as IoU and mIoU could also be used to assess the quality of the segmentation results.
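
As a sketch of such an evaluation (NumPy only; gt_mask and pred_mask are hypothetical binary 0/1 arrays of the same shape):

import numpy as np

# Intersection over Union for a pair of binary masks
def iou(gt_mask, pred_mask):
    intersection = np.logical_and(gt_mask, pred_mask).sum()
    union = np.logical_or(gt_mask, pred_mask).sum()
    return intersection / union if union > 0 else 1.0

# mIoU would average iou() over all test images (and classes)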

Full source code: uNet

