U-Net Source Code for Medical Image Segmentation and Remote Sensing Image Segmentation


The main idea of U-Net comes from FCN: it is a fully convolutional network that classifies an image pixel by pixel, which works well for image segmentation.

The network is named after its U-shaped structure, which the architecture diagram shows clearly: a contracting path of convolution layers on the left, and an expanding path of transposed-convolution (upsampling) layers on the right. The two paths are also connected: as the grey arrows indicate, each upsampling step on the right concatenates the feature maps produced by the corresponding convolution stage on the left, so that more information is preserved.

At the end of the network, the two feature maps are passed through a softmax to obtain probability maps: the network outputs one feature map per class, and the final result is a per-class probability map that gives, for every pixel, the probability that it belongs to each class.
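As a minimal illustration (a sketch, not part of the original code, which as shown later uses a single sigmoid channel instead), a per-pixel softmax over a stack of class score maps looks like this in NumPy:

import numpy as np

# Illustrative sketch: turn per-class score maps of shape (H, W, C) into
# per-pixel class probabilities with a numerically stable softmax.
scores = np.random.randn(256, 256, 2)                      # e.g. two feature maps from the last layer
exp = np.exp(scores - scores.max(axis=-1, keepdims=True))  # subtract the max for stability
probs = exp / exp.sum(axis=-1, keepdims=True)              # probs[y, x, c] = P(pixel (y, x) is class c)
assert np.allclose(probs.sum(axis=-1), 1.0)                # probabilities sum to 1 at every pixel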

The U-Net architecture was proposed mainly for medical image segmentation. Compared with FCN it has the following advantages: 1. it captures multiple scales; 2. it can cope with very large input images.

U-Net's convolution (encoder) path goes from high resolution (shallow features) to low resolution (deep features).

U-Net's defining trait is that the concatenations performed during the deconvolution (decoder) path combine shallow and deep features. For medical images, the deep features provide localization while the shallow features enable precise segmentation, which is why U-Net appears in so many segmentation tasks. A minimal sketch of one such skip connection follows.

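The sketch below (illustrative toy shapes; the real model appears in model.py later) shows the mechanism: the deep feature map is upsampled and then concatenated with the shallow feature map from the matching encoder stage.

from keras.layers import Input, Conv2D, UpSampling2D, concatenate

# Illustrative sketch of a single U-Net skip connection with assumed toy shapes.
shallow = Input((64, 64, 128))  # encoder output: shallow, high resolution
deep = Input((32, 32, 256))     # decoder input: deep, low resolution
up = Conv2D(128, 2, activation='relu', padding='same')(UpSampling2D((2, 2))(deep))
merged = concatenate([shallow, up], axis=3)  # along the channel axis: 128 + 128 = 256 channels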
A walkthrough of part of its Keras implementation follows. The training script:

from model import *
from data import *  # import everything from these two files

# os.environ["CUDA_VISIBLE_DEVICES"] = "0"


data_gen_args = dict(rotation_range=0.2,
                    width_shift_range=0.05,
                    height_shift_range=0.05,
                    shear_range=0.05,
                    zoom_range=0.05,
                    horizontal_flip=True,
                    fill_mode='nearest')  # dictionary of augmentation transforms
myGene = trainGenerator(2,'data/membrane/train','image','label',data_gen_args,save_to_dir = None)
# a generator that yields augmented data indefinitely, two samples per batch

model = unet()
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5', monitor='loss',verbose=1, save_best_only=True)
# callback: the first argument is the save path, monitor='loss' is the quantity to
# track (minimized), and save_best_only keeps only the model with the best monitored loss

model.fit_generator(myGene,steps_per_epoch=300,epochs=1,callbacks=[model_checkpoint])
# steps_per_epoch is the number of batches per epoch, i.e. the number of
# training samples divided by batch_size
# the line above trains batch by batch from the generator; samples and labels come from myGene
testGene = testGenerator("data/membrane/test")
results = model.predict_generator(testGene,30,verbose=1)
# 30 is steps: the total number of batches drawn from the generator before stopping;
# if left unspecified for a Sequence, len(generator) is used
# the return value is a NumPy array of predictions
saveResult("data/membrane/test",results)  # save the results
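A quick sanity check that is not in the original script: pull one batch from the generator and confirm its shape matches the model input before training.

# Assumed check: one batch from myGene should match the model's input layout.
img_batch, mask_batch = next(myGene)
print(img_batch.shape, mask_batch.shape)  # expected: (2, 256, 256, 1) (2, 256, 256, 1)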
data.py:

from __future__ import print_function
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
import os
import glob
import skimage.io as io
import skimage.transform as trans

Sky = [128,128,128]
Building = [128,0,0]
Pole = [192,192,128]
Road = [128,64,128]
Pavement = [60,40,222]
Tree = [128,128,0]
SignSymbol = [192,128,128]
Fence = [64,64,128]
Car = [64,0,128]
Pedestrian = [64,64,0]
Bicyclist = [0,128,192]
Unlabelled = [0,0,0]

COLOR_DICT = np.array([Sky, Building, Pole, Road, Pavement,
                          Tree, SignSymbol, Fence, Car, Pedestrian, Bicyclist, Unlabelled])

def adjustData(img,mask,flag_multi_class,num_class):
    if(flag_multi_class):  # this script is not the multi-class case, so this branch is not taken
        img = img / 255
        mask = mask[:,:,:,0] if(len(mask.shape) == 4) else mask[:,:,0]
        # one-line if/else, true branch first: a batch from the generator has shape
        # [batch_size, height, width, 1], so mask[:,:,:,0] drops the channel axis;
        # the else branch handles a single image of shape [height, width, 1]
        new_mask = np.zeros(mask.shape + (num_class,))
        # np.zeros takes a shape tuple; this adds a depth axis of num_class layers,
        # so the mask becomes one-hot along that axis

        for i in range(num_class):
            # for one pixel in the image, find the class in mask and convert it into one-hot vector
            # index = np.where(mask == i)
            # index_mask = (index[0],index[1],index[2],np.zeros(len(index[0]),dtype = np.int64) + i) if (len(mask.shape) == 4) else (index[0],index[1],np.zeros(len(index[0]),dtype = np.int64) + i)
            # new_mask[index_mask] = 1
            new_mask[mask == i,i] = 1  # each class in the flat mask gets its own layer
        new_mask = np.reshape(new_mask,(new_mask.shape[0],new_mask.shape[1]*new_mask.shape[2],new_mask.shape[3])) if flag_multi_class else np.reshape(new_mask,(new_mask.shape[0]*new_mask.shape[1],new_mask.shape[2]))
        mask = new_mask
    elif(np.max(img) > 1):
        img = img / 255
        mask = mask /255
        mask[mask > 0.5] = 1
        mask[mask <= 0.5] = 0
    return (img,mask)
# this function normalizes image and mask pixel values to [0,1] and binarizes the
# mask; in the multi-class case it also converts the mask to one-hot layers

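To see what the multi-class branch of adjustData does, here is a small standalone sketch (illustrative, using a toy 2x2 mask) of the same one-hot expansion:

import numpy as np

# Toy example of the one-hot expansion (not from the original file).
num_class = 3
mask = np.array([[0, 1],
                 [2, 1]])                       # each pixel holds a class index
new_mask = np.zeros(mask.shape + (num_class,))  # add a depth axis of num_class layers
for i in range(num_class):
    new_mask[mask == i, i] = 1                  # layer i is 1 exactly where class i sits
print(new_mask[0, 1])                           # pixel (0, 1) is class 1 -> [0. 1. 0.]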
def trainGenerator(batch_size,train_path,image_folder,mask_folder,aug_dict,image_color_mode = "grayscale",
                    mask_color_mode = "grayscale",image_save_prefix  = "image",mask_save_prefix  = "mask",
                    flag_multi_class = False,num_class = 2,save_to_dir = None,target_size = (256,256),seed = 1):
    '''
    can generate image and mask at the same time
    use the same seed for image_datagen and mask_datagen to ensure the transformation for image and mask is the same
    if you want to visualize the results of generator, set save_to_dir = "your path"
    '''
    image_datagen = ImageDataGenerator(**aug_dict)
    mask_datagen = ImageDataGenerator(**aug_dict)
    image_generator = image_datagen.flow_from_directory(  # https://blog.csdn.net/nima1994/article/details/80626239
        train_path,  # path to the training data folder
        classes = [image_folder],  # the class subfolder to draw (and augment) images from
        class_mode = None,  # do not return labels
        color_mode = image_color_mode,  # grayscale, single-channel mode
        target_size = target_size,  # size the images are resized to
        batch_size = batch_size,  # number of images produced (transformed) per batch
        save_to_dir = save_to_dir,  # where to save the generated images
        save_prefix  = image_save_prefix,  # filename prefix, only used when save_to_dir is given
        seed = seed)
    mask_generator = mask_datagen.flow_from_directory(
        train_path,
        classes = [mask_folder],
        class_mode = None,
        color_mode = mask_color_mode,
        target_size = target_size,
        batch_size = batch_size,
        save_to_dir = save_to_dir,
        save_prefix  = mask_save_prefix,
        seed = seed)
    train_generator = zip(image_generator, mask_generator)  # combine into a single generator
    for (img,mask) in train_generator:
        # with batch_size 2, img is an array of two grayscale images, shape [2,256,256,1]
        img,mask = adjustData(img,mask,flag_multi_class,num_class)  # img keeps shape [2,256,256,1]
        yield (img,mask)
        # yields one batch of images and masks per step; for an explanation of
        # yield see https://blog.csdn.net/mieleizhi0522/article/details/82142856
# this function builds a data-augmenting generator that keeps producing
# image/mask batches for later use

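Why the same seed keeps images and masks aligned can be demonstrated with a small standalone sketch (illustrative, using random arrays rather than the membrane data): two flows configured identically and given the same seed apply identical random transforms batch by batch.

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Illustrative check (not from the original file): same settings + same seed
# means both iterators draw identical random transforms for each batch.
x = np.random.rand(4, 64, 64, 1)
gen_a = ImageDataGenerator(rotation_range=20).flow(x, batch_size=2, seed=1)
gen_b = ImageDataGenerator(rotation_range=20).flow(x, batch_size=2, seed=1)
assert np.allclose(next(gen_a), next(gen_b))    # the two batches match exactly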
def testGenerator(test_path,num_image = 30,target_size = (256,256),flag_multi_class = False,as_gray = True):
    for i in range(num_image):
        img = io.imread(os.path.join(test_path,"%d.png"%i),as_gray = as_gray)
        img = img / 255
        img = trans.resize(img,target_size)
        img = np.reshape(img,img.shape+(1,)) if (not flag_multi_class) else img
        img = np.reshape(img,(1,)+img.shape)
        # add batch and channel axes so each test image has shape (1,256,256,1),
        # matching the (batch, height, width, channels) layout used in training
        yield img

# this function normalizes the test images so that their size and dimensions
# match the training input

def geneTrainNpy(image_path,mask_path,flag_multi_class = False,num_class = 2,image_prefix = "image",mask_prefix = "mask",image_as_gray = True,mask_as_gray = True):
    image_name_arr = glob.glob(os.path.join(image_path,"%s*.png"%image_prefix))
    # file search: collects the files under the path whose names match the pattern
    # https://blog.csdn.net/u010472607/article/details/76857493/
    image_arr = []
    mask_arr = []
    for index,item in enumerate(image_name_arr):  # enumerate yields (0,item0),(1,item1),(2,item2),...
        img = io.imread(item,as_gray = image_as_gray)
        img = np.reshape(img,img.shape + (1,)) if image_as_gray else img
        mask = io.imread(item.replace(image_path,mask_path).replace(image_prefix,mask_prefix),as_gray = mask_as_gray)
        # look up the matching mask image under mask_path by swapping path and prefix
        mask = np.reshape(mask,mask.shape + (1,)) if mask_as_gray else mask
        img,mask = adjustData(img,mask,flag_multi_class,num_class)
        image_arr.append(img)
        mask_arr.append(mask)
    image_arr = np.array(image_arr)
    mask_arr = np.array(mask_arr)  # convert to arrays
    return image_arr,mask_arr
# this function searches the image and mask folders for matching files, adds a
# channel axis, and returns them as arrays; it is used to read data straight
# from disk when no augmentation is applied

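Hypothetical usage (the folder layout is an assumption): pointing geneTrainNpy at a directory of augmented pairs saved via save_to_dir, where the filenames carry the default "image"/"mask" prefixes.

# Hypothetical call, assuming pairs saved with the default "image"/"mask" prefixes.
imgs, masks = geneTrainNpy("data/membrane/train/aug/", "data/membrane/train/aug/")
print(imgs.shape, masks.shape)  # e.g. (N, 256, 256, 1) each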
def labelVisualize(num_class,color_dict,img):
    img = img[:,:,0] if len(img.shape) == 3 else img
    img_out = np.zeros(img.shape + (3,))
    # move to RGB space, since colors other than grayscale need three channels
    for i in range(num_class):
        img_out[img == i,:] = color_dict[i]
        # paint each class a different color: color_dict[i] is the color for class i,
        # and img_out[img == i,:] selects the pixels of img_out where img equals class i
    return img_out / 255

# given the network's output, this function colors each class differently; it
# only matters in the multi-class case and does nothing useful for two classes

def saveResult(save_path,npyfile,flag_multi_class = False,num_class = 2):
    for i,item in enumerate(npyfile):
        img = labelVisualize(num_class,COLOR_DICT,item) if flag_multi_class else item[:,:,0]
        # multi-class results are colored; two-class results stay black and white
        io.imsave(os.path.join(save_path,"%d_predict.png"%i),img)
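Before moving on to model.py, here is a toy check of labelVisualize (illustrative input, not from the original file): a 2x2 "prediction" holding class indices, each of which maps to a row of COLOR_DICT.

demo = np.array([[0, 1],
                 [2, 3]])                # class indices per pixel
colored = labelVisualize(4, COLOR_DICT, demo)
print(colored.shape)                     # (2, 2, 3), RGB values scaled to [0, 1]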
model.py:

import numpy as np
import os
import skimage.io as io
import skimage.transform as trans
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras


def unet(pretrained_weights = None,input_size = (256,256,1)):
    inputs = Input(input_size)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)

    up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
    # upsampling followed by a convolution, which approximates a transposed convolution
    merge6 = concatenate([drop4,up6],axis=3)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

    up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
    merge7 = concatenate([conv3,up7],axis = 3)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

    up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
    merge8 = concatenate([conv2,up8],axis = 3)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

    up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
    merge9 = concatenate([conv1,up9],axis = 3)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)
    # the sigmoid is not redundant: Keras's 'binary_crossentropy' loss expects
    # probabilities in [0,1] and does not apply a sigmoid itself

    model = Model(inputs = inputs, outputs = conv10)

    model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])
    # a model must be compiled before it can be trained:
    # https://keras-cn.readthedocs.io/en/latest/getting_started/sequential_model/
    # binary (sigmoid) cross-entropy loss; accuracy is reported as a metric, which
    # is monitored only and does not influence the optimization
    #model.summary()

    if(pretrained_weights):
        model.load_weights(pretrained_weights)

    return model

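Since the comment above notes that UpSampling2D followed by a 2x2 convolution approximates a transposed convolution, the decoder steps could alternatively use Conv2DTranspose directly. A sketch of that variant (assumed toy shape; not how the original file does it):

from keras.layers import Input, Conv2DTranspose

# Alternative decoder step (sketch): a learned 2x2 transposed convolution with
# stride 2 doubles the spatial size in a single layer, replacing the
# UpSampling2D + Conv2D pair used above.
x = Input((32, 32, 1024))
up = Conv2DTranspose(512, 2, strides=(2, 2), padding='same', activation='relu')(x)
# up has shape (None, 64, 64, 512), ready to concatenate with the encoder feature map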

 

