The simpler something is, the closer it is to the essence.
References
U-Net: Convolutional Networks for Biomedical Image Segmentation
Abstract & Introduction
The paper uses a few key terms:
- contracting path: the downsampling (encoder) path;
- expansive path: the upsampling (decoder) path;
- precise localization: more accurate positional information;
- overlap-tile: mirror-extrapolating the image border so large images can be tiled;
- random elastic deformations: random elastic deformation of training samples;
- invariance: robustness to shifts, rotations, and (elastic) deformations;
- touching cells: cells that touch or lie very close to each other;
- seamless tiling: output tiles that stitch together without seams;
With these terms defined, let's look at the paper itself. Like its architecture, the paper is simple and clear, and it makes its points well.
First, the authors mainly compare their network against a sliding-window approach from the paper below, which they criticize for the following problems:
Deep neural networks segment neuronal membranes in electron microscopy images (NIPS 2012)
- It is very slow and computationally redundant (the usual sliding-window problems);
- There is a trade-off between localization accuracy and the use of context: larger patches need more max-pooling layers, which capture more context but lose more positional information.
The idea of feeding features from multiple layers into the classifier was inspired by the following papers:
- Hypercolumns for object segmentation and fine-grained localization (2014)
- Image segmentation with cascaded hierarchical models and logistic disjunctive normal networks (2013)
Both papers point out that feeding the features from multiple layers into the classifier makes good localization and the use of context possible at the same time.
What sets U-Net apart from other networks is that the upsampling path also has a large number of feature channels, which lets the network propagate context information to higher-resolution layers.
Since the network uses 3x3 unpadded convolutions, the feature maps shrink at every layer. To allow seamless tiling of arbitrarily large images, the missing border context of each input tile is extrapolated by mirroring the image, the overlap-tile strategy, which the paper illustrates with a dedicated figure.
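As a rough illustration (not the authors' code), the mirror extrapolation at the image border can be done with a reflect pad. A minimal PyTorch sketch, where mirror_pad_tile is a hypothetical helper and the sizes 388/572/92 match the tile sizes reported in the paper:

import torch
import torch.nn.functional as F

def mirror_pad_tile(tile, margin):
    # tile: (N, C, H, W); margin: number of border pixels the unpadded convs will "eat"
    return F.pad(tile, (margin, margin, margin, margin), mode='reflect')

x = torch.randn(1, 1, 388, 388)    # the region we actually want segmented
x_padded = mirror_pad_tile(x, 92)  # 572x572 network input, as in the paper
print(x_padded.shape)              # torch.Size([1, 1, 572, 572])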
Architecture
Network structure:
Because the 3x3 convolutions are unpadded, the feature maps keep shrinking, so when concatenating encoder features onto the decoder path, the encoder feature maps have to be cropped so that the spatial sizes match.
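In the original (unpadded) formulation this is just a center crop of the encoder feature map before concatenation. A minimal sketch with a hypothetical center_crop helper (the repository shown later sidesteps this by using padded convolutions and padding the decoder map instead):

def center_crop(enc_feat, target_h, target_w):
    # enc_feat: (N, C, H, W) encoder feature map, larger than the decoder map
    _, _, h, w = enc_feat.shape
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return enc_feat[:, :, top:top + target_h, left:left + target_w]

Before concatenation one would call center_crop(x_enc, x_dec.size(2), x_dec.size(3)) and then torch.cat along the channel dimension.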
ReLU activations are used throughout.
Weights are initialized with Kaiming He's method:
Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification
Concretely, weights are drawn from a Gaussian distribution with standard deviation sqrt(2/N), where N is the number of incoming nodes of one neuron (for example, a 3x3 convolution with 64 input channels gives N = 9 x 64 = 576).
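In PyTorch this initialization is available directly. A minimal sketch (assuming the standard torch.nn.init API; kaiming_normal_ in fan-in mode with ReLU implements exactly the sqrt(2/N) rule):

import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')  # std = sqrt(2 / fan_in)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# net.apply(init_weights)  # apply recursively to every submodule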
Training
During training the authors favour large input tiles over a large batch, so they simply set the batch size to 1. To compensate, they use SGD with a very high momentum (0.99), so that many previously seen samples still influence the current update direction (understandable, given how small the batch is).
The loss is a pixel-wise soft-max over the final feature map combined with cross-entropy.
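In PyTorch this amounts to nn.CrossEntropyLoss applied to the per-pixel logits. A minimal sketch (the shapes are illustrative, not taken from the repo):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()            # soft-max + cross-entropy per pixel
logits = torch.randn(1, 2, 388, 388)         # (N, C, H, W) raw scores, 2 classes
labels = torch.randint(0, 2, (1, 388, 388))  # (N, H, W) integer ground truth
loss = criterion(logits, labels)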
Data augmentation
Random elastic deformation and the weight map:
Random elastic deformation works as follows: initialize random displacement vectors on a coarse 3x3 grid, drawing the displacements from a Gaussian distribution with a standard deviation of 10 pixels, then use bicubic interpolation to compute the displacement of every pixel.
The goal of the random elastic deformation is invariance: the network learns to be robust to such deformations, which the authors found to be the key augmentation when only a few annotated images are available.
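A minimal sketch of such a deformation with NumPy/SciPy (elastic_deform is a hypothetical helper; the grid size and standard deviation are the values from the paper, cubic-spline upsampling stands in for the paper's bicubic interpolation, and the same displacement field must of course also be applied to the label image):

import numpy as np
from scipy.ndimage import map_coordinates, zoom

def elastic_deform(image, grid=3, sigma=10, order=3):
    h, w = image.shape
    # random displacements on a coarse 3x3 grid, std = 10 pixels, upsampled to full size
    dx = zoom(np.random.normal(0, sigma, (grid, grid)), (h / grid, w / grid), order=3)
    dy = zoom(np.random.normal(0, sigma, (grid, grid)), (h / grid, w / grid), order=3)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.vstack([(ys + dy).ravel(), (xs + dx).ravel()])
    return map_coordinates(image, coords, order=order, mode='reflect').reshape(h, w)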
The weight map, in turn, forces the network to learn the narrow background gaps between touching cells: background pixels between touching cells get a very large weight in the loss, as illustrated in the paper.
The weight map is computed as follows:
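From the paper, the weight of each pixel is

$$w(\mathbf{x}) = w_c(\mathbf{x}) + w_0 \cdot \exp\left(-\frac{\left(d_1(\mathbf{x}) + d_2(\mathbf{x})\right)^2}{2\sigma^2}\right)$$

where $w_c$ is the class-frequency balancing weight map, $d_1$ is the distance to the border of the nearest cell, $d_2$ is the distance to the border of the second nearest cell, and the paper uses $w_0 = 10$ and $\sigma \approx 5$ pixels. In practice this per-pixel weighting can be applied by computing the unreduced cross-entropy (reduction='none') and multiplying by the weight map before averaging.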
Code
Finally, let's look at an implementation: https://github.com/milesial/Pytorch-UNet
The overall model:
import torch
import torch.nn as nn
import torch.nn.functional as F


class UNet(nn.Module):
    def __init__(self, n_channels, n_classes):
        super(UNet, self).__init__()
        self.inc = inconv(n_channels, 64)
        self.down1 = down(64, 128)
        self.down2 = down(128, 256)
        self.down3 = down(256, 512)
        self.down4 = down(512, 512)
        self.up1 = up(1024, 256)
        self.up2 = up(512, 128)
        self.up3 = up(256, 64)
        self.up4 = up(128, 64)
        self.outc = outconv(64, n_classes)

    def forward(self, x):
        # contracting path
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        # expansive path with skip connections
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        x = self.outc(x)
        return torch.sigmoid(x)  # F.sigmoid is deprecated; pairs with the BCELoss below
The building blocks:
class double_conv(nn.Module):
    '''(conv => BN => ReLU) * 2'''
    def __init__(self, in_ch, out_ch):
        super(double_conv, self).__init__()
        # note: padding=1 keeps the spatial size, unlike the paper's unpadded convolutions
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        x = self.conv(x)
        return x


class inconv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(inconv, self).__init__()
        self.conv = double_conv(in_ch, out_ch)

    def forward(self, x):
        x = self.conv(x)
        return x
class down(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(down, self).__init__()
        self.mpconv = nn.Sequential(
            nn.MaxPool2d(2),
            double_conv(in_ch, out_ch)
        )

    def forward(self, x):
        x = self.mpconv(x)
        return x
class up(nn.Module):
    def __init__(self, in_ch, out_ch, bilinear=True):
        super(up, self).__init__()
        # it would be a nice idea if the upsampling could be learned too,
        # but my machine does not have enough memory to handle all those weights
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        else:
            self.up = nn.ConvTranspose2d(in_ch // 2, in_ch // 2, 2, stride=2)
        self.conv = double_conv(in_ch, out_ch)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        # input is CHW: pad the upsampled decoder map so it matches the encoder map,
        # then concatenate along the channel dimension (the skip connection)
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]
        x1 = F.pad(x1, (diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2))
        x = torch.cat([x2, x1], dim=1)
        x = self.conv(x)
        return x
class outconv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(outconv, self).__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        x = self.conv(x)
        return x
Training:
from torch import optim

# the repo's training script defaults to momentum 0.9; the paper itself uses 0.99
optimizer = optim.SGD(net.parameters(),
                      lr=lr,
                      momentum=0.9,
                      weight_decay=0.0005)
criterion = nn.BCELoss()  # works with the sigmoid output of UNet.forward (binary masks)
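For completeness, a minimal sketch of one training step with the optimizer and criterion above (assuming a hypothetical train_loader that yields image batches and binary masks shaped like the network output):

for imgs, true_masks in train_loader:
    optimizer.zero_grad()
    masks_pred = net(imgs)                            # sigmoid output from UNet.forward
    loss = criterion(masks_pred, true_masks.float())  # pixel-wise binary cross-entropy
    loss.backward()
    optimizer.step()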