DETR Model Architecture: Source Code Walkthrough

End-to-End Object Detection with Transformers (DETR)

Paper: https://arxiv.org/abs/2005.12872

Source code: https://github.com/facebookresearch/detr

Reference: https://www.cnblogs.com/Glucklichste/p/14057005.html

Overall model structure

[Figure: model structure from the paper]

Main components

  • backbone (CNN, ResNet)
    • CNN feature extractor
    • positional encoding
  • transformer
    • encoder
    • decoder
  • prediction head

Model construction

models/detr.py
#  Build the two main components:
#  backbone = build_backbone(args)
#  transformer = build_transformer(args)
#  then connect them in the DETR module

def build(args):

    num_classes = 20 if args.dataset_file != 'coco' else 91
    if args.dataset_file == "coco_panoptic":
        # for panoptic, we just add a num_classes that is large enough to hold
        # max_obj_id + 1, but the exact value doesn't really matter
        num_classes = 250
    device = torch.device(args.device)
	
    # two main parts: build the backbone and build the transformer
    backbone = build_backbone(args)
    transformer = build_transformer(args)

    model = DETR(
        backbone,
        transformer,
        num_classes=num_classes,
        num_queries=args.num_queries,
        aux_loss=args.aux_loss,
    )
    if args.masks:
        model = DETRsegm(model, freeze_detr=(args.frozen_weights is not None))
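
For orientation, the DETR module that build() instantiates ties these parts together. A rough, abridged sketch of the class (the real models/detr.py also handles auxiliary losses and the segmentation wrapper):

```python
from torch import nn

class DETR(nn.Module):
    """Abridged sketch of the DETR module in models/detr.py (aux-loss outputs omitted)."""
    def __init__(self, backbone, transformer, num_classes, num_queries, aux_loss=False):
        super().__init__()
        self.num_queries = num_queries
        self.transformer = transformer
        hidden_dim = transformer.d_model                            # 256 with the defaults in main.py
        self.class_embed = nn.Linear(hidden_dim, num_classes + 1)   # +1 for the "no object" class
        self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)         # box head (MLP helper, see the FFN section)
        self.query_embed = nn.Embedding(num_queries, hidden_dim)    # learnable object queries
        self.input_proj = nn.Conv2d(backbone.num_channels, hidden_dim, kernel_size=1)
        self.backbone = backbone
        self.aux_loss = aux_loss

    def forward(self, samples):
        # samples is a NestedTensor: a padded image batch plus its padding mask
        features, pos = self.backbone(samples)
        src, mask = features[-1].decompose()
        hs = self.transformer(self.input_proj(src), mask,
                              self.query_embed.weight, pos[-1])[0]
        outputs_class = self.class_embed(hs)             # (layers, N, 100, num_classes+1)
        outputs_coord = self.bbox_embed(hs).sigmoid()    # (layers, N, 100, 4)
        return {'pred_logits': outputs_class[-1], 'pred_boxes': outputs_coord[-1]}
```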

backbone

CNN backbone feature extraction
Backbone input and output

  • input shape=(N,3,W,H)
  • output shape=(N,2048,W/32,H/32) (C=2048 for ResNet-50; C=512 for ResNet-18/34)

Assuming the input is (N,C,H,W), ResNet-50 outputs (N,2048,H//32,W//32). 2048 channels is fairly large, so to save computation a 1x1 convolution first reduces the dimension to 256 (hidden_dim=256, set in main.py). The feature map is then flattened into a sequence and fed to the transformer, with shape=(H*W,N,256), where H=H/32 and W=W/32.
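
A minimal sketch of this projection and flattening (the batch size and feature size below are illustrative):

```python
import torch
from torch import nn

feat = torch.randn(2, 2048, 25, 25)               # ResNet-50 C5 feature for an 800x800 input (800/32 = 25)
input_proj = nn.Conv2d(2048, 256, kernel_size=1)  # 1x1 conv: 2048 channels -> 256
src = input_proj(feat)                            # (2, 256, 25, 25)
seq = src.flatten(2).permute(2, 0, 1)             # (H*W, N, 256)
print(seq.shape)                                  # torch.Size([625, 2, 256])
```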

class Backbone(BackboneBase):
    """ResNet backbone with frozen BatchNorm."""
    def __init__(self, name: str,
                 train_backbone: bool,
                 return_interm_layers: bool,
                 dilation: bool):
        backbone = getattr(torchvision.models, name)(
            replace_stride_with_dilation=[False, False, dilation],
            pretrained=is_main_process(), norm_layer=FrozenBatchNorm2d)

        # the number of output channels depends on which ResNet is used
        num_channels = 512 if name in ('resnet18', 'resnet34') else 2048
        super().__init__(backbone, train_backbone, num_channels, return_interm_layers)



In the DETR class, src is the output of the backbone, shape=(N, num_channels, W/32, H/32) (num_channels=2048 for ResNet-50, 512 for ResNet-18/34):

```python
# self.input_proj(src) projects (N, num_channels, W/32, H/32) -> (N, 256, W/32, H/32)
hs = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos[-1])[0]
```

Positional encoding

The position encoding marks positions along both the x and y directions. It still uses the sin/cos scheme; a language sequence, by contrast, only needs position information along a single direction.

Inputs and outputs of PositionEmbeddingSine.forward:

  • input: a NestedTensor (tensor_list is of type NestedTensor, which automatically carries a padding mask alongside the tensors)
  • x.tensors.shape=(N, C, W/32, H/32), x.mask.shape=(N, W/32, H/32)
  • output: pos.shape=(N, 256, W/32, H/32)

class PositionEmbeddingSine(nn.Module):
    """
    This is a more standard version of the position embedding, very similar to the one
    used by the Attention is all you need paper, generalized to work on images.
    """
    def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
        super().__init__()
        self.num_pos_feats = num_pos_feats
        self.temperature = temperature
        self.normalize = normalize
        if scale is not None and normalize is False:
            raise ValueError("normalize should be True if scale is passed")
        if scale is None:
            scale = 2 * math.pi
        self.scale = scale

    def forward(self, tensor_list: NestedTensor):

        x = tensor_list.tensors
        mask = tensor_list.mask
        # x.tensors.shape=(N, C, W/32, H/32), x.mask.shape=(N, W/32, H/32)

        assert mask is not None
        not_mask = ~mask
        y_embed = not_mask.cumsum(1, dtype=torch.float32)
        x_embed = not_mask.cumsum(2, dtype=torch.float32)
        if self.normalize:
            eps = 1e-6
            y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
            x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
        
        # the 256-d embedding uses num_pos_feats=128 channels per direction, built from alternating sin/cos
        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)

        pos_x = x_embed[:, :, :, None] / dim_t
        pos_y = y_embed[:, :, :, None] / dim_t
        pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)
        pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)
        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)

        # pos.shape=(N, 256, W/32, H/32): the first 128 channels encode y, the last 128 encode x

        return pos
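
A quick shape check of the position embedding (a sketch: the real call site passes a NestedTensor from util.misc, faked here with a tiny stand-in object that only has .tensors and .mask):

```python
import torch
from types import SimpleNamespace
from models.position_encoding import PositionEmbeddingSine  # module path in the DETR repo

feat = torch.randn(2, 2048, 25, 25)                     # backbone feature map
mask = torch.zeros(2, 25, 25, dtype=torch.bool)         # False = real pixel, True = padding
tensor_list = SimpleNamespace(tensors=feat, mask=mask)  # stand-in for util.misc.NestedTensor

pos_embed = PositionEmbeddingSine(num_pos_feats=128, normalize=True)  # 128 per direction -> 256 total
pos = pos_embed(tensor_list)
print(pos.shape)   # torch.Size([2, 256, 25, 25]); first 128 channels encode y, last 128 encode x
```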


transformer

[Figure: transformer structure]

Overall transformer construction

models/transformer.py
The Transformer module contains the encoder and the decoder.

class Transformer(nn.Module):

    def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
                 num_decoder_layers=6, dim_feedforward=2048, dropout=0.1,
                 activation="relu", normalize_before=False,
                 return_intermediate_dec=False):
        super().__init__()

        # encoder
        encoder_layer = TransformerEncoderLayer(d_model, nhead, dim_feedforward,
                                                dropout, activation, normalize_before)
        encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
        self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)

        # decoder
        decoder_layer = TransformerDecoderLayer(d_model, nhead, dim_feedforward,
                                                dropout, activation, normalize_before)
        decoder_norm = nn.LayerNorm(d_model)
        self.decoder = TransformerDecoder(decoder_layer, num_decoder_layers, decoder_norm,
                                          return_intermediate=return_intermediate_dec)

        self._reset_parameters()

        self.d_model = d_model
        self.nhead = nhead

    def _reset_parameters(self):
        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def forward(self, src, mask, query_embed, pos_embed):
        # flatten NxCxHxW to HWxNxC
        # inputs: {src, mask, query_embed, pos} are passed in by DETR.forward from the backbone
        
        bs, c, h, w = src.shape
		
        # first reshape the data
        # note: the input here is the projected backbone feature (N,256,W/32,H/32); the spatial size
        # does not change inside the transformer, so for brevity W,H below stand for W/32,H/32
        # src=(N,256,W/32,H/32)-> (WH,N,256)
        # pos_embed=(N,256,W,H)-> (WH,N,256)
        # query_embed=(100,256) -> (100,N,256)
        # mask=(N,W,H) -> (N,WH)
        src = src.flatten(2).permute(2, 0, 1)
        pos_embed = pos_embed.flatten(2).permute(2, 0, 1)
        query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1)
        mask = mask.flatten(1)
		
        # the first decoder layer's tgt is initialized to zeros; later layers use the previous layer's output
        tgt = torch.zeros_like(query_embed)

        # encoder  src=(WH,N,256) mask= (N,WH)  pos_embed= (WH,N,256)
        # output: (WH,N,256)
        memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed)
       

        # decoder tgt=(100,N,256) memory=(WH,N,256),mask=(N,WH) 
        # pos_embed=(WH,N,256) query_embed=(100,N,256)
        # output: hs=(decoder_layers, 100, N, 256)
        hs = self.decoder(tgt, memory, memory_key_padding_mask=mask,
                          pos=pos_embed, query_pos=query_embed)

        
        # returns (decoder_layers, N, 100, 256) and (N, 256, H, W)
        return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w)
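
Putting the shapes together, a small end-to-end check of the transformer (a sketch with random inputs; d_model=256 matches the hidden_dim set in main.py):

```python
import torch
from models.transformer import Transformer  # module path in the DETR repo

model = Transformer(d_model=256, nhead=8, return_intermediate_dec=True)
N, C, H, W = 2, 256, 25, 25
src = torch.randn(N, C, H, W)                  # projected backbone feature
mask = torch.zeros(N, H, W, dtype=torch.bool)  # padding mask (False = keep)
query_embed = torch.randn(100, 256)            # stands in for query_embed.weight (100 object queries)
pos_embed = torch.randn(N, 256, H, W)          # stands in for the sine position encoding

hs, memory = model(src, mask, query_embed, pos_embed)
print(hs.shape)       # torch.Size([6, 2, 100, 256]) -> (decoder_layers, N, 100, 256)
print(memory.shape)   # torch.Size([2, 256, 25, 25])
```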

Encoder

Encoder structure, inputs and outputs
The encoder takes three inputs: src=(WH,N,256), src_mask=(N,WH), pos_embed=(WH,N,256) (note: W=W/32, H=H/32)

  • the sequence generated from the image features, shape=(WH,N,256)
  • the padding mask, shape=(N,WH)
  • the spatial position encoding of the image sequence, shape=(WH,N,256)

After the 6 encoder layers the output is a single sequence whose shape matches the input src: (WH,N,256) (note: W=W/32, H=H/32)

Model details

  • In the original transformer, only the first of the n encoder layers receives the position encoding, whereas DETR feeds the same position encoding into every encoder layer.
  • The Q/K/V handling also differs: inside each encoder layer the position encoding is added only to Q and K, while V is left untouched.

The TransformerEncoder class



def _get_clones(module, N):
    return nn.ModuleList([copy.deepcopy(module) for i in range(N)])

class TransformerEncoder(nn.Module):

    def __init__(self, encoder_layer, num_layers, norm=None):
        super().__init__()
        self.layers = _get_clones(encoder_layer, num_layers)
        self.num_layers = num_layers
        self.norm = norm

    def forward(self, src,
                mask: Optional[Tensor] = None,
                src_key_padding_mask: Optional[Tensor] = None,
                pos: Optional[Tensor] = None):
        output = src
        
        # 6 encoder layers by default, applied in sequence
        # encoder input: src=(WH,N,256), src_mask=(N,WH), pos_embed=(WH,N,256)
        # output: (WH,N,256)
        # the layers are identical and chained: each layer's output is the next layer's input
        
        for layer in self.layers:
            output = layer(output, src_mask=mask,
                           src_key_padding_mask=src_key_padding_mask, pos=pos)
        if self.norm is not None:
            output = self.norm(output)

        return output
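
Note that the _get_clones helper above deep-copies the layer, so the stacked encoder (and decoder) layers have independent weights rather than sharing one set of parameters. A quick check, as a sketch:

```python
import copy
import torch
from torch import nn

layer = nn.Linear(4, 4)
clones = nn.ModuleList([copy.deepcopy(layer) for _ in range(3)])

clones[0].weight.data.zero_()                  # modify only the first clone
print(torch.count_nonzero(clones[0].weight))   # tensor(0)  -> zeroed
print(torch.count_nonzero(clones[1].weight))   # tensor(16) -> the other clones are unaffected
```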


The TransformerEncoderLayer class


class TransformerEncoderLayer(nn.Module):

    def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
                 activation="relu", normalize_before=False):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
        # Implementation of Feedforward model
        self.linear1 = nn.Linear(d_model, dim_feedforward)
        self.dropout = nn.Dropout(dropout)
        self.linear2 = nn.Linear(dim_feedforward, d_model)

        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout1 = nn.Dropout(dropout)
        self.dropout2 = nn.Dropout(dropout)

        self.activation = _get_activation_fn(activation)
        self.normalize_before = normalize_before

    def with_pos_embed(self, tensor, pos: Optional[Tensor]):
        return tensor if pos is None else tensor + pos

    def forward_post(self,
                     src,
                     src_mask: Optional[Tensor] = None,
                     src_key_padding_mask: Optional[Tensor] = None,
                     pos: Optional[Tensor] = None):
        
        # src=(WH,N,256) mask= (N,WH)  pos_embed= (WH,N,256)
        # with_pos_embed takes src (the image sequence) and pos (the position encoding) and adds them
        # so that only Q and K carry the position encoding
        q = k = self.with_pos_embed(src, pos)

        # multi-head self-attention
        # inside the encoder the position encoding is added only to Q and K; V is left untouched

        src2 = self.self_attn(q, k, value=src, attn_mask=src_mask,
                              key_padding_mask=src_key_padding_mask)[0]
        # residual connection
        src = src + self.dropout1(src2)
        src = self.norm1(src)
        # FFN
        src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
        src = src + self.dropout2(src2)
        src = self.norm2(src)
        return src

    def forward_pre(self, src,
                    src_mask: Optional[Tensor] = None,
                    src_key_padding_mask: Optional[Tensor] = None,
                    pos: Optional[Tensor] = None):
        src2 = self.norm1(src)
        q = k = self.with_pos_embed(src2, pos)
        src2 = self.self_attn(q, k, value=src2, attn_mask=src_mask,
                              key_padding_mask=src_key_padding_mask)[0]

        src = src + self.dropout1(src2)
        src2 = self.norm2(src)
        src2 = self.linear2(self.dropout(self.activation(self.linear1(src2))))
        src = src + self.dropout2(src2)

        return src

    def forward(self, src,
                src_mask: Optional[Tensor] = None,
                src_key_padding_mask: Optional[Tensor] = None,
                pos: Optional[Tensor] = None):
        # encoder  src=(WH,N,256) mask= (N,WH)  pos_embed= (WH,N,256)
        # output=(WH,N,256)
        # normalize_before=False by default, so forward_post (annotated above) is the path taken
        if self.normalize_before:
            return self.forward_pre(src, src_mask, src_key_padding_mask, pos)

        return self.forward_post(src, src_mask, src_key_padding_mask, pos)


Decoder

Decoder structure, inputs and outputs

Input parameters
The decoder takes five inputs: tgt=(100,N,256), memory=(WH,N,256), mask=(N,WH), pos_embed=(WH,N,256), query_pos=(100,N,256)

  • tgt can be understood as the previous decoder layer's output, shape=(100,N,256); for the first layer tgt=torch.zeros_like(query_embed), i.e. an all-zero tensor
  • memory is the output of the last encoder layer, shape=(WH,N,256)
  • mask is the padding mask, shape=(N,WH)
  • pos is exactly the same position encoding the encoder receives, shape=(WH,N,256)
  • query_pos is the learnable output position embedding; (my understanding) it is shared across all decoder layers and provides a form of global attention, query_pos=(100,N,256)

Output parameters

  • the output is (decoder_layers, 100, N, 256), where decoder_layers is the number of decoder layers (6 by default in the paper)

Unlike the sequential decoding of the original transformer, DETR outputs the whole unordered set of box predictions in parallel in a single pass.

Object Query
Other explanations of the query_pos parameter (from other blog posts):
The paper points out that object queries play a role very similar to the anchors in Faster R-CNN, except that here they are learned rather than preset.
In the source code, object queries (shape (100,256)) are stored in a torch.nn.Embedding object.
Official description: a simple lookup table that stores embeddings of a fixed dictionary and size. This module is commonly used to store word embeddings and retrieve them by index: the input is a list of indices and the output is the corresponding embeddings.

My understanding: query_pos can simply be regarded as the output position encoding. Its main role during training is to capture the relationship between candidate objects and the global image, acting like a form of global attention, which makes it essential. In code it is just a learnable position-embedding matrix, and like the encoder's position encoding it is fed into every decoder layer. Put plainly, the object-queries matrix learns to model the global relationships among the 100 candidate objects and takes part in training together with the rest of the network.
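
In code this is just an nn.Embedding table inside the DETR class; its weight matrix is what gets passed to the transformer as query_embed (sketch):

```python
from torch import nn

num_queries, hidden_dim = 100, 256
query_embed = nn.Embedding(num_queries, hidden_dim)   # a learnable (100, 256) lookup table
print(query_embed.weight.shape)                       # torch.Size([100, 256])

# inside DETR.forward it is used as:
# hs = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos[-1])[0]
```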

Other details:

  • tgt (for the first layer the input is a zero tensor shaped like the query embedding; subsequent layers take the previous layer's output out);
  • As in the encoder, only Q and K have position encodings added; V never does.
  • Learnable object queries are introduced.
  • No sequential decoding is needed: the whole unordered set of N predictions comes out in one pass.

The TransformerDecoder class

class TransformerDecoder(nn.Module):

    def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
        super().__init__()
        self.layers = _get_clones(decoder_layer, num_layers)
        self.num_layers = num_layers
        self.norm = norm
        self.return_intermediate = return_intermediate

    def forward(self, tgt, memory,
                tgt_mask: Optional[Tensor] = None,
                memory_mask: Optional[Tensor] = None,
                tgt_key_padding_mask: Optional[Tensor] = None,
                memory_key_padding_mask: Optional[Tensor] = None,
                pos: Optional[Tensor] = None,
                query_pos: Optional[Tensor] = None):
        # decoder tgt=(100,N,256) memory=(WH,N,256),mask=(N,WH) pos_embed=(WH,N,256) query_embed=(100,N,256)
        output = tgt
        intermediate = []

        for layer in self.layers:

            output = layer(output, memory, tgt_mask=tgt_mask,
                           memory_mask=memory_mask,
                           tgt_key_padding_mask=tgt_key_padding_mask,
                           memory_key_padding_mask=memory_key_padding_mask,
                           pos=pos, query_pos=query_pos)
            if self.return_intermediate:
                intermediate.append(self.norm(output))

        if self.norm is not None:
            output = self.norm(output)
            if self.return_intermediate:
                intermediate.pop()
                intermediate.append(output)

        # intermediate = [output of each layer], intermediate[0].shape=(100,N,256)
        # return_intermediate=True in DETR, so the stacked intermediate outputs are returned
        if self.return_intermediate:
            return torch.stack(intermediate)

        return output.unsqueeze(0)

The TransformerDecoderLayer class

class TransformerDecoderLayer(nn.Module):

    def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
                 activation="relu", normalize_before=False):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
        self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
        # Implementation of Feedforward model
        self.linear1 = nn.Linear(d_model, dim_feedforward)
        self.dropout = nn.Dropout(dropout)
        self.linear2 = nn.Linear(dim_feedforward, d_model)

        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout1 = nn.Dropout(dropout)
        self.dropout2 = nn.Dropout(dropout)
        self.dropout3 = nn.Dropout(dropout)

        self.activation = _get_activation_fn(activation)
        self.normalize_before = normalize_before

    def with_pos_embed(self, tensor, pos: Optional[Tensor]):
        return tensor if pos is None else tensor + pos

    def forward_post(self, tgt, memory,
                     tgt_mask: Optional[Tensor] = None,
                     memory_mask: Optional[Tensor] = None,
                     tgt_key_padding_mask: Optional[Tensor] = None,
                     memory_key_padding_mask: Optional[Tensor] = None,
                     pos: Optional[Tensor] = None,
                     query_pos: Optional[Tensor] = None):

        # decoder: tgt=(100,N,256), memory=(WH,N,256), mask=(N,WH), pos_embed=(WH,N,256), query_pos=(100,N,256)
        # first attention block (self-attention): tgt is the previous layer's output,
        # or torch.zeros_like(query_embed) for the first layer
        # query_pos is shared: every decoder layer uses the same set of query embeddings


        q = k = self.with_pos_embed(tgt, query_pos)
        tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,
                              key_padding_mask=tgt_key_padding_mask)[0]
        tgt = tgt + self.dropout1(tgt2)
        tgt = self.norm1(tgt)

        # second attention block (cross-attention over the encoder memory)
        # query = with_pos_embed(tgt, query_pos): Q carries the object-query embedding
        # key   = with_pos_embed(memory, pos):    K carries the spatial position encoding

        tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),
                                   key=self.with_pos_embed(memory, pos),
                                   value=memory, attn_mask=memory_mask,
                                   key_padding_mask=memory_key_padding_mask)[0]
        tgt = tgt + self.dropout2(tgt2)
        tgt = self.norm2(tgt)
        # FFN
        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
        tgt = tgt + self.dropout3(tgt2)
        tgt = self.norm3(tgt)
        return tgt

    def forward_pre(self, tgt, memory,
                    tgt_mask: Optional[Tensor] = None,
                    memory_mask: Optional[Tensor] = None,
                    tgt_key_padding_mask: Optional[Tensor] = None,
                    memory_key_padding_mask: Optional[Tensor] = None,
                    pos: Optional[Tensor] = None,
                    query_pos: Optional[Tensor] = None):
        # decoder: tgt=(100,N,256), memory=(WH,N,256), mask=(N,WH), pos_embed=(WH,N,256), query_pos=(100,N,256)

        tgt2 = self.norm1(tgt)
        q = k = self.with_pos_embed(tgt2, query_pos)
        tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
                              key_padding_mask=tgt_key_padding_mask)[0]
        tgt = tgt + self.dropout1(tgt2)
        tgt2 = self.norm2(tgt)
        tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
                                   key=self.with_pos_embed(memory, pos),
                                   value=memory, attn_mask=memory_mask,
                                   key_padding_mask=memory_key_padding_mask)[0]
        tgt = tgt + self.dropout2(tgt2)
        tgt2 = self.norm3(tgt)
        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
        tgt = tgt + self.dropout3(tgt2)
        return tgt

    def forward(self, tgt, memory,
                tgt_mask: Optional[Tensor] = None,
                memory_mask: Optional[Tensor] = None,
                tgt_key_padding_mask: Optional[Tensor] = None,
                memory_key_padding_mask: Optional[Tensor] = None,
                pos: Optional[Tensor] = None,
                query_pos: Optional[Tensor] = None):

        # decoder input tgt=(100,N,256) memory=(WH,N,256),mask=(N,WH) pos_embed=(WH,N,256) query_embed=(100,N,256)

        # normalize_before is False by default, so forward_post is used
        if self.normalize_before:
            return self.forward_pre(tgt, memory, tgt_mask, memory_mask,
                                    tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos)
        return self.forward_post(tgt, memory, tgt_mask, memory_mask,
                                 tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos)


FFN

Finally, prediction heads (FFN) sit on top of the decoder output: one for classification and one for box regression.

Classification: a single linear layer.
Box prediction: as described in the paper, the final box prediction is computed by a 3-layer perceptron with ReLU activations and a hidden layer, plus a linear projection layer. The MLP predicts the normalized box center coordinates, height and width relative to the input image, while the linear layer predicts the class label with a softmax.

In the DETR class:
        # input: hs.shape = (decoder_layers, N, 100, 256)

        # classification head: self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
        # Linear class head: input=(decoder_layers, N, 100, 256), output=(decoder_layers, N, 100, num_classes+1)
        outputs_class = self.class_embed(hs)
        
        # box head: self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
        # MLP bounding-box head: input=(decoder_layers, N, 100, 256), output=(decoder_layers, N, 100, 4)
        outputs_coord = self.bbox_embed(hs).sigmoid()
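
The MLP used for the box head is a small helper in models/detr.py, roughly as follows:

```python
import torch.nn.functional as F
from torch import nn

class MLP(nn.Module):
    """Simple multi-layer perceptron: Linear + ReLU blocks, no activation after the last layer."""
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList(nn.Linear(n, k)
                                    for n, k in zip([input_dim] + h, h + [output_dim]))

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
        return x

# MLP(256, 256, 4, 3) maps 256 -> 256 -> 256 -> 4; sigmoid is applied outside to get (cx, cy, w, h)
```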

