anchor_target_layer explained


To summarize: generate_anchors produces a set of coordinate offsets determined by the scales and ratios, effectively precomputed ahead of time. anchor_target_layer first computes the center coordinates mapped from the feature map back to the original image, then applies these offsets to the centers to generate the different boxes.

The anchor_target_layer is the layer that generates anchors during the RPN training stage.

Source code:

# --------------------------------------------------------
# Faster R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick and Sean Bell
# --------------------------------------------------------

import os
import caffe
import yaml
from fast_rcnn.config import cfg
import numpy as np
import numpy.random as npr
from generate_anchors import generate_anchors
from utils.cython_bbox import bbox_overlaps
from fast_rcnn.bbox_transform import bbox_transform

DEBUG = False

class AnchorTargetLayer(caffe.Layer):
    """
    Assign anchors to ground-truth targets. Produces anchor classification
    labels and bounding-box regression targets.
    """

    def setup(self, bottom, top):
        layer_params = yaml.load(self.param_str_)
        anchor_scales = layer_params.get('scales', (8, 16, 32))
        # generate_anchors builds coordinate offsets from the ratios and scales;
        # applying them to a center point yields the different anchors
        self._anchors = generate_anchors(scales=np.array(anchor_scales))
        self._num_anchors = self._anchors.shape[0]
        # feat_stride corresponds to spatial_scale in roi_pooling: one is 16, the
        # other is 1/16; feat_stride maps center coordinates from the feature map
        # to the original image, while spatial_scale maps RoI coordinates from the
        # original image back to the feature map
        self._feat_stride = layer_params['feat_stride']

        if DEBUG:
            print 'anchors:'
            print self._anchors
            print 'anchor shapes:'
            print np.hstack((
                self._anchors[:, 2::4] - self._anchors[:, 0::4],
                self._anchors[:, 3::4] - self._anchors[:, 1::4],
            ))
            self._counts = cfg.EPS
            self._sums = np.zeros((1, 4))
            self._squared_sums = np.zeros((1, 4))
            self._fg_sum = 0
            self._bg_sum = 0
            self._count = 0

        # allow boxes to sit over the edge by a small amount
        self._allowed_border = layer_params.get('allowed_border', 0)

        height, width = bottom[0].data.shape[-2:]
        if DEBUG:
            print 'AnchorTargetLayer: height', height, 'width', width

        A = self._num_anchors
        # labels
        top[0].reshape(1, 1, A * height, width)
        # bbox_targets
        top[1].reshape(1, A * 4, height, width)  # shapes of the output blobs
        # bbox_inside_weights
        top[2].reshape(1, A * 4, height, width)
        # bbox_outside_weights
        top[3].reshape(1, A * 4, height, width)

    def forward(self, bottom, top):
        # Algorithm:
        #
        # for each (H, W) location i
        #   generate 9 anchor boxes centered on cell i
        #   apply predicted bbox deltas at cell i to each of the 9 anchors
        # filter out-of-image anchors
        # measure GT overlap

        assert bottom[0].data.shape[0] == 1, \
            'Only single item batches are supported'

        # map of shape (..., H, W)
        # height and width of the last feature map of the feature-extraction
        # layers; see the analysis below the code block for why
        height, width = bottom[0].data.shape[-2:]
        # GT boxes (x1, y1, x2, y2, label)
        gt_boxes = bottom[1].data
        # im_info
        im_info = bottom[2].data[0, :]

        if DEBUG:
            print ''
            print 'im_size: ({}, {})'.format(im_info[0], im_info[1])
            print 'scale: {}'.format(im_info[2])
            print 'height, width: ({}, {})'.format(height, width)
            print 'rpn: gt_boxes.shape', gt_boxes.shape
            print 'rpn: gt_boxes', gt_boxes

        # 1. Generate proposals from bbox deltas and shifted anchors
        shift_x = np.arange(0, width) * self._feat_stride
        shift_y = np.arange(0, height) * self._feat_stride
        shift_x, shift_y = np.meshgrid(shift_x, shift_y)
        shifts = np.vstack((shift_x.ravel(), shift_y.ravel(),
                            shift_x.ravel(), shift_y.ravel())).transpose()
        # add A anchors (1, A, 4) to
        # cell K shifts (K, 1, 4) to get
        # shift anchors (K, A, 4)
        # reshape to (K*A, 4) shifted anchors
        A = self._num_anchors
        K = shifts.shape[0]
        all_anchors = (self._anchors.reshape((1, A, 4)) +
                       shifts.reshape((1, K, 4)).transpose((1, 0, 2)))
        all_anchors = all_anchors.reshape((K * A, 4))
        total_anchors = int(K * A)

        # only keep anchors inside the image
        inds_inside = np.where(
            (all_anchors[:, 0] >= -self._allowed_border) &
            (all_anchors[:, 1] >= -self._allowed_border) &
            (all_anchors[:, 2] < im_info[1] + self._allowed_border) &  # width
            (all_anchors[:, 3] < im_info[0] + self._allowed_border)    # height
        )[0]

        if DEBUG:
            print 'total_anchors', total_anchors
            print 'inds_inside', len(inds_inside)

        # keep only inside anchors
        anchors = all_anchors[inds_inside, :]
        if DEBUG:
            print 'anchors.shape', anchors.shape

        # label: 1 is positive, 0 is negative, -1 is dont care
        labels = np.empty((len(inds_inside), ), dtype=np.float32)
        labels.fill(-1)

        # overlaps between the anchors and the gt boxes
        # overlaps (ex, gt)
        overlaps = bbox_overlaps(
            np.ascontiguousarray(anchors, dtype=np.float),
            np.ascontiguousarray(gt_boxes, dtype=np.float))
        # argmax_overlaps: for each anchor, the index of the gt box with the
        # largest overlap
        argmax_overlaps = overlaps.argmax(axis=1)
        max_overlaps = overlaps[np.arange(len(inds_inside)), argmax_overlaps]
        # gt_argmax_overlaps: for each gt box, the index of the anchor with the
        # largest overlap
        gt_argmax_overlaps = overlaps.argmax(axis=0)
        gt_max_overlaps = overlaps[gt_argmax_overlaps,
                                   np.arange(overlaps.shape[1])]
        gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]

        if not cfg.TRAIN.RPN_CLOBBER_POSITIVES:
            # assign bg labels first so that positive labels can clobber them
            labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0

        # fg label: for each gt, anchor with highest overlap
        labels[gt_argmax_overlaps] = 1

        # fg label: above threshold IOU
        labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1

        if cfg.TRAIN.RPN_CLOBBER_POSITIVES:
            # assign bg labels last so that negative labels can clobber positives
            labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0

        # subsample positive labels if we have too many
        num_fg = int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCHSIZE)
        fg_inds = np.where(labels == 1)[0]
        if len(fg_inds) > num_fg:
            disable_inds = npr.choice(
                fg_inds, size=(len(fg_inds) - num_fg), replace=False)
            labels[disable_inds] = -1

        # subsample negative labels if we have too many
        num_bg = cfg.TRAIN.RPN_BATCHSIZE - np.sum(labels == 1)
        bg_inds = np.where(labels == 0)[0]
        if len(bg_inds) > num_bg:
            disable_inds = npr.choice(
                bg_inds, size=(len(bg_inds) - num_bg), replace=False)
            labels[disable_inds] = -1
            #print "was %s inds, disabling %s, now %s inds" % (
                #len(bg_inds), len(disable_inds), np.sum(labels == 0))

        bbox_targets = np.zeros((len(inds_inside), 4), dtype=np.float32)
        bbox_targets = _compute_targets(anchors, gt_boxes[argmax_overlaps, :])

        bbox_inside_weights = np.zeros((len(inds_inside), 4), dtype=np.float32)
        bbox_inside_weights[labels == 1, :] = np.array(cfg.TRAIN.RPN_BBOX_INSIDE_WEIGHTS)

        bbox_outside_weights = np.zeros((len(inds_inside), 4), dtype=np.float32)
        if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:
            # uniform weighting of examples (given non-uniform sampling)
            num_examples = np.sum(labels >= 0)
            positive_weights = np.ones((1, 4)) * 1.0 / num_examples
            negative_weights = np.ones((1, 4)) * 1.0 / num_examples
        else:
            assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) &
                    (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))
            positive_weights = (cfg.TRAIN.RPN_POSITIVE_WEIGHT /
                                np.sum(labels == 1))
            negative_weights = ((1.0 - cfg.TRAIN.RPN_POSITIVE_WEIGHT) /
                                np.sum(labels == 0))
        bbox_outside_weights[labels == 1, :] = positive_weights
        bbox_outside_weights[labels == 0, :] = negative_weights

        if DEBUG:
            self._sums += bbox_targets[labels == 1, :].sum(axis=0)
            self._squared_sums += (bbox_targets[labels == 1, :] ** 2).sum(axis=0)
            self._counts += np.sum(labels == 1)
            means = self._sums / self._counts
            stds = np.sqrt(self._squared_sums / self._counts - means ** 2)
            print 'means:'
            print means
            print 'stdevs:'
            print stds

        # map up to original set of anchors
        labels = _unmap(labels, total_anchors, inds_inside, fill=-1)
        bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0)
        bbox_inside_weights = _unmap(bbox_inside_weights, total_anchors, inds_inside, fill=0)
        bbox_outside_weights = _unmap(bbox_outside_weights, total_anchors, inds_inside, fill=0)

        if DEBUG:
            print 'rpn: max max_overlap', np.max(max_overlaps)
            print 'rpn: num_positive', np.sum(labels == 1)
            print 'rpn: num_negative', np.sum(labels == 0)
            self._fg_sum += np.sum(labels == 1)
            self._bg_sum += np.sum(labels == 0)
            self._count += 1
            print 'rpn: num_positive avg', self._fg_sum / self._count
            print 'rpn: num_negative avg', self._bg_sum / self._count

        # labels
        labels = labels.reshape((1, height, width, A)).transpose(0, 3, 1, 2)
        labels = labels.reshape((1, 1, A * height, width))
        top[0].reshape(*labels.shape)
        top[0].data[...] = labels

        # bbox_targets
        bbox_targets = bbox_targets \
            .reshape((1, height, width, A * 4)).transpose(0, 3, 1, 2)
        top[1].reshape(*bbox_targets.shape)
        top[1].data[...] = bbox_targets

        # bbox_inside_weights
        bbox_inside_weights = bbox_inside_weights \
            .reshape((1, height, width, A * 4)).transpose(0, 3, 1, 2)
        assert bbox_inside_weights.shape[2] == height
        assert bbox_inside_weights.shape[3] == width
        top[2].reshape(*bbox_inside_weights.shape)
        top[2].data[...] = bbox_inside_weights

        # bbox_outside_weights
        bbox_outside_weights = bbox_outside_weights \
            .reshape((1, height, width, A * 4)).transpose(0, 3, 1, 2)
        assert bbox_outside_weights.shape[2] == height
        assert bbox_outside_weights.shape[3] == width
        top[3].reshape(*bbox_outside_weights.shape)
        top[3].data[...] = bbox_outside_weights

    def backward(self, top, propagate_down, bottom):
        """This layer does not propagate gradients."""
        pass

    def reshape(self, bottom, top):
        """Reshaping happens during the call to forward."""
        pass


def _unmap(data, count, inds, fill=0):
    """ Unmap a subset of item (data) back to the original set of items (of
    size count) """
    if len(data.shape) == 1:
        ret = np.empty((count, ), dtype=np.float32)
        ret.fill(fill)
        ret[inds] = data
    else:
        ret = np.empty((count, ) + data.shape[1:], dtype=np.float32)
        ret.fill(fill)
        ret[inds, :] = data
    return ret


def _compute_targets(ex_rois, gt_rois):
    """Compute bounding-box regression targets for an image."""

    assert ex_rois.shape[0] == gt_rois.shape[0]
    assert ex_rois.shape[1] == 4
    assert gt_rois.shape[1] == 5

    return bbox_transform(ex_rois, gt_rois[:, :4]).astype(np.float32, copy=False)
self._anchors:

I use the model's default 3x3 anchor setting (3 scales x 3 ratios), so the numpy array has shape (9, 4). The first value is the amount by which the center's x coordinate changes to give the box's minimum x; the second is the amount by which the center's y coordinate changes to give the box's minimum y; together these form the box's top-left corner. The third value is the change to the center's x that gives the box's maximum x, and the fourth is the change to the center's y that gives the box's maximum y; together these form the box's bottom-right corner.
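As a small illustrative sketch of this interpretation (the offset row and center point below are made-up example numbers, not output of generate_anchors), adding one (dx1, dy1, dx2, dy2) row to a center written as (cx, cy, cx, cy) yields the four box corners:

import numpy as np

# hypothetical offset row laid out as (dx1, dy1, dx2, dy2)
anchor_offset = np.array([-84.0, -40.0, 99.0, 55.0])

# a center point in original-image coordinates, duplicated so it lines up
# with the (x1, y1, x2, y2) layout of the offset row
cx, cy = 320.0, 240.0
center = np.array([cx, cy, cx, cy])

box = center + anchor_offset        # (x1, y1, x2, y2) = (236, 200, 419, 295)
print(box)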

shift_x:

shift_x has length 61, which is simply the width of the last feature map. It takes the integers 0 through 60 and multiplies each by 16. The values 0 through 60 are the coordinate positions on the feature map, which are also the anchor centers; multiplying by 16 maps them to coordinates on the original image, and those become the anchor centers on the original image.

shift_y:

shift_y has length 39, the height of the last feature map; otherwise it is analogous to shift_x.
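A minimal sketch of how these two vectors are built, assuming the 61x39 feature-map size quoted above and feat_stride = 16:

import numpy as np

feat_stride = 16
width, height = 61, 39                         # assumed last-feature-map size

shift_x = np.arange(0, width) * feat_stride    # 0, 16, ..., 960 -> shape (61,)
shift_y = np.arange(0, height) * feat_stride   # 0, 16, ..., 608 -> shape (39,)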

 
        

shift_x, shift_y = np.meshgrid(shift_x, shift_y) then turns shift_x and shift_y into grids:

  

Both shift_x and shift_y become 39x61 arrays; the difference is that shift_x is its original row repeated down 39 rows, while shift_y is its original column repeated across 61 columns.

 

shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose() then produces shifts:


shifts has shape 2379x4. shift_x.ravel() flattens the 39x61 shift_x into a length-2379 vector that becomes the first and third columns, and shift_y.ravel() flattens the 39x61 shift_y into a length-2379 vector that becomes the second and fourth columns.

For a long time I did not understand why two copies of shift_x are needed. The reason is that the anchor coordinate offsets are additions and subtractions around a center point, and this step produces the center coordinates of the 2379 anchor positions. A center is two-dimensional, just x and y, but the next step applies the coordinate offsets to turn each center into an anchor box, and a box needs four values (top-left and bottom-right corners), so the centers are expanded to four dimensions. The first column yields the box's minimum x and the third its maximum x, both obtained by offsetting the center's x; likewise, the second and fourth columns operate on the center's y.
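Continuing the same assumed 61x39 feature map, a short sketch of how meshgrid plus vstack turns the two vectors into one (x, y, x, y) row per feature-map location:

import numpy as np

feat_stride = 16
width, height = 61, 39

shift_x = np.arange(0, width) * feat_stride
shift_y = np.arange(0, height) * feat_stride

# meshgrid tiles shift_x down 39 rows and shift_y across 61 columns,
# so both grids have shape (39, 61)
shift_x, shift_y = np.meshgrid(shift_x, shift_y)

# ravel() flattens each grid to length 2379; stacking (x, y, x, y) and
# transposing gives one duplicated center row per location
shifts = np.vstack((shift_x.ravel(), shift_y.ravel(),
                    shift_x.ravel(), shift_y.ravel())).transpose()
print(shifts.shape)      # (2379, 4)
print(shifts[:2])        # first rows: [0 0 0 0], [16 0 16 0]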

 

self._anchors.reshape((1, A, 4)) simply adds a leading dimension:

 

all_anchors = (self._anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2))) then produces all_anchors:

 

After reshaping, self._anchors has shape (1, 9, 4) and shifts becomes (2379, 1, 4); the broadcast addition gives all_anchors with shape (2379, 9, 4), which amounts to adding each of the 2379 center coordinates to each of the 9 anchor offsets (I wrote a separate summary of numpy's + broadcasting; this works the same way).

For example, the first entry along the first dimension is (0, 0, 0, 0) added to each of the 9 anchor offsets.

all_anchors = all_anchors.reshape((K * A, 4)) then yields the 2379 * 9 box coordinates.
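A self-contained sketch of this broadcasting step under the same assumptions (it reuses the repo's generate_anchors, as the layer itself does):

import numpy as np
from generate_anchors import generate_anchors   # module used by the layer above

feat_stride, width, height = 16, 61, 39

shift_x, shift_y = np.meshgrid(np.arange(width) * feat_stride,
                               np.arange(height) * feat_stride)
shifts = np.vstack((shift_x.ravel(), shift_y.ravel(),
                    shift_x.ravel(), shift_y.ravel())).transpose()

anchors = generate_anchors(scales=np.array((8, 16, 32)))   # (9, 4) offsets
A, K = anchors.shape[0], shifts.shape[0]                   # 9, 2379

# (1, A, 4) + (K, 1, 4) broadcasts to (K, A, 4): every one of the K centers
# gets all A offset rows added to it
all_anchors = (anchors.reshape((1, A, 4)) +
               shifts.reshape((1, K, 4)).transpose((1, 0, 2)))
all_anchors = all_anchors.reshape((K * A, 4))
print(all_anchors.shape)      # (21411, 4), i.e. 2379 * 9 boxes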


layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'
    layer: 'AnchorTargetLayer'
    param_str: "'feat_stride': 16"
  }
}

This is the prototxt for the rpn-data layer; you can see it has 4 bottoms (inputs) and 4 tops (outputs).

 height, width = bottom[0].data.shape[-2:]

This line gets the height and width of the last layer of the feature extractor. Why? bottom[0] is rpn_cls_score, the foreground/background prediction scores produced by the RPN's 3x3 convolution followed by a 1x1 convolution, so this feature map has shape (2*k, height, width). Its height and width are the same as those of the feature extractor's last layer, because the 3x3 convolution uses stride 1 and pad 1 and the 1x1 convolution does not change the spatial size of the feature map. The paper slides the RPN over the last feature-extraction layer; in the actual code, the shape of rpn_cls_score stands in for that layer. So rpn_cls_score does not feed its score values into the rpn-data layer; it only supplies the shape the RPN sliding window needs.
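A quick way to check this claim is the standard convolution output-size formula out = (in + 2*pad - kernel) / stride + 1; the sketch below just plugs in the assumed 61x39 size:

def conv_out(size, kernel, pad, stride):
    # standard convolution output-size formula
    return (size + 2 * pad - kernel) // stride + 1

w, h = 61, 39                                        # assumed feature-map size
w, h = conv_out(w, 3, 1, 1), conv_out(h, 3, 1, 1)    # RPN 3x3 conv, pad 1, stride 1
w, h = conv_out(w, 1, 0, 1), conv_out(h, 1, 0, 1)    # 1x1 score conv
print(w, h)                                          # still 61 and 39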

.data extracts the actual data, .shape is that data's shape, and [-2:] takes the last two entries of the shape (the second-to-last through the last).

gt_boxes is the input ground-truth boxes, im_info holds the image size (note: the original input image, not the feature map), and data is the array of all the pixels of the image itself.
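For reference, a sketch of what these three bottoms are assumed to contain (the layouts follow the comments and debug prints in the code; the concrete numbers are made up):

import numpy as np

# gt_boxes: one row per ground-truth object, (x1, y1, x2, y2, class_label)
gt_boxes = np.array([[ 48.,  32., 311., 279., 12.],
                     [400., 100., 590., 350.,  7.]], dtype=np.float32)

# im_info: (resized image height, resized image width, resize scale)
im_info = np.array([600., 975., 1.6], dtype=np.float32)

# data: the resized image itself as a single-image NCHW batch
data = np.zeros((1, 3, int(im_info[0]), int(im_info[1])), dtype=np.float32)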

if DEBUG:
    print ''
    print 'im_size: ({}, {})'.format(im_info[0], im_info[1])
    print 'scale: {}'.format(im_info[2])
    print 'height, width: ({}, {})'.format(height, width)
    print 'rpn: gt_boxes.shape', gt_boxes.shape
    print 'rpn: gt_boxes', gt_boxes

These values can be read off easily from this debug section.

 

