Can batch size only be 1 when training py-faster-rcnn end2end?


http://caffecn.cn/?/question/509

When training py-faster-rcnn with the end2end method, setting TRAIN.IMS_PER_BATCH to 2 produces an error saying that the batch sizes of data and label do not match, as shown below:

[Screenshot: Caffe error reporting that the batch sizes of data and label do not match]


 
Looking at the source file lib/rpn/anchor_target_layer.py, the batch size of anchor_target_layer's top[0] is hard-coded to 1:

[Screenshots of lib/rpn/anchor_target_layer.py showing the output batch size hard-coded to 1]
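Since the screenshots are no longer available, the relevant lines are sketched below. This is a paraphrase of the upstream anchor_target_layer.py, not a verbatim copy, so shapes and the assertion message may differ slightly from the actual file:

    # Sketch of lib/rpn/anchor_target_layer.py (paraphrased, not verbatim)
    def setup(self, bottom, top):
        # ... anchor generation omitted ...
        height, width = bottom[0].data.shape[-2:]
        A = self._num_anchors
        # The leading 1 is the batch dimension, hard-coded to a single image:
        top[0].reshape(1, 1, A * height, width)    # labels
        top[1].reshape(1, A * 4, height, width)    # bbox_targets
        top[2].reshape(1, A * 4, height, width)    # bbox_inside_weights
        top[3].reshape(1, A * 4, height, width)    # bbox_outside_weights

    def forward(self, bottom, top):
        # The layer explicitly rejects multi-image batches:
        assert bottom[0].data.shape[0] == 1, \
            'Only single item batches are supported'
        # ... target computation omitted ...

In other words, because anchor_target_layer only ever emits labels for one image, feeding it two images per batch makes the label blob's batch dimension disagree with the data blob's.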


 
This is presumably why the "data and label batch sizes do not match" error occurs.
But why is the batch size in anchor_target_layer.py fixed at 1? The official experiments/cfgs/faster_rcnn_end2end.yml also sets TRAIN.IMS_PER_BATCH to 1. Does that mean TRAIN.IMS_PER_BATCH can only be 1 for end2end training, and if so, why?
How could the code be modified so that TRAIN.IMS_PER_BATCH can take any value?
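For reference, the shipped end-to-end config keeps the single-image batch. A minimal sketch of the relevant keys in experiments/cfgs/faster_rcnn_end2end.yml (other keys omitted; values recalled from the repo and possibly incomplete):

    EXP_DIR: faster_rcnn_end2end
    TRAIN:
      HAS_RPN: True        # the RPN is trained inside the same network
      IMS_PER_BATCH: 1     # one image per forward/backward pass
    TEST:
      HAS_RPN: True

Raising IMS_PER_BATCH alone only changes what the data layer produces; the Python RPN layers downstream would also have to be rewritten to handle a batch dimension greater than 1.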

======================================================================================================================================

Training with Mini-Batch size greater than 1 #267  https://github.com/rbgirshick/py-faster-rcnn/issues/267

how to change the batchsize when training the rpn model? #51  https://github.com/rbgirshick/py-faster-rcnn/issues/51

 

@rbgirshick: It's just not implemented. A reasonable workaround, which is already used, is to set iter_size: N in the solver, in which N is the batch size you want (currently 2 for end to end training). This may be slightly less efficient than batching, but since the inputs are spatially large operating on one image at a time isn't too bad (plus if you use more than one image there's some wasted computation where padding is introduced to fill the 4D tensor).
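To make that workaround concrete, a solver.prototxt for end-to-end training might look roughly like the sketch below. Paths and hyperparameters are illustrative (loosely modeled on the VGG16 end-to-end solver), with iter_size added so Caffe accumulates gradients over two single-image passes:

    # Illustrative solver sketch; paths and values are not copied verbatim.
    train_net: "models/pascal_voc/VGG16/faster_rcnn_end2end/train.prototxt"
    base_lr: 0.001
    lr_policy: "step"
    gamma: 0.1
    stepsize: 50000
    momentum: 0.9
    weight_decay: 0.0005
    display: 20
    # Accumulate gradients over 2 forward/backward passes before each update,
    # giving an effective batch size of 2 while each pass still feeds a
    # single image through the Python RPN layers.
    iter_size: 2
    # py-faster-rcnn snapshots from Python, so Caffe's own snapshotting stays off.
    snapshot: 0
    snapshot_prefix: "vgg16_faster_rcnn"

With iter_size: 2, every parameter update accumulates the gradients of two images, which matches a batch of 2 in expectation at the cost of two separate forward/backward passes.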

 

Batch size of faster rcnn #487  https://github.com/rbgirshick/py-faster-rcnn/issues/487

 

