Is the batch size limited to 1 for py-faster-rcnn end-to-end training?


http://caffecn.cn/?/question/509

When training py-faster-rcnn with the end-to-end method, setting TRAIN.IMS_PER_BATCH to 2 raises an error saying that the batch sizes of data and label do not match:

[screenshot 1.jpg: error log showing the data/label batch size mismatch]


 
In the source file lib/rpn/anchor_target_layer.py you can see that the batch size of anchor_target_layer's top[0] is hardcoded to 1:

[screenshot 2.jpg: anchor_target_layer.py source]



[screenshot 3.jpg: anchor_target_layer.py, top[0] reshaped with batch dimension 1]
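The mismatch can be illustrated with a small standalone sketch. The shapes and names below follow the pattern of the real layer's setup() but are simplified, not copied from the source:

```python
# Illustrative mock (not the exact py-faster-rcnn code) of how
# anchor_target_layer.setup() shapes its top blobs: the batch
# dimension of every top is written as the literal 1.

NUM_ANCHORS = 9  # A = len(anchor_scales) * len(anchor_ratios) in the real layer

def anchor_target_top_shapes(feat_height, feat_width, A=NUM_ANCHORS):
    """Return the shapes the layer assigns to its tops in setup()."""
    return {
        # labels: batch dim hardcoded to 1, regardless of IMS_PER_BATCH
        'labels':               (1, 1, A * feat_height, feat_width),
        'bbox_targets':         (1, A * 4, feat_height, feat_width),
        'bbox_inside_weights':  (1, A * 4, feat_height, feat_width),
        'bbox_outside_weights': (1, A * 4, feat_height, feat_width),
    }

ims_per_batch = 2                          # TRAIN.IMS_PER_BATCH = 2
data_shape = (ims_per_batch, 3, 600, 800)  # data blob from the input layer
labels_shape = anchor_target_top_shapes(38, 50)['labels']

# The loss layer then sees data batch 2 vs. label batch 1 and aborts.
print('data batch:', data_shape[0], 'label batch:', labels_shape[0])
```

Whatever IMS_PER_BATCH is set to, the label top always comes out with batch 1, so any value other than 1 trips Caffe's shape check.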


 
This is presumably the cause of the data/label batch size mismatch error.
But then, why is the batch size in anchor_target_layer.py fixed at 1? The official experiments/cfgs/faster_rcnn_end2end.yml also sets TRAIN.IMS_PER_BATCH to 1. Does that mean TRAIN.IMS_PER_BATCH can only be 1 for end-to-end training? Why?
Is there a way to modify the code so that TRAIN.IMS_PER_BATCH can take any value?

======================================================================================================================================

Training with Mini-Batch size greater than 1 #267  https://github.com/rbgirshick/py-faster-rcnn/issues/267

How to change the batch size when training the RPN model? #51  https://github.com/rbgirshick/py-faster-rcnn/issues/51

 

@rbgirshick: It's just not implemented. A reasonable workaround, which is already used, is to set iter_size: N in the solver, in which N is the batch size you want (currently 2 for end-to-end training). This may be slightly less efficient than batching, but since the inputs are spatially large, operating on one image at a time isn't too bad (plus, if you use more than one image, there's some wasted computation where padding is introduced to fill the 4D tensor).
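In practice the workaround is a one-line addition to the solver prototxt. The fragment below is a sketch; the train_net path and hyperparameters are placeholders in the style of the VGG16 end-to-end setup, and only the iter_size line is the point:

```
# solver.prototxt fragment (other fields omitted)
train_net: "models/pascal_voc/VGG16/faster_rcnn_end2end/train.prototxt"
base_lr: 0.001
momentum: 0.9
# Accumulate gradients over 2 forward/backward passes before each weight
# update, emulating an effective batch of 2 images while each individual
# pass still feeds a single image (TRAIN.IMS_PER_BATCH stays 1).
iter_size: 2
```

This gives the gradient statistics of a larger batch without touching anchor_target_layer.py, at the cost of more iterations per update.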

 

Batch size of faster rcnn #487  https://github.com/rbgirshick/py-faster-rcnn/issues/487

 

