Deformable DETR
This repository is an official implementation of the paper Deformable DETR: Deformable Transformers for End-to-End Object Detection.
https://github.com/fundamentalvision/deformable-detr
Introduction
Deformable DETR is an efficient and fast-converging end-to-end object detector. It mitigates the high complexity and slow convergence issues of DETR via a novel sampling-based efficient attention mechanism.
Abstract
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.
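The core idea in the abstract is that each query attends to only a handful of sampled locations around a reference point instead of the whole feature map. Below is a minimal, single-scale, single-head PyTorch sketch of that sampling-based attention, for illustration only: it is not the repository's optimized multi-scale CUDA operator under models/ops, and the class name, point count, and toy shapes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionSketch(nn.Module):
    def __init__(self, d_model=256, n_points=4):
        super().__init__()
        self.n_points = n_points
        # For each query, predict (x, y) offsets and a weight for every sampling point.
        self.sampling_offsets = nn.Linear(d_model, n_points * 2)
        self.attention_weights = nn.Linear(d_model, n_points)
        self.value_proj = nn.Linear(d_model, d_model)
        self.output_proj = nn.Linear(d_model, d_model)

    def forward(self, query, reference_points, value, h, w):
        # query:            (N, Lq, C)  content features of the queries
        # reference_points: (N, Lq, 2)  normalized (x, y) in [0, 1]
        # value:            (N, H*W, C) flattened image feature map
        n, lq, c = query.shape
        v = self.value_proj(value).view(n, h, w, c).permute(0, 3, 1, 2)   # (N, C, H, W)

        # Offsets are predicted from the query and added to its reference point,
        # so each query looks at only n_points locations instead of all H*W keys.
        offsets = self.sampling_offsets(query).view(n, lq, self.n_points, 2)
        locs = reference_points[:, :, None, :] + offsets / torch.tensor([w, h], dtype=query.dtype)
        grid = 2 * locs - 1                                               # map [0, 1] -> [-1, 1] for grid_sample

        # Bilinear sampling at the predicted locations: (N, C, Lq, n_points)
        sampled = F.grid_sample(v, grid, mode='bilinear', align_corners=False)

        weights = self.attention_weights(query).softmax(-1)               # (N, Lq, n_points)
        out = (sampled * weights[:, None]).sum(-1).permute(0, 2, 1)       # (N, Lq, C)
        return self.output_proj(out)

# Toy usage: 300 queries attending over a 32x32 feature map.
attn = DeformableAttentionSketch()
q = torch.randn(2, 300, 256)
ref = torch.rand(2, 300, 2)
feat = torch.randn(2, 32 * 32, 256)
print(attn(q, ref, feat, 32, 32).shape)  # torch.Size([2, 300, 256])

The actual operator extends this to multiple heads and multiple feature levels and fuses the sampling in CUDA, but the per-query cost stays proportional to the number of sampling points rather than to the feature-map size.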
License
This project is released under the Apache 2.0 license.
Changelog
See changelog.md for detailed logs of major changes.
Citing Deformable DETR
If you find Deformable DETR useful in your research, please consider citing:
@article{zhu2020deformable,
title={Deformable DETR: Deformable Transformers for End-to-End Object Detection},
author={Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
journal={arXiv preprint arXiv:2010.04159},
year={2020}
}
Main Results
Note:
- All models of Deformable DETR are trained with a total batch size of 32.
- Training and inference speed are measured on an NVIDIA Tesla V100 GPU.
- "Deformable DETR (single scale)" means only using the res5 feature map (of stride 32) as the input feature map for the Deformable Transformer Encoder.
- "DC5" means removing the stride in the C5 stage of ResNet and adding a dilation of 2 instead (a minimal sketch follows these notes).
- "DETR-DC5+" indicates DETR-DC5 with some modifications, including using Focal Loss for bounding box classification and increasing the number of object queries to 300.
- "Batch Infer Speed" refers to inference with batch size = 4 to maximize GPU utilization.
- The original implementation is based on our internal codebase. There are slight differences in the final accuracy and running time due to the many details involved in the platform switch.
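As a concrete illustration of the "DC5" note above, torchvision's ResNet exposes a flag that replaces the stride-2 convolution in the last stage with a dilated one. This sketch only shows the effect on feature-map stride, not the exact backbone construction used in this repository.

import torch
import torchvision

# Standard ResNet-50: the C5 output has stride 32.
r50 = torchvision.models.resnet50(replace_stride_with_dilation=[False, False, False])
# DC5 variant: the stride in C5 is removed and a dilation of 2 is used instead,
# so the output feature map has stride 16 (twice the spatial resolution).
r50_dc5 = torchvision.models.resnet50(replace_stride_with_dilation=[False, False, True])

x = torch.randn(1, 3, 224, 224)
feats = torch.nn.Sequential(*list(r50.children())[:-2])(x)          # (1, 2048, 7, 7)
feats_dc5 = torch.nn.Sequential(*list(r50_dc5.children())[:-2])(x)  # (1, 2048, 14, 14)
print(feats.shape, feats_dc5.shape)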
Installation
Requirements
- Linux, CUDA>=9.2, GCC>=5.4
- Python>=3.7
We recommend using Anaconda to create a conda environment:
conda create -n deformable_detr python=3.7 pip
Then, activate the environment:
conda activate deformable_detr
- PyTorch>=1.5.1, torchvision>=0.6.1
For example, if your CUDA version is 9.2, you can install PyTorch and torchvision as follows:
conda install pytorch=1.5.1 torchvision=0.6.1 cudatoolkit=9.2 -c pytorch
- Other requirements
pip install -r requirements.txt
Compiling CUDA operators
cd ./models/ops
sh ./make.sh
# unit test (should see all checking is True)
python test.py
Usage
Dataset preparation
Please download the COCO 2017 dataset and organize it as follows:
code_root/
└── data/
└── coco/
├── train2017/
├── val2017/
└── annotations/
├── instances_train2017.json
└── instances_val2017.json
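A quick, optional sanity check of the layout above, assuming pycocotools is installed (the COCO evaluation in this codebase relies on it); the coco_root path below simply mirrors the tree shown here.

import os
from pycocotools.coco import COCO

coco_root = 'data/coco'
# Load the validation annotations and check that the first image file is present.
coco = COCO(os.path.join(coco_root, 'annotations', 'instances_val2017.json'))
img_info = coco.loadImgs(coco.getImgIds()[0])[0]
img_path = os.path.join(coco_root, 'val2017', img_info['file_name'])
print(len(coco.getImgIds()), 'val images; first file exists:', os.path.isfile(img_path))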
Training
Training on single node
For example, the command for training Deformable DETR on 8 GPUs is as follows:
GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 ./configs/r50_deformable_detr.sh
Training on multiple nodes
For example, the command for training Deformable DETR on 2 nodes, each with 8 GPUs, is as follows:
On node 1:
MASTER_ADDR=<IP address of node 1> NODE_RANK=0 GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 16 ./configs/r50_deformable_detr.sh
On node 2:
MASTER_ADDR=<IP address of node 1> NODE_RANK=1 GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 16 ./configs/r50_deformable_detr.sh
Training on slurm cluster
If you are using a Slurm cluster, you can simply run the following command to train on 1 node with 8 GPUs:
GPUS_PER_NODE=8 ./tools/run_dist_slurm.sh <partition> deformable_detr 8 configs/r50_deformable_detr.sh
Or 2 nodes of each with 8 GPUs:
GPUS_PER_NODE=8 ./tools/run_dist_slurm.sh <partition> deformable_detr 16 configs/r50_deformable_detr.sh
Some tips to speed-up training
- If your file system is slow to read images, you may consider enabling the '--cache_mode' option to load the whole dataset into memory at the beginning of training.
- You may increase the batch size to maximize GPU utilization, according to your GPU memory, e.g., set '--batch_size 3' or '--batch_size 4'.
Evaluation
You can get the config file and pretrained model of Deformable DETR (the link is in the "Main Results" section), then run the following command to evaluate it on the COCO 2017 validation set:
<path to config file> --resume <path to pre-trained model> --eval
You can also run distributed evaluation by using ./tools/run_dist_launch.sh or ./tools/run_dist_slurm.sh.