DRNet
2021-arXiv-Dynamic Resolution Network (06.05)
Source: ChenBong, 博客園 (cnblogs.com)
- Institute:Zhejiang University, Huawei Noah’s Ark Lab
- Author:Mingjian Zhu, Yunhe Wang
- GitHub:/
- Citation:/
Introduction
An instance-aware dynamic resolution network.
A resolution predictor (Conv+FC) module is placed before the backbone network: it takes the original image as input and outputs the resolution needed to recognize that image; the image is then resized to the predicted resolution and fed into the subsequent backbone network.
The resolution predictor adds computation, but reducing the resolution saves computation in the backbone (the saving must exceed the added overhead for a net gain).
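As a concrete illustration, here is a minimal PyTorch sketch of such a predictor: a small conv stack plus an FC head emitting logits over the m candidate resolutions. The channel counts and layer layout are assumptions for illustration only; the paper actually uses a small residual network whose design differs per backbone (see Setting below).

```python
import torch
import torch.nn as nn

class ResolutionPredictor(nn.Module):
    """Per-image resolution classifier (illustrative Conv+FC sketch).

    Outputs logits over m candidate resolutions; a softmax over these
    gives the soft probability vector p_r of Eq. (1) below.
    """
    def __init__(self, num_resolutions: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (B, 32, 1, 1)
        )
        self.fc = nn.Linear(32, num_resolutions)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))  # (B, m) logits

# usage: logits = ResolutionPredictor(3)(torch.randn(2, 3, 224, 224))
```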
Motivation

Contribution
Method

Resolution Predictor
Given an input sample \(X\), the resolution predictor \(R\) outputs a soft probability vector \(R(X)=p_{r}=\left[p_{r_{1}}, p_{r_{2}}, \ldots, p_{r_{m}}\right] \qquad(1)\)
A one-hot vector \(h\) is sampled from the soft probability vector \(p_r\), and the corresponding resolution is selected as the input to the backbone network for training.
The sampling step is not differentiable (gradients cannot backpropagate to the predictor's parameters), so the Gumbel-Softmax trick is used: the stochastic sampling node is moved outside the computation graph, so that the predictor parameters can also receive gradients:

e.g. the reparameterization trick: \(z\sim N(x, \phi^2)\) ==> \(\varepsilon \sim N(0,1), z=x+\varepsilon \cdot \phi\)
\(\mathbb{G}(R(X))=\mathbb{G}\left(p_{r}\right)=h=[h_1, h_2, ..., h_m] \qquad (2)\)
\(h_{j}=\frac{\exp \left(\left(\log \left(\pi_{j}\right)+g_{j}\right) / \tau\right)}{\sum_{k=1}^{m} \exp \left(\left(\log \left(\pi_{k}\right)+g_{k}\right) / \tau\right)} \qquad (3)\) , where \(g_j\) is Gumbel noise drawn from the Gumbel distribution, and \(\tau\) is a temperature coefficient (as \(\tau\) becomes very small, \(h\) approaches a one-hot vector)
The input at the selected resolution is fed to the subsequent backbone model: \(\hat{X}=\sum_{j=1}^{m} h_{j} X_{r_{j}} \qquad (4)\)
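A PyTorch sketch of Eqs. (1)-(4), using the built-in `torch.nn.functional.gumbel_softmax` with `hard=True` (one-hot \(h\) in the forward pass, soft gradients in the backward pass via the straight-through estimator). One simplification: tensors of different spatial sizes cannot literally be summed as in Eq. (4), so this sketch weights the backbone *outputs* by \(h_j\) instead, which is equivalent for a one-hot \(h\) but runs the backbone at all m resolutions; an efficient implementation would dispatch each sample only to its selected resolution.

```python
import torch
import torch.nn.functional as F

def drnet_forward(x, predictor, backbone, resolutions, tau=1.0):
    """Sample a resolution per image and run the backbone (sketch of Eqs. 1-4).

    x: (B, 3, H, W) input batch; resolutions: list of m sizes, e.g. [224, 168, 112].
    """
    logits = predictor(x)                                  # (B, m), cf. Eq. (1)
    # Gumbel-Softmax with straight-through estimator: one-hot in the
    # forward pass, soft gradients in the backward pass (Eqs. 2-3).
    h = F.gumbel_softmax(logits, tau=tau, hard=True)       # (B, m)
    outs = []
    for j, r in enumerate(resolutions):
        x_r = F.interpolate(x, size=(r, r), mode='bilinear',
                            align_corners=False)
        # h[:, j] is 0 or 1 per sample, so only the selected resolution
        # contributes; its gradient still reaches the predictor via h.
        outs.append(backbone(x_r) * h[:, j:j + 1])
    return torch.stack(outs).sum(dim=0), h                 # predictions, choices
```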
Resolution-aware BN
One private BN per resolution (note: this should have no effect during training; at inference time, no BN recalibration is needed)
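A minimal sketch of the RA-BN idea, assuming the index of the selected resolution is passed to the layer (how the paper wires this index through the backbone is not detailed here); only the BN layers are duplicated per resolution, while the convolution weights stay shared.

```python
import torch.nn as nn

class ResolutionAwareBN(nn.Module):
    """One private BatchNorm2d per candidate resolution (RA-BN sketch).

    The layer is told which resolution the current batch was resized to
    and routes activations through that resolution's BN, so each
    resolution keeps its own running statistics and affine parameters.
    """
    def __init__(self, num_features: int, num_resolutions: int = 3):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_resolutions)
        )

    def forward(self, x, res_idx: int):
        return self.bns[res_idx](x)
```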
Loss
CE Loss
\(L_{c e}=\mathcal{H}(\hat{y}, y) \qquad (5)\)
FLOPs regularization Loss
\(F=\sum_{j=1}^{m}\left(C_{j} \cdot h_{j}\right) \qquad (6)\) , where \(F\) is the actual inference FLOPs of a given sample, \(C_j\) is the FLOPs at the \(j\)-th resolution, \(h_j\) is the one-hot vector, and there are \(m\) candidate resolutions in total
\(L_{r e g}=\max \left(0, \frac{\mathbb{E}(F)-\alpha}{C_{\max }-C_{\min }}\right) \qquad (7)\) , where \(\alpha\) is the target FLOPs; a penalty is applied when the expected FLOPs exceed the target
Total Loss
\(L=L_{c e}+\eta L_{r e g} \qquad (8)\) , where \(\eta\) is a hyperparameter
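Putting Eqs. (5)-(8) together, a sketch of the training objective (argument names and shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def drnet_loss(outputs, targets, h, flops_per_res, alpha, eta):
    """Total loss L = L_ce + eta * L_reg (Eqs. 5-8).

    outputs: (B, num_classes) backbone predictions; h: (B, m) one-hot
    resolution choices from the Gumbel-Softmax; flops_per_res: (m,)
    tensor of C_j; alpha: target FLOPs budget; eta: regularization weight.
    """
    l_ce = F.cross_entropy(outputs, targets)                  # Eq. (5)
    flops = (h * flops_per_res).sum(dim=1)                    # Eq. (6), per sample
    c_max, c_min = flops_per_res.max(), flops_per_res.min()
    l_reg = torch.clamp((flops.mean() - alpha) / (c_max - c_min),
                        min=0.0)                              # Eq. (7)
    return l_ce + eta * l_reg                                 # Eq. (8)
```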
Experiments
Setting
- dataset
  - ImageNet 100
  - ImageNet 1k
- resolution
  - ResNet: [1.0x, 0.75x, 0.5x] of 224x224
  - MobileNetV2: [1.0x, 0.875x, 0.75x] of 224x224
- resolution predictor model
  - ResNet
    - 4-stage residual network, each stage contains one residual basic block
    - input resolution 128×128
    - FLOPs=300M
  - MobileNet
    - 4-stage residual network, each stage contains one inverted residual block
    - input resolution 64×64
    - FLOPs not stated &&
- pre-train the backbone model without the resolution predictor
  - ResNet
    - different resolutions
    - epoch=70, batch size=256, weight decay=1e-4, momentum=0.9
    - init lr=0.1, decayed by 10 every 20 epochs
  - MobileNet V2
    - follows the original paper (&& 250 epochs?)
- train the resolution predictor model && fine-tune the backbone model
  - ResNet
    - epoch=100
    - init lr=0.1, decayed by 10 every 30 epochs
  - MobileNet V2
    - follows the original paper (&& 250 epochs?)
- GPU: V100
ImageNet 100 (Ablation)

Table 1, rows 3/4: ablation on RA-BN
Table 2: ablation on the loss hyperparameter \(\eta\) and the FLOPs constraint \(\alpha\)
ImageNet 1k


Other
Comparison with filter pruning

Orthogonal to pruning methods; can be combined with them (Table 4)
Comparison with random sample

Latency

Real wall-clock speedup is achievable && how was the speed measured?
Visualization

Conclusion
Summary
cons:
- Resolution predictor module
  - the design is overly complex, and the computational overhead is large (300M FLOPs)
  - not unified (different backbones use different predictor structures)
- the training is not actually end-to-end
- questionable speed measurement in the latency experiment
- the Gumbel-Softmax only solves gradient backpropagation for sampling a one-hot vector from the probability distribution p; how does the resize step propagate the backbone's gradients back to the resolution predictor?
To Read
Reference
Gumbel-Softmax Trick and the Gumbel Distribution - initial_h - 博客園 (cnblogs.com)