Paper: Transfer Adaptation Learning: A Decade Survey
Author: Lei Zhang
Link: http://cn.arxiv.org/pdf/1903.04687.pdf
Introduction
In many practical settings, the source domain and the target domain exhibit:
- distribution mismatch
- domain shift
so the independent and identically distributed (i.i.d.) assumption no longer holds.
- Transfer learning assumes the source and target domains have different joint probability distributions.
- Domain adaptation assumes the source and target domains have different marginal probability distributions but the same conditional probability distribution.
Instance Re-weighting Adaptation
When the training and test sets come from different distributions, this is usually called sample selection bias or covariate shift.
Instance re-weighting methods infer resampling weights directly, in a non-parametric way, by matching the feature distributions across domains.
Intuition-Based Re-weighting
Re-weight the raw data directly.
First proposed in the NLP community [1]; the best-known method is TrAdaBoost [2].
Kernel-Mapping-Based Re-weighting
Map the raw data into a high-dimensional space (e.g., a reproducing kernel Hilbert space, RKHS) and re-weight there.
Distribution Matching
The main idea is to match the means of the source and target data in an RKHS by re-weighting the source samples.
Two non-parametric statistics are commonly used to measure the distribution discrepancy:
- Kernel mean matching (KMM): Huang et al. [3] first proposed adjusting the weight coefficients \(\beta\) of the source samples so that the MMD between the weighted source samples and the target samples is minimized.
- Weighted MMD: the method of [6] additionally accounts for class weight bias.
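A minimal sketch of KMM-style re-weighting (function names, the RBF bandwidth, and the SLSQP solver choice are illustrative, not from the survey): the source weights \(\beta\) are found by solving a quadratic program that minimizes the squared MMD between the weighted source samples and the target samples.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(X, Y, gamma=1.0):
    # pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmm_weights(Xs, Xt, gamma=1.0, B=10.0):
    """Estimate per-source-sample weights beta by minimizing the
    squared MMD between weighted source and target samples."""
    ns, nt = len(Xs), len(Xt)
    K = rbf(Xs, Xs, gamma)                        # source-source kernel
    kappa = rbf(Xs, Xt, gamma).sum(1) * ns / nt   # source-target term
    # QP objective: 0.5 beta^T K beta - kappa^T beta
    fun = lambda b: 0.5 * b @ K @ b - kappa @ b
    jac = lambda b: K @ b - kappa
    # weights bounded in [0, B]; total mass fixed at ns
    cons = {"type": "eq", "fun": lambda b: b.sum() - ns}
    res = minimize(fun, np.ones(ns), jac=jac,
                   bounds=[(0, B)] * ns, constraints=cons)
    return res.x
```

Source samples lying in regions of high target density should receive larger weights, approximating the importance ratio \(p_t(x)/p_s(x)\).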
Sample Selection
Representative methods include KMapWeighted [7], based on k-means clustering, and TJM [8], based on MMD and the \(\ell_{2,1}\)-norm.
Co-training
The main idea is to represent the data under two different views and let two classifiers learn independently, one from each view.
Representative methods include CODA [9] and the GAN-based RANN [10].
Feature Adaptation
Feature adaptation methods aim to find a common feature representation across multiple sources.
Feature-Subspace-Based
These methods assume the data can be represented by low-dimensional linear subspaces, i.e., a low-dimensional Grassmann manifold embedded in the high-dimensional data.
The subspaces are usually constructed with PCA, so that the source and target domains become two points on the manifold connected by a geodesic flow.
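As a concrete instance of the subspace idea, here is a sketch of closed-form subspace alignment in the style of [13], a simpler alternative to integrating the full geodesic flow (all names are illustrative):

```python
import numpy as np

def subspace_align(Src, Tgt, d=2):
    """Project each domain onto its top-d PCA basis, then align the
    source basis to the target basis with the closed-form map
    M = Xs^T Xt (optimal in Frobenius norm for orthonormal bases)."""
    def pca_basis(X, d):
        Xc = X - X.mean(0)
        # rows of Vt are principal directions; keep the top d as columns
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T
    Xs, Xt = pca_basis(Src, d), pca_basis(Tgt, d)
    M = Xs.T @ Xt                               # alignment matrix
    src_feat = (Src - Src.mean(0)) @ Xs @ M     # aligned source features
    tgt_feat = (Tgt - Tgt.mean(0)) @ Xt         # target features
    return src_feat, tgt_feat
```

When the two domains coincide, M reduces to the identity and the two feature sets agree exactly.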
Feature-Transformation-Based
Feature transformation methods learn a transformation or projection matrix that brings the source and target data closer under some distribution discrepancy measure.
Projection-Based
These methods solve for an optimal projection matrix that reduces both the marginal and the conditional distribution discrepancy across domains.
Representative methods include TCA [16] and JDA [17].
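A minimal linear-kernel sketch in the spirit of TCA (simplified from [16]; the toy regularization and all names are assumptions): the projection is obtained from a generalized eigenproblem that trades off small MMD between domains against large retained variance.

```python
import numpy as np
from scipy.linalg import eigh

def tca(Xs, Xt, dim=2, mu=1.0):
    """Learn a projection of the joint (linear) kernel that keeps
    variance high while keeping the cross-domain MMD low."""
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    X = np.vstack([Xs, Xt])
    K = X @ X.T                                  # linear kernel matrix
    e = np.r_[np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)]
    L = np.outer(e, e)                           # MMD coefficient matrix
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    A = K @ H @ K                                # variance term (maximize)
    B = K @ L @ K + mu * np.eye(n)               # MMD term + ridge (minimize)
    vals, vecs = eigh(A, B)                      # solve A w = lambda B w
    W = vecs[:, -dim:]                           # leading eigenvectors
    Z = K @ W                                    # embedded samples
    return Z[:ns], Z[ns:]
```

In the learned embedding, the gap between the source and target means is small relative to the overall spread of the data.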
Metric-Based
These methods learn a good distance metric on the labeled source domain that can be applied to a related but different target domain.
Representative methods include the robust transfer metric learning of [20].
Augmentation-Based
These methods assume the features fall into three types: common features, source-specific features, and target-specific features.
Representative methods include the "frustratingly easy" feature augmentation of Daumé III [22] and adversarial feature augmentation [23].
Feature-Reconstruction-Based
Representative methods include low-rank reconstruction [24] and LSDT [25].
Feature-Encoding-Based
Representative methods include domain-adaptive dictionary learning [26][27].
Classifier Adaptation
Classifier adaptation aims to learn a general classifier from the labeled source data together with a small amount of labeled target data.
Kernel-Classifier-Based
Representative methods include:
- the adaptive support vector machine (ASVM) [28]
- the domain transfer classifier based on multiple kernel learning (MKL) [29]
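ASVM's core idea is to learn a perturbation \(\Delta f\) on the few labeled target samples so that the adapted scorer is \(f_t = f_s + \Delta f\). A sketch with ridge regression standing in for the SVM machinery (a simplification, not the method of [28]; names are illustrative):

```python
import numpy as np

def adapt_classifier(ws, Xt_l, yt_l, lam=1.0):
    """Given source linear scorer f_s(x) = x @ ws and a few labeled
    target samples, learn a perturbation delta on the residuals and
    return the adapted weights ws + delta (ASVM-style)."""
    r = yt_l - Xt_l @ ws                     # what f_s gets wrong on target
    delta = np.linalg.solve(Xt_l.T @ Xt_l + lam * np.eye(Xt_l.shape[1]),
                            Xt_l.T @ r)     # ridge fit of the residual
    return ws + delta
```

The regularizer lam keeps the adapted scorer close to the source scorer, which is the point of adapting rather than retraining from scratch.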
Manifold-Regularization-Based
Representative methods include ARTL [30], DMM [31], and MEDA [32].
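In the spirit of ARTL [30], a sketch of manifold regularization: fit the labeled source data while penalizing predictions that vary sharply over a kNN graph built on both domains (a linear simplification with illustrative names and parameters, not the full ARTL framework):

```python
import numpy as np

def laplacian_reg_classifier(Xs, ys, Xt, gamma=1.0, lam=0.1, k=5):
    """Linear scorer fit on labeled source data with a graph-Laplacian
    smoothness penalty over source + target samples."""
    X = np.vstack([Xs, Xt])
    n = len(X)
    # kNN graph adjacency (symmetrized)
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    Wg = np.zeros((n, n))
    for i in range(n):
        Wg[i, np.argsort(d2[i])[:k]] = 1.0
    Wg = np.maximum(Wg, Wg.T)
    Lap = np.diag(Wg.sum(1)) - Wg                     # graph Laplacian
    # ridge loss on source labels + smoothness over the joint graph
    A = Xs.T @ Xs + gamma * X.T @ Lap @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, Xs.T @ ys)
```

The Laplacian term uses the unlabeled target samples, so the decision function is shaped by the geometry of both domains even though only source labels enter the loss.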
Bayesian-Classifier-Based
Representative methods include kernelized Bayesian transfer learning (KBTL) [33].
Deep Network Adaptation
In 2014, Yosinski et al. [34] studied how transferable the features of different layers of a deep neural network are.
Marginal Distribution Alignment
Representative methods include DDC [35] and DAN [36].
Conditional Distribution Alignment
Representative methods include the deep transfer network (DTN) [38].
Autoencoder-Based
Representative methods include the marginalized stacked denoising autoencoder (mSDA) [39].
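mSDA admits a closed-form layer because the expectation over feature dropout can be marginalized analytically instead of sampling corrupted copies. A simplified single-layer sketch following the structure of [39] (no bias row, illustrative names):

```python
import numpy as np

def msda_layer(X, p=0.5):
    """One marginalized denoising layer. X is features x samples;
    p is the feature-dropout probability. Returns tanh(W X), where
    W = E[P] E[Q]^{-1} solves the expected reconstruction problem."""
    d = X.shape[0]
    S = X @ X.T                         # scatter matrix
    q = np.full(d, 1.0 - p)             # survival probability per feature
    Q = S * np.outer(q, q)              # E[x_tilde x_tilde^T], off-diagonal
    np.fill_diagonal(Q, q * np.diag(S)) # diagonal survives with prob q_i
    P = S * q[None, :]                  # E[x x_tilde^T]
    W = P @ np.linalg.inv(Q + 1e-5 * np.eye(d))
    return np.tanh(W @ X)
```

With p = 0 there is no corruption, W reduces to (approximately) the identity, and the layer is just a tanh nonlinearity; stacking several such layers gives the deep representation.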
Adversarial Adaptation
Reduce the domain discrepancy through an adversarial objective (e.g., a domain discriminator).
Gradient-Reversal-Based
Ganin et al. [40] first showed that domain adaptation can be achieved by adding a simple gradient reversal layer (GRL).
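The GRL is the identity in the forward pass and multiplies incoming gradients by \(-\lambda\) in the backward pass, so minimizing the domain classifier's loss simultaneously pushes the feature extractor to *confuse* that classifier. A framework-free sketch of just the layer:

```python
import numpy as np

class GradReversal:
    """Gradient reversal layer: identity forward, gradient scaled by
    -lam backward (the trick of Ganin & Lempitsky [40])."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # pass features through unchanged
        return x

    def backward(self, grad_out):
        # flip and scale the gradient flowing back to the extractor
        return -self.lam * grad_out
```

In an autodiff framework the same effect is obtained with a custom op whose backward pass returns the negated, scaled gradient, inserted between the feature extractor and the domain discriminator.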
Minimax-Optimization-Based
Ajakan et al. [41] first combined a classification loss with an adversarial objective, proposing DANN.
Other methods include ADDA [42], CDAN [43], and MCD [44].
GAN-Based
Representative methods include CyCADA [45] and DupGAN [46].
Benchmark Datasets
- Office-31 (3DA)
- Office+Caltech-10 (4DA)
- MNIST+USPS
- Multi-PIE
- COIL-20
- MSRC+VOC2007
- IVLSC
- Cross-dataset Testbed
- Office-Home (new)
- ImageCLEF
- P-A-C-S (new)
References
[1] J. Jiang and C. Zhai, "Instance weighting for domain adaptation in NLP," in ACL, 2007, pp. 264–271.
[2] W. Dai, Q. Yang, G. R. Xue, and Y. Yu, "Boosting for transfer learning," in ICML, 2007, pp. 193–200.
[3] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Scholkopf, "Correcting sample selection bias by unlabeled data," in NIPS, 2007, pp. 1–8.
[4] A. Gretton, K. Borgwardt, M. Rasch, B. Schoelkopf, and A. Smola, "A kernel method for the two-sample-problem," in NIPS, 2006.
[5] A. Gretton, K. Borgwardt, M. Rasch, B. Scholkopf, and A. Smola, "A kernel two-sample test," Journal of Machine Learning Research, pp. 723–773, 2012.
[6] H. Yan, Y. Ding, P. Li, Q. Wang, Y. Xu, and W. Zuo, "Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation," in CVPR, 2017, pp. 2272–2281.
[7] E. H. Zhong, W. Fan, J. Peng, K. Zhang, J. Ren, D. S. Turaga, and O. Verscheure, "Cross domain distribution adaptation via kernel mapping," in ACM SIGKDD, 2009, pp. 1027–1036.
[8] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, "Transfer joint matching for unsupervised domain adaptation," in CVPR, 2014, pp. 1410–1417.
[9] M. Chen, K. Q. Weinberger, and J. C. Blitzer, "Co-training for domain adaptation," in NIPS, 2011.
[10] Q. Chen, Y. Liu, Z. Wang, I. Wassell, and K. Chetty, "Re-weighted adversarial adaptation network for unsupervised domain adaptation," in CVPR, 2018, pp. 7976–7985.
[11] R. Gopalan, R. Li, and R. Chellappa, "Domain adaptation for object recognition: An unsupervised approach," in ICCV, 2011, pp. 999–1006.
[12] B. Gong, Y. Shi, F. Sha, and K. Grauman, "Geodesic flow kernel for unsupervised domain adaptation," in CVPR, 2012, pp. 2066–2073.
[13] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars, "Unsupervised visual domain adaptation using subspace alignment," in ICCV, 2013, pp. 2960–2967.
[14] B. Sun and K. Saenko, "Subspace distribution alignment for unsupervised domain adaptation," in BMVC, 2015, pp. 24.1–24.10.
[15] J. Liu and L. Zhang, "Optimal projection guided transfer hashing for image retrieval," in AAAI, 2018.
[16] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, "Domain adaptation via transfer component analysis," IEEE Trans. Neural Networks, vol. 22, no. 2, p. 199, 2011.
[17] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, "Transfer feature learning with joint distribution adaptation," in ICCV, 2014, pp. 2200–2207.
[18] S. Si, D. Tao, and B. Geng, "Bregman divergence-based regularization for transfer subspace learning," IEEE Trans. Knowledge and Data Engineering, vol. 22, no. 7, pp. 929–942, 2010.
[19] A. Gretton, O. Bousquet, A. Smola, and B. Scholkopf, "Measuring statistical dependence with Hilbert-Schmidt norms," in ALT, 2005.
[20] Z. Ding and Y. Fu, "Robust transfer metric learning for image classification," IEEE Trans. Image Processing, vol. 26, no. 2, pp. 660–670, 2017.
[21] B. Sun, J. Feng, and K. Saenko, "Return of frustratingly easy domain adaptation," in AAAI, 2016, pp. 153–171.
[22] H. Daumé III, "Frustratingly easy domain adaptation," arXiv, 2009.
[23] R. Volpi, P. Morerio, S. Savarese, and V. Murino, "Adversarial feature augmentation for unsupervised domain adaptation," in CVPR, 2018, pp. 5495–5504.
[24] I. H. Jhuo, D. Liu, D. T. Lee, and S. F. Chang, "Robust visual domain adaptation with low-rank reconstruction," in CVPR, 2012, pp. 2168–2175.
[25] L. Zhang, W. Zuo, and D. Zhang, "LSDT: Latent sparse domain transfer learning for visual adaptation," IEEE Trans. Image Processing, vol. 25, no. 3, pp. 1177–1191, 2016.
[26] S. Shekhar, V. Patel, H. Nguyen, and R. Chellappa, "Generalized domain-adaptive dictionaries," in CVPR, 2013, pp. 361–368.
[27] F. Zhu and L. Shao, "Weakly-supervised cross-domain dictionary learning for visual recognition," International Journal of Computer Vision, vol. 109, no. 1-2, pp. 42–59, 2014.
[28] J. Yang, R. Yan, and A. G. Hauptmann, "Cross-domain video concept detection using adaptive SVMs," in ACM MM, 2007, pp. 188–197.
[29] L. Duan, I. Tsang, D. Xu, and S. Maybank, "Domain transfer SVM for video concept detection," in CVPR, 2009.
[30] M. Long, J. Wang, G. Ding, S. Pan, and P. Yu, "Adaptation regularization: A general framework for transfer learning," IEEE Trans. Knowledge and Data Engineering, vol. 26, no. 5, pp. 1076–1089, 2014.
[31] Y. Cao, M. Long, and J. Wang, "Unsupervised domain adaptation with distribution matching machines," in AAAI, 2018.
[32] J. Wang, W. Feng, Y. Chen, H. Yu, M. Huang, and P. S. Yu, "Visual domain adaptation with manifold embedded distribution alignment," 2018.
[33] M. Gonen and A. Margolin, "Kernelized Bayesian transfer learning," in AAAI, 2014, pp. 1831–1839.
[34] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?" in NIPS, 2014.
[35] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, "Deep domain confusion: Maximizing for domain invariance," arXiv, 2014.
[36] M. Long, Y. Cao, J. Wang, and M. I. Jordan, "Learning transferable features with deep adaptation networks," in ICML, 2015, pp. 97–105.
[37] M. Long, H. Zhu, J. Wang, and M. Jordan, "Deep transfer learning with joint adaptation networks," in ICML, 2017.
[38] X. Zhang, F. Yu, S. Wang, and S. Chang, "Deep transfer network: Unsupervised domain adaptation," arXiv, 2015.
[39] M. Chen, Z. Xu, K. Weinberger, and F. Sha, "Marginalized denoising autoencoders for domain adaptation," in ICML, 2012.
[40] Y. Ganin and V. Lempitsky, "Unsupervised domain adaptation by backpropagation," arXiv, 2015.
[41] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand, "Domain-adversarial neural network," arXiv, 2015.
[42] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, "Adversarial discriminative domain adaptation," in CVPR, 2017, pp. 7167–7176.
[43] M. Long, Z. Cao, J. Wang, and M. I. Jordan, "Conditional adversarial domain adaptation," in NIPS, 2018.
[44] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada, "Maximum classifier discrepancy for unsupervised domain adaptation," in CVPR, 2018, pp. 3723–3732.
[45] J. Hoffman, E. Tzeng, T. Park, and J. Zhu, "CyCADA: Cycle-consistent adversarial domain adaptation," in ICML, 2018.
[46] L. Hu, M. Kan, S. Shan, and X. Chen, "Duplex generative adversarial network for unsupervised domain adaptation," in CVPR, 2018, pp. 1498–1507.