Meta-Learning: MAML, Reptile, and ANIL
Author: 凱魯嘎吉 - 博客園 http://www.cnblogs.com/kailugaji/
An earlier post, "Meta-Learning: From MAML to MAML++", introduced meta-learning; this post builds on it and looks more closely at what MAML is actually doing. The central question is why MAML learns so effectively: is it rapid learning, i.e., it learns a powerful initialization from which each new task is quickly adapted, or feature reuse, i.e., the learned initialization is already close to each task's final solution, so the inner-loop adaptation changes little? This question leads to ANIL (Almost No Inner Loop). We then read Reptile (On First-Order Meta-Learning Algorithms), another meta-learning method, and compare MAML, Reptile, and model pre-training.
1. Meta Learning vs Machine Learning

2. MAML vs Model Pre-Training

3. MAML: Feature Reuse

4. MAML vs ANIL

MAML's goal is to learn an initialization θ such that a single gradient update is enough to reach the best possible performance on a new task (a minimal sketch is given after this list).
Inner loop: updates the task-adapted parameters for one specific task (task-specific adaptation).
Outer loop: updates the shared model parameters over the whole task space (the meta-initialization).
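To make the two loops concrete, here is a minimal first-order MAML (FOMAML) sketch in PyTorch on toy sine-wave regression. The task distribution, network size, learning rates, and the use of the first-order approximation (instead of back-propagating through the inner update) are illustrative assumptions, not the exact setup of the papers cited below. ANIL keeps the same outer loop but restricts the inner loop to the final (head) layer, freezing the feature-extractor layers.

import copy
import torch
import torch.nn as nn

def sample_task():
    # One task = regress y = a*sin(x + b) with a random amplitude and phase.
    a = torch.rand(1) * 4.9 + 0.1
    b = torch.rand(1) * 3.14
    def draw(n):
        x = torch.rand(n, 1) * 10 - 5
        return x, a * torch.sin(x + b)
    return draw

net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(),
                    nn.Linear(40, 40), nn.ReLU(),
                    nn.Linear(40, 1))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)   # outer-loop optimizer
inner_lr, inner_steps, tasks_per_batch = 0.01, 1, 4      # illustrative hyperparameters

for it in range(1000):                        # outer loop: update the meta-initialization
    meta_opt.zero_grad()
    for _ in range(tasks_per_batch):
        draw = sample_task()
        fast = copy.deepcopy(net)             # task-specific copy, starts from the meta-init
        x_s, y_s = draw(10)                   # support set
        for _ in range(inner_steps):          # inner loop: adapt to this one task
            # (ANIL would update only the last Linear layer here and freeze the rest.)
            loss = nn.functional.mse_loss(fast(x_s), y_s)
            grads = torch.autograd.grad(loss, fast.parameters())
            with torch.no_grad():
                for p, g in zip(fast.parameters(), grads):
                    p -= inner_lr * g
        x_q, y_q = draw(10)                   # query set from the same task
        query_loss = nn.functional.mse_loss(fast(x_q), y_q)
        # First-order approximation: gradients of the query loss w.r.t. the adapted
        # weights are accumulated directly onto the meta-parameters.
        grads = torch.autograd.grad(query_loss, fast.parameters())
        for p, g in zip(net.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()                           # outer-loop update of the shared initialization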
5. Reptile: On First-Order Meta-Learning Algorithms

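Reptile removes MAML's explicit query-set meta-gradient step: for each sampled task it simply runs k ordinary SGD steps from the current initialization and then moves the initialization a small step toward the adapted weights, θ ← θ + ε(θ_task − θ). Below is a minimal sketch that reuses sample_task() and net from the FOMAML sketch above; the outer step size, inner learning rate, and k are illustrative assumptions.

reptile_lr, k = 0.1, 5                        # illustrative outer step size and inner step count
for it in range(1000):
    draw = sample_task()                      # sample one task per outer iteration
    fast = copy.deepcopy(net)                 # start from the current initialization
    inner_opt = torch.optim.SGD(fast.parameters(), lr=0.01)
    x, y = draw(10)
    for _ in range(k):                        # k ordinary gradient steps on this task
        inner_opt.zero_grad()
        nn.functional.mse_loss(fast(x), y).backward()
        inner_opt.step()
    with torch.no_grad():                     # move the initialization toward the adapted weights
        for p, q in zip(net.parameters(), fast.parameters()):
            p += reptile_lr * (q - p)         # θ ← θ + ε(θ_task − θ)

In the batched variant described in the Reptile paper, several tasks are adapted from the same initialization and the average of (θ_task − θ) over the batch is used as the update direction.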
6. MAML, Model Pre-Training, and Reptile

7. References
[1] GitHub - Fafa-DL/Lhy_Machine_Learning: slides and assignments for Hung-yi Lee's Spring 2021 machine learning course. https://github.com/Fafa-DL/Lhy_Machine_Learning
[2] Finn, C., Abbeel, P., & Levine, S. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML, 2017.
[3] Raghu, A., Raghu, M., Bengio, S., & Vinyals, O. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. ICLR, 2020.
[4] Nichol, A., Achiam, J., & Schulman, J. On First-Order Meta-Learning Algorithms. arXiv preprint arXiv:1803.02999, 2018.
[5] Nichol, A., & Schulman, J. Reptile: A Scalable Meta-Learning Algorithm. arXiv preprint arXiv:1803.02999, 2018.
[6] Reptile: A Scalable Meta-Learning Algorithm. OpenAI Blog. https://openai.com/blog/reptile/
[7] An Introduction to Meta-Learning (with Code). Zhihu. https://zhuanlan.zhihu.com/p/136975128
