Hi everyone, I'm Duibai.
2021 is almost over. How many papers did you read this year?
Although the world is still recovering from the disruption of the COVID-19 pandemic, and people can't gather offline to discuss the latest research as often as they used to, AI research hasn't slowed its rapid pace.
Canadian blogger Louis Bouchard has compiled nearly 40 must-read papers of 2021, ordered by release date. Overall, the collection leans toward computer vision.
In this roughly 16-minute video, you can get a quick overview of all of them:
[Video: "40 AI Papers You Can't Miss from 2021 — Have You Read Them All?", 16:08]
Details for each paper are below:
**1. DALL·E: Zero-Shot Text-to-Image Generation from OpenAI**
Paper: https://arxiv.org/pdf/2102.12092.pdf
Code: https://github.com/openai/DALL-E
Video explanation: https://youtu.be/DJToDLBPovg
**2. VOGUE: Try-On by StyleGAN Interpolation Optimization**
Paper: https://vogue-try-on.github.io/static_files/resources/VOGUE-virtual-try-on.pdf
Video explanation: https://youtu.be/i4MnLJGZbaM
**3. Taming Transformers for High-Resolution Image Synthesis**
Paper: https://compvis.github.io/taming-transformers/
Code: https://github.com/CompVis/taming-transformers
Video explanation: https://youtu.be/JfUTd8fjtX8
**4. Thinking Fast And Slow in AI**
Paper: https://arxiv.org/abs/2010.06002
Video explanation: https://youtu.be/3nvAaVSQxs4
**5. Automatic detection and quantification of floating marine macro-litter in aerial images**
Paper: https://doi.org/10.1016/j.envpol.2021.116490
Code: https://github.com/amonleong/MARLIT
Video explanation: https://youtu.be/2dTSsdW0WYI
**6. ShaRF: Shape-conditioned Radiance Fields from a Single View**
Paper: https://arxiv.org/abs/2102.08860
Code: http://www.krematas.com/sharf/index.html
Video explanation: https://youtu.be/gHkkrNMlGNg
**7. Generative Adversarial Transformers**
Paper: https://arxiv.org/pdf/2103.01209.pdf
Code: https://github.com/dorarad/gansformer
Video explanation: https://youtu.be/HO-_t0UArd4
**8. We Asked Artificial Intelligence to Create Dating Profiles. Would You Swipe Right?**
Paper: https://studyonline.unsw.edu.au/blog/ai-generated-dating-profile
Code: https://colab.research.google.com/drive/1VLG8e7YSEwypxU-noRNhsv5dW4NfTGce#forceEdit=true&sandboxMode=true&scrollTo=aeXshJM-Cuaf
Video explanation: https://youtu.be/IoRH5u13P-4
**9. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows**
Paper: https://arxiv.org/abs/2103.14030v2
Code: https://github.com/microsoft/Swin-Transformer
Video explanation: https://youtu.be/QcCJJOLCeJQ
**10. Image GANs Meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering**
Paper: https://arxiv.org/pdf/2010.09125.pdf
Video explanation: https://youtu.be/dvjwRBZ3Hnw
**11. Deep nets: What have they ever done for vision?**
Paper: https://arxiv.org/abs/1805.04025
Video explanation: https://youtu.be/GhPDNzAVNDk
**12. Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image**
Paper: https://arxiv.org/pdf/2012.09855.pdf
Code: https://github.com/google-research/google-research/tree/master/infinite_nature
Video explanation: https://youtu.be/NIOt1HLV_Mo
Try it online: https://colab.research.google.com/github/google-research/google-research/blob/master/infinite_nature/infinite_nature_demo.ipynb#scrollTo=sCuRX1liUEVM
**13. Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control**
Paper: https://arxiv.org/abs/2103.13452
Video explanation: https://youtu.be/wNBrCRzlbVw
**14. Total Relighting: Learning to Relight Portraits for Background Replacement**
Paper: https://augmentedperception.github.io/total_relighting/total_relighting_paper.pdf
Video explanation: https://youtu.be/rVP2tcF_yRI
**15. LASR: Learning Articulated Shape Reconstruction from a Monocular Video**
Paper: https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_LASR_Learning_Articulated_Shape_Reconstruction_From_a_Monocular_Video_CVPR_2021_paper.pdf
Code: https://github.com/google/lasr
Video explanation: https://youtu.be/lac7wqjS-8E
**16. Enhancing Photorealism Enhancement**
Paper: http://vladlen.info/papers/EPE.pdf
Code: https://github.com/isl-org/PhotorealismEnhancement
Video explanation: https://youtu.be/3rYosbwXm1w
**17. DefakeHop: A Light-Weight High-Performance Deepfake Detector**
Paper: https://arxiv.org/abs/2103.06929
Video explanation: https://youtu.be/YMir8sRWRos
**18. High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network**
Paper: https://arxiv.org/pdf/2105.09188.pdf
Code: https://github.com/csjliang/LPTN
Video explanation: https://youtu.be/X7WzlAyUGPo
**19. Barbershop: GAN-based Image Compositing using Segmentation Masks**
Paper: https://arxiv.org/pdf/2106.01505.pdf
Code: https://github.com/ZPdesu/Barbershop
Video explanation: https://youtu.be/HtqYMvBVJD8
**20. TextStyleBrush: Transfer of text aesthetics from a single example**
Paper: https://arxiv.org/abs/2106.08385
Code: https://github.com/facebookresearch/IMGUR5K-Handwriting-Dataset?fbclid=IwAR0pRAxhf8Vg-5H3fA0BEaRrMeD21HfoCJ-so8V0qmWK7Ub21dvy_jqgiVo
Video explanation: https://youtu.be/hhAri5fl-XI
**21. Animating Pictures with Eulerian Motion Fields**
Paper: https://arxiv.org/abs/2011.15128
Code: https://eulerian.cs.washington.edu/
Video explanation: https://youtu.be/KgTa2r7d0I0
**22. CVPR 2021 Best Paper Award: GIRAFFE - Controllable Image Generation**
Paper: http://www.cvlibs.net/publications/Niemeyer2021CVPR.pdf
Code: https://github.com/autonomousvision/giraffe
Video explanation: https://youtu.be/JIJkURAkCxM
**23. GitHub Copilot & Codex: Evaluating Large Language Models Trained on Code**
Paper: https://arxiv.org/pdf/2107.03374.pdf
Code: https://copilot.github.com/
Video explanation: https://youtu.be/az3oVVkTFB8
**24. Apple: Recognizing People in Photos Through Private On-Device Machine Learning**
Paper: https://machinelearning.apple.com/research/recognizing-people-photos
Video explanation: https://youtu.be/LIV-M-gFRFA
**25. Image Synthesis and Editing with Stochastic Differential Equations**
Paper: https://arxiv.org/pdf/2108.01073.pdf
Code: https://github.com/ermongroup/SDEdit
Video explanation: https://youtu.be/xoEkSWJSm1k
Try it online: https://colab.research.google.com/drive/1KkLS53PndXKQpPlS1iK-k1nRQYmlb4aO?usp=sharing
**26. Sketch Your Own GAN**
Paper: https://arxiv.org/abs/2108.02774
Code: https://github.com/PeterWang512/GANSketching
Video explanation: https://youtu.be/vz_wEQkTLk0
**27. Tesla's Autopilot Explained**
At Tesla AI Day this August, Tesla's Director of AI Andrej Karpathy and others presented how Tesla built its vision-based Autopilot system from images captured by the car's eight cameras.
Video explanation: https://youtu.be/DTHqgDqkIRw
**28. StyleCLIP: Text-driven manipulation of StyleGAN imagery**
Paper: https://arxiv.org/abs/2103.17249
Code: https://github.com/orpatashnik/StyleCLIP
Video explanation: https://youtu.be/RAXrwPskNso
Try it online: https://colab.research.google.com/github/orpatashnik/StyleCLIP/blob/main/notebooks/StyleCLIP_global.ipynb
**29. TimeLens: Event-based Video Frame Interpolation**
Paper: http://rpg.ifi.uzh.ch/docs/CVPR21_Gehrig.pdf
Code: https://github.com/uzh-rpg/rpg_timelens
Video explanation: https://youtu.be/HWA0yVXYRlk
**30. Diverse Generation from a Single Video Made Possible**
Paper: https://arxiv.org/abs/2109.08591
Code: https://nivha.github.io/vgpnn/
Video explanation: https://youtu.be/Uy8yKPEi1dg
**31. Skillful Precipitation Nowcasting using Deep Generative Models of Radar**
Paper: https://www.nature.com/articles/s41586-021-03854-z
Code: https://github.com/deepmind/deepmind-research/tree/master/nowcasting
Video explanation: https://youtu.be/dlSIq64psEY
**32. The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks**
Paper: https://arxiv.org/pdf/2110.09958.pdf
Code: https://cocktail-fork.github.io/
Video explanation: https://youtu.be/Rpxufqt5r6I
**33. ADOP: Approximate Differentiable One-Pixel Point Rendering**
Paper: https://arxiv.org/pdf/2110.06635.pdf
Code: https://github.com/darglein/ADOP
Video explanation: https://youtu.be/Jfph7Vld_Nw
**34. (Style)CLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis**
CLIPDraw paper: https://arxiv.org/abs/2106.14843
Try it online: https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb
StyleCLIPDraw paper: https://arxiv.org/abs/2111.03133
Try it online: https://colab.research.google.com/github/pschaldenbrand/StyleCLIPDraw/blob/master/Style_ClipDraw.ipynb
Video explanation: https://youtu.be/5xzcIzHm8Wo
**35. SwinIR: Image restoration using swin transformer**
Paper: https://arxiv.org/abs/2108.10257
Code: https://github.com/JingyunLiang/SwinIR
Video explanation: https://youtu.be/GFm3RfrtDoU
Try it online: https://replicate.ai/jingyunliang/swinir
**36. EditGAN: High-Precision Semantic Image Editing**
Paper: https://arxiv.org/abs/2111.03186
Code: https://nv-tlabs.github.io/editGAN/
Video explanation: https://youtu.be/bus4OGyMQec
**37. CityNeRF: Building NeRF at City Scale**
Paper: https://arxiv.org/pdf/2112.05504.pdf
Code: https://city-super.github.io/citynerf/
Video explanation: https://youtu.be/swfx0bJMIlY
**38. ClipCap: CLIP Prefix for Image Captioning**
Paper: https://arxiv.org/abs/2111.09734
Code: https://github.com/rmokady/CLIP_prefix_caption
Video explanation: https://youtu.be/VQDrmuccWDo
Try it online: https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing
Of course, no such compilation can claim to be complete. As readers have pointed out, one breakthrough deserves to be added to the list by hand: AlphaFold.
Last year, DeepMind, Google's AI research company, announced that its deep learning system AlphaFold had cracked the fifty-year-old problem of protein folding. In July 2021, the AlphaFold paper was formally published in Nature.
Paper: https://www.nature.com/articles/s41586-021-03819-2
The work was named Nature's technology breakthrough of the year, and John Jumper, one of AlphaFold's creators, was named one of Nature's ten people who shaped science in 2021. DeepMind has also made its structure predictions freely available to the public.
Which paper impressed you most in 2021?
If you found this useful, please share it with your friends!
Finally, you're welcome to follow my WeChat official account, 對白的算法屋 (duibainotes), where I track frontier work in NLP, recommender systems, contrastive learning, and other areas of machine learning, and also share my startup experiences and reflections on life. If you'd like to discuss further, you can add me on WeChat through the account. Thanks!