Advances and Open Problems in Federated Learning


Starting a big project here — I'll come back and fill it in when I have time. This is the big survey I've been dying to work through (utterly exhausting)!

Formal notice: the original paper is the one named in the title. If this post infringes any rights, please contact the author and it will be taken down.

Project repository: https://github.com/open-intelligence/federated-learning-chinese

See the project repository for the full content, and feel free to raise questions on the project's issue tracker!


Abstract

  Federated learning (FL) is a machine learning setting in which many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the coordination of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and data minimization, and can mitigate many of the systemic privacy risks and costs of traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
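To make the setting concrete before diving into the survey: the canonical FL algorithm is Federated Averaging (FedAvg), in which the server repeatedly samples a subset of clients, each client runs a few steps of local SGD on its own data, and the server averages the returned weights weighted by local dataset size. The sketch below is my own minimal toy illustration on a one-parameter least-squares model (predict y = w·x); the function names, learning rate, and sampling scheme are illustrative assumptions, not the paper's code.

```python
import random

def local_sgd(w, data, lr=0.1, epochs=1):
    """One client's local training: plain SGD on squared error (w*x - y)^2.
    In real FL this would be several epochs of SGD on the client's private data."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def federated_averaging(client_datasets, rounds=20, clients_per_round=2):
    """Server loop: each round, sample clients, run local training on each,
    then average returned weights proportionally to local dataset size."""
    w_global = 0.0
    for _ in range(rounds):
        sampled = random.sample(client_datasets, clients_per_round)
        total = sum(len(d) for d in sampled)
        # Weighted average of locally trained models becomes the new global model.
        w_global = sum(len(d) / total * local_sgd(w_global, d) for d in sampled)
    return w_global
```

Note how the raw data never leaves the clients: only model weights travel to the server, which is exactly the property that Sections 4 and 5 of the survey then stress-test (weight updates can still leak information or be poisoned).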

 

Contents

1 Introduction

  1.1 The Cross-Device Federated Learning Setting

    1.1.1 The Lifecycle of a Model in Federated Learning

    1.1.2 A Typical Federated Training Process

  1.2 Federated Learning Research

  1.3 Organization

2 Relaxing the Core FL Assumptions: Applications to Emerging Settings and Scenarios

  2.1 Fully Decentralized / Peer-to-Peer Distributed Learning

    2.1.1 Algorithmic Challenges

    2.1.2 Practical Challenges

  2.2 Cross-Silo Federated Learning

  2.3 Split Learning

3 Improving Efficiency and Effectiveness

  3.1 Non-IID Data in Federated Learning

    3.1.1 Strategies for Dealing with Non-IID Data

  3.2 Optimization Algorithms for Federated Learning

    3.2.1 Optimization Algorithms and Convergence Rates for IID Datasets

    3.2.2 Optimization Algorithms and Convergence Rates for Non-IID Datasets

  3.3 Multi-Task Learning, Personalization, and Meta-Learning

    3.3.1 Personalization via Featurization

    3.3.2 Multi-Task Learning

    3.3.3 Local Fine Tuning and Meta-Learning

    3.3.4 When is a Global FL-trained Model Better?

  3.4 Adapting ML Workflows for Federated Learning

    3.4.1 Hyperparameter Tuning

    3.4.2 Neural Architecture Design

    3.4.3 Debugging and Interpretability for FL

  3.5 Communication and Compression

  3.6 Application To More Types of Machine Learning Problems and Models

4 Preserving the Privacy of User Data

  4.1 Actors, Threat Models, and Privacy in Depth

  4.2 Tools and Technologies

    4.2.1 Secure Computations

    4.2.2 Privacy-Preserving Disclosures

    4.2.3 Verifiability

  4.3 Protections Against External Malicious Actors

    4.3.1 Auditing the Iterates and Final Model

    4.3.2 Training with Central Differential Privacy

    4.3.3 Concealing the Iterates

    4.3.4 Repeated Analyses over Evolving Data

    4.3.5 Preventing Model Theft and Misuse

  4.4 Protections Against an Adversarial Server

    4.4.1 Challenges: Communication Channels, Sybil Attacks, and Selection

    4.4.2 Limitations of Existing Solutions

    4.4.3 Training with Distributed Differential Privacy

    4.4.4 Preserving Privacy While Training Sub-Models

  4.5 User Perception

    4.5.1 Understanding Privacy Needs for Particular Analysis Tasks

    4.5.2 Behavioral Research to Elicit Privacy Preferences

5 Robustness to Attacks and Failures

  5.1 Adversarial Attacks on Model Performance

    5.1.1 Goals and Capabilities of an Adversary

    5.1.2 Model Update Poisoning

    5.1.3 Data Poisoning Attacks

    5.1.4 Inference-Time Evasion Attacks

    5.1.5 Defensive Capabilities from Privacy Guarantees

  5.2 Non-Malicious Failure Modes

  5.3 Exploring the Tension between Privacy and Robustness

6 Ensuring Fairness and Addressing Sources of Bias

  6.1 Bias in Training Data

  6.2 Fairness Without Access to Sensitive Attributes

  6.3 Fairness, Privacy, and Robustness

  6.4 Leveraging Federation to Improve Model Diversity

  6.5 Federated Fairness: New Opportunities and Challenges

7 Concluding Remarks

A Software and Datasets for Federated Learning

