kokuro-asahi opened 8 months ago
Keywords | References | Link |
---|---|---|
Decentralized Federated Learning | Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges | link |
Decentralized Federated Learning | Decentralized Federated Learning: A Survey and Perspective | link |
Decentralized Federated Learning | AEDFL: Efficient Asynchronous Decentralized Federated Learning | link |
Poisoning Attacks | SCA: Sybil-Based Collusion Attacks of IIoT Data Poisoning in Federated Learning | link |
Poisoning Attacks | A Manifold Consistency Interpolation Method of Poisoning Attacks Against Semi-Supervised Model | link |
Poisoning Attacks | Dependable federated learning for IoT intrusion detection against poisoning attacks | link |
Poisoning Attacks | A Differentially Private Federated Learning Model Against Poisoning Attacks in Edge Computing | link |
Poisoning Attacks | Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning | link |
Poisoning Attacks | Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-Resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning | link |
Poisoning Attacks | FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling | link |
Poisoning Attacks | Untargeted attack against federated recommendation systems via poisonous item embeddings and the defense | link |
Utility-centric Threats | Threats, attacks, and defenses to federated learning: issues, taxonomy | link |
Utility-centric Threats | Survey on Federated Learning Threats: concepts, taxonomy on attacks and defenses | link |
Utility-centric Threats | A Critical Evaluation of Privacy and Security Threats in Federated Learning | link |
Utility-centric Threats | Threats, attacks and defenses to federated learning: issues | link |
Utility-centric Threats | Federated Learning Vulnerabilities, Threats and Defenses: A Survey | link |
Utility-centric Threats | A Detailed Survey on Federated Learning Attacks and Defenses | link |
Utility-centric Threats | Security and Privacy Threats to Federated Learning: Issues, Methods, and Challenges | link |
Model Noise Injection | On the Inherent Regularization Effects of Noise Injection | link |
Model Noise Injection | Improving the robustness of analog deep neural networks | link |
Model Noise Injection | Adaptive Gaussian Noise Injection Regularization for Neural Networks | link |
Model Noise Injection | Noise Injection as a Probe of Deep Learning Dynamics | link |
Model Noise Injection | NICE: Noise Injection and Clamping Estimation for Neural Network Quantization | link |
Model Noise Injection | Research on Neural Network Defense Problem Based on Random Noise Injection | link |
Defending | DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness | link |
Defending | Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks | link |
Model Replacement Attacks: gradually replace the parameters of the target model by submitting carefully crafted gradient updates.
Model Poisoning Attacks: inject malicious gradients so that the global model produces wrong predictions on specific inputs; used to influence the model's behavior on particular data samples, e.g., in recommender systems.
Sign-flipping Attacks: flip the signs of gradients to disrupt the training process, which can slow training or prevent the model from converging.
Label Flipping Attacks: a data-level attack that flips local labels, corrupting the locally computed gradients and, in turn, the global model; it requires control over only part of the data rather than the whole dataset.
Differential Privacy Attacks: exploit the differential-privacy noise added to the model to hide malicious updates, making them harder to detect.
GAN-based Attacks: use a GAN to generate adversarial gradients that deceive the model or cause training to fail.
Implementation Methods (Model Replacement Attacks):
Gradient Direction-Based Attack: Formula: (g_{attack} = -g_{benign} + \text{noise})
Model Fine-tuning-Based Attack: Formula: (g_{attack} = \nabla L(model_{target}, data_{malicious}))
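The gradient direction-based formula above can be sketched in a few lines of numpy; the function name, noise scale, and toy gradient values are illustrative assumptions, not part of any cited attack implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_direction_attack(g_benign, noise_scale=0.1):
    # g_attack = -g_benign + noise: invert the benign gradient and
    # add Gaussian noise so the update looks less mechanically uniform.
    noise = rng.normal(0.0, noise_scale, size=g_benign.shape)
    return -g_benign + noise

# Toy benign gradient (illustrative values).
g = np.array([0.5, -1.2, 3.0])
g_adv = gradient_direction_attack(g)
```

With `noise_scale=0` the attack reduces to an exact negation of the benign gradient.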
Implementation Methods (Model Poisoning Attacks):
Label Flipping: Formula: (g_{attack} = \nabla L(f(x_{malicious}), y_{flip}))
Optimization-Based Attack: Formula: (g_{attack} = \arg\max_{g} [L(model + g, data_{benign}) - L(model + g, data_{malicious})])
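To make the label-flipping formula concrete, here is a minimal sketch using a logistic-regression gradient; the model, toy data, and the flipped labels (1 − y) are invented for illustration.

```python
import numpy as np

def logistic_grad(w, X, y):
    # Gradient of the mean logistic loss L(f(x), y) with respect to w.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Toy binary-classification data (illustrative).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])
w = np.zeros(2)

g_benign = logistic_grad(w, X, y)
g_attack = logistic_grad(w, X, 1.0 - y)  # y_flip: flipped binary labels
```

At w = 0 the predicted probabilities are all 0.5, so flipping every label exactly negates the gradient; with a trained w the effect is similar in direction but not an exact negation.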
Implementation Methods (Sign-flipping Attacks):
Full Sign Flip: Formula: (g_{attack} = -g_{benign})
Random Sign Flip: Formula: (g_{attack} = (2\,\text{random\_bit}() - 1) \cdot g_{benign})
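Both sign-flip variants reduce to one-liners in numpy; the RNG seed and toy gradient are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def full_sign_flip(g_benign):
    # g_attack = -g_benign
    return -g_benign

def random_sign_flip(g_benign):
    # 2 * random_bit() - 1 yields +1 or -1 independently per coordinate.
    signs = 2 * rng.integers(0, 2, size=g_benign.shape) - 1
    return signs * g_benign

g = np.array([1.0, -2.0, 0.5])
```

The random variant preserves each coordinate's magnitude while scrambling its direction, which makes it harder to filter by norm alone.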
Implementation Methods (Label Flipping Attacks): similar to the label-flipping method described under model poisoning attacks above.
Implementation Methods (Differential Privacy Attacks):
Simulating Differential Privacy Noise: Formula: (g_{attack} = g_{benign} + \text{Laplace}(\mu, b))
Optimization-Based Noise Injection: Formula: (g_{attack} = \arg\max_{g} L(model + g, data_{malicious}))
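The Laplace-masking formula can be sketched as follows; numpy's `rng.laplace` draws the Laplace(μ, b) noise, and the parameter values and toy gradient are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def dp_masked_update(g_benign, mu=0.0, b=0.05):
    # g_attack = g_benign + Laplace(mu, b): add noise shaped like
    # differential-privacy noise so the update blends in with honest ones.
    noise = rng.laplace(mu, b, size=g_benign.shape)
    return g_benign + noise

g = np.array([0.2, -0.7, 1.4])
g_adv = dp_masked_update(g)
```

Because honest clients are expected to submit noisy updates under a DP mechanism, an anomaly detector tuned to that noise distribution has little signal to flag this update.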
Implementation Methods (GAN-based Attacks):
GAN-based Gradient Generation: Formula: (g_{attack} = G(z))
Adversarial Sample Generation: Formula: (x_{adv} = x + \epsilon \cdot \text{sign}(\nabla_x L(model, x, y)))
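The adversarial-sample formula is the fast gradient sign method (FGSM); below is a minimal numpy sketch in which the input and gradient values are invented and the loss gradient is assumed to be precomputed by the model.

```python
import numpy as np

def fgsm(x, grad_x, epsilon=0.1):
    # x_adv = x + epsilon * sign(grad_x L): perturb each input feature
    # by epsilon in the direction that increases the loss.
    return x + epsilon * np.sign(grad_x)

x = np.array([0.2, 0.5, 0.9])        # illustrative input
grad_x = np.array([0.3, -0.1, 0.0])  # illustrative loss gradient w.r.t. x
x_adv = fgsm(x, grad_x)
```

Note that coordinates with zero gradient are left unchanged, since sign(0) = 0.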
Attack Model/Characteristic | Targeted | Stealthiness | Model Dependency | Data Dependency | Ease of Detection |
---|---|---|---|---|---|
Model Replacement Attack | Possibly | Low | Medium | Low | Low |
Model Poisoning Attack | Possibly | Medium | High | High | Medium |
Sign-flipping Attack | Usually Not | High | Low | Low | High |
Label Flipping Attack | Possibly | Medium | Low | High | Medium |
Differential Privacy Attack | Possibly | High | Low | Low | High |
GAN-based Attack | High | High | High | Medium | Low |
Model Replacement Attack: countermeasures: model validation and anomaly detection.
Model Poisoning Attack: countermeasures: secure multi-party computation and differential privacy.
Sign-flipping Attack: countermeasures: gradient clipping and outlier detection.
Label Flipping Attack: countermeasures: data consistency checks and supervised learning.
Differential Privacy Attack: countermeasures: strengthened differential privacy mechanisms.
GAN-based Attack: countermeasures: adversarial training and improved model robustness.
Krum: selects as the global update the single update whose sum of squared distances to all other updates is smallest. With ( n ) updates, Krum chooses update ( i ) as ( \text{argmin}_i \sum_{j \neq i} \|x_i - x_j\|^2 ). Suited to settings with only a small number of malicious clients.
Mkrum: extends Krum by selecting the ( m ) updates with the smallest distance scores and averaging them, which makes it more robust when multiple malicious updates are present.
Bulyan: after running Krum or Mkrum, performs several rounds of trimming, averages the remaining updates, and finally takes the median or mean of these averages.
Median: computes the coordinate-wise median over all updates, i.e. ( x_{\text{median}}[k] = \text{median}(x_1[k], x_2[k], \ldots, x_n[k]) ).
Trimmed Mean (TrMean): for each coordinate, removes a fixed percentage of the highest and lowest values and averages the rest.
CC: bounds the norm of each model update before aggregation, ensuring that no single update has an outsized influence on the result.
AFA: dynamically adjusts the aggregation strategy according to the actual data distribution and the characteristics of the updates; can adapt to different attack patterns.
DNC: applies norm clipping before model updates to limit the influence of each update.
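Krum, coordinate-wise Median, and TrMean as described above fit in a short sketch. The example updates (three honest vectors plus one outlier) are invented, and this Krum uses the simplified all-pairs score from the formula above rather than the nearest-neighbor variant of the original paper.

```python
import numpy as np

def krum(updates):
    # Score each update by its summed squared distance to all others,
    # then return the update with the smallest score.
    u = np.asarray(updates)
    dists = np.sum((u[:, None, :] - u[None, :, :]) ** 2, axis=-1)
    return u[np.argmin(dists.sum(axis=1))]  # d(i, i) = 0, so this sums over j != i

def coordinate_median(updates):
    # Median of each parameter taken independently across updates.
    return np.median(np.asarray(updates), axis=0)

def trimmed_mean(updates, trim_frac=0.2):
    # Drop the lowest and highest trim_frac of values per coordinate, then average.
    u = np.sort(np.asarray(updates), axis=0)
    k = int(len(u) * trim_frac)
    return u[k:len(u) - k].mean(axis=0)

# Three honest updates near [1, 1] plus one large malicious outlier.
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([10.0, -10.0])]
```

On this toy input all three rules ignore the outlier: Krum picks an honest update, while Median and TrMean blend the honest coordinates.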
Title | Attack Target | Attack Type | Defense | Application | Features | Applicability | Attack Performance |
---|---|---|---|---|---|---|---|
Untargeted Attack against FedRec | Performance Degradation | Non-targeted Attack | UNION Mechanism | FedRec Systems | Cluster Poisoning | CFL | High Efficiency |
DoS or Fine-Grained Control | Precise Performance Control | Model Poisoning | - | Federated Learning | Historical Estimation | DFL/CFL | Strong |
Collusive Model Poisoning in DFL | Learning Effect Degradation | Collusive Model Poisoning | Trust and History-based Defense | Decentralized FL | Node Trust and History Data | DFL | Effective |
Competitive Advantage Attacks to Decentralized Federated Learning | Precise Control of Global Model | Model Poisoning | - | Federated Learning | Fine-grained Control Concept | DFL/CFL | Flexible
SCA: Sybil-Based Attacks in FL | IIoT Data Poisoning | Sybil-based Collusion Attack | Behavioral Pattern Defense | IIoT in FL | Device Update Behavior Analysis | DFL | Moderate |