Label flipping attacks in Federated Learning
Data and Resources
- FederatedLearning-sklearn.ipynb: Jupyter notebook showing the setup of a federated learning loop using...
Item URL
https://data.d4science.org/ctlg/ResourceCatalogue/label_flipping_attacks_in_federated_learning
Additional Info
Field | Value |
---|---|
Detailed description | In this experiment, we showcase a federated training loop using scikit-learn to classify MNIST. We then demonstrate a poisoning attack, namely the label flipping attack, in which attackers change the labels of a target class to a different class so that the global model misclassifies one or more classes. Finally, we show some defense mechanisms based on the analysis of user-contributed updates, including a distance-based detection metric, Krum, and median aggregation. An illustrative sketch of the attack and the median-aggregation defense is given after this table. |
Ethical issues | None identified; the experiments use only the public MNIST dataset. |
Group | Ethics and Legality |
Involved Institutions | Universitat Rovira i Virgili |
Involved People | Blanco-Justicia, Alberto, alberto.blanco@urv.cat, orcid.org/0000-0002-1108-8082 |
Involved People | Domingo-Ferrer, Josep, josep.domingo@urv.cat, orcid.org/0000-0001-7213-4962 |
State | Complete |
Thematic Cluster | Privacy Enhancing Technology [PET] |
Thematic Cluster | Visual Analytics [VA] |
system:type | Experiment |
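The notebook itself is only available after login, so the following is a minimal, self-contained sketch of the ideas described above rather than the authors' code. It assumes an IID split of scikit-learn's `load_digits` data (a lightweight stand-in for MNIST), an `SGDClassifier` with logistic loss as the shared model (the `log_loss` name assumes scikit-learn ≥ 1.1), a few clients that flip the labels of a source class to a target class, and a server that combines client weights either by averaging (plain FedAvg) or by a coordinate-wise median as a simple robust-aggregation defense. All names, client counts, and class choices are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

RNG = np.random.default_rng(0)
N_CLIENTS = 10
N_MALICIOUS = 3          # clients that flip labels
SOURCE, TARGET = 7, 1    # attackers relabel every SOURCE digit as TARGET
ROUNDS = 20

# Lightweight stand-in for MNIST so the sketch runs without downloads.
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classes = np.unique(y_train)

# IID split of the training data across clients.
shards = np.array_split(RNG.permutation(len(X_train)), N_CLIENTS)

def local_update(global_coef, global_intercept, Xc, yc, malicious):
    """One client round: optionally flip labels, then run a few local SGD epochs."""
    if malicious:
        yc = np.where(yc == SOURCE, TARGET, yc)   # label flipping attack
    clf = SGDClassifier(loss="log_loss")
    clf.partial_fit(Xc, yc, classes=classes)      # allocate coef_ / intercept_
    if global_coef is not None:                   # warm-start from the global model
        clf.coef_, clf.intercept_ = global_coef.copy(), global_intercept.copy()
    for _ in range(5):
        clf.partial_fit(Xc, yc)
    return clf.coef_, clf.intercept_

def aggregate(updates, rule="mean"):
    """Server side: plain FedAvg mean, or a coordinate-wise median as a robust defense."""
    coefs = np.stack([c for c, _ in updates])
    intercepts = np.stack([b for _, b in updates])
    combine = np.mean if rule == "mean" else np.median
    return combine(coefs, axis=0), combine(intercepts, axis=0)

global_coef = global_intercept = None
for _ in range(ROUNDS):
    updates = [local_update(global_coef, global_intercept,
                            X_train[shard], y_train[shard],
                            malicious=cid < N_MALICIOUS)
               for cid, shard in enumerate(shards)]
    global_coef, global_intercept = aggregate(updates, rule="median")

# Evaluate the aggregated global model.
server = SGDClassifier(loss="log_loss")
server.partial_fit(X_train[:1], y_train[:1], classes=classes)  # allocate shapes
server.coef_, server.intercept_ = global_coef, global_intercept
print("overall test accuracy:", server.score(X_test, y_test))
mask = y_test == SOURCE
print("accuracy on the attacked class:", server.score(X_test[mask], y_test[mask]))
```

Switching `rule` from "median" back to "mean" makes the drop in accuracy on the attacked class visible. The other defenses mentioned in the description (the distance-based detection metric and Krum) would replace this aggregation step with a filtering or selection rule applied to the client updates before combining them.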
Management Info
Field | Value |
---|---|
Author | Blanco Justicia Alberto |
Maintainer | Blanco Justicia Alberto |
Version | 1 |
Last Updated | 7 July 2023, 11:41 (CEST) |
Created | 14 June 2023, 20:28 (CEST) |