AI-secure / FLBenchmark-toolkit

Federated Learning Framework Benchmark (UniFed)
https://unifedbenchmark.github.io/
Apache License 2.0

UniFed: All-In-One Federated Learning Platform to Unify Open-Source Frameworks

🌟 For the benchmark results 📊, please check our website.


Evaluation Scenarios

Cross-device horizontal

| Scenario name | Modality | Task type | Performance metrics | Client number | Sample number |
| --- | --- | --- | --- | --- | --- |
| celeba | Image | Binary Classification (Smiling vs. Not smiling) | Accuracy | 894 | 20,028 |
| femnist | Image | Multiclass Classification (62 classes) | Accuracy | 178 | 40,203 |
| reddit | Text | Next-word Prediction | Accuracy | 813 | 27,738 |
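In a cross-device horizontal scenario, each client holds its own disjoint subset of samples, keyed by a per-sample client/user ID (as in celeba or femnist). A minimal illustrative sketch of that grouping (not toolkit code; the `partition_by_client` helper and the sample data are made up for illustration):

```python
from collections import defaultdict

def partition_by_client(samples):
    """Group (client_id, example) pairs into per-client shards."""
    shards = defaultdict(list)
    for client_id, example in samples:
        shards[client_id].append(example)
    return dict(shards)

# Toy data: two clients, three samples in total.
samples = [("u1", "img_a"), ("u2", "img_b"), ("u1", "img_c")]
shards = partition_by_client(samples)
print(len(shards))   # 2 clients
print(shards["u1"])  # ['img_a', 'img_c']
```

In the real scenarios the client counts are much larger (e.g., 894 clients for celeba), but the shape of the partition is the same: disjoint sample sets, one per client.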

Cross-silo horizontal

| Scenario name | Modality | Task type | Performance metrics | Client number | Sample number |
| --- | --- | --- | --- | --- | --- |
| breast_horizontal | Medical | Binary Classification | AUC | 2 | 569 |
| default_credit_horizontal | Tabular | Binary Classification | AUC | 2 | 22,000 |
| give_credit_horizontal | Tabular | Binary Classification | AUC | 2 | 150,000 |
| student_horizontal | Tabular | Regression (Grade Estimation) | MSE | 2 | 395 |
| vehicle_scale_horizontal | Image | Multiclass Classification (4 classes) | Accuracy | 2 | 846 |
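A cross-silo *horizontal* split gives every silo the same feature columns but disjoint rows, e.g., two hospitals in breast_horizontal each holding their own patients. An illustrative sketch (not toolkit code; the toy rows and the round-robin assignment rule are assumptions, since the real per-scenario splits are fixed by the benchmark data):

```python
# Toy dataset: every row has the same schema (features + label).
rows = [
    {"age": 41, "label": 0},
    {"age": 57, "label": 1},
    {"age": 63, "label": 1},
    {"age": 38, "label": 0},
]

# Assign alternating rows to silo A and silo B; both silos keep
# the full column set, but no row appears in both.
silo_a = rows[0::2]
silo_b = rows[1::2]
print(len(silo_a), len(silo_b))  # 2 2
```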

Cross-silo vertical

| Scenario name | Modality | Task type | Performance metrics | Vertical split details |
| --- | --- | --- | --- | --- |
| breast_vertical | Medical | Binary Classification | AUC | A: 10 features, 1 label; B: 20 features |
| default_credit_vertical | Tabular | Binary Classification | AUC | A: 13 features, 1 label; B: 10 features |
| dvisits_vertical | Tabular | Regression (Number of consultations Estimation) | MSE | A: 3 features, 1 label; B: 9 features |
| give_credit_vertical | Tabular | Binary Classification | AUC | A: 5 features, 1 label; B: 5 features |
| motor_vertical | Sensor data | Regression (Temperature Estimation) | MSE | A: 4 features, 1 label; B: 7 features |
| student_vertical | Tabular | Regression (Grade Estimation) | MSE | A: 6 features, 1 label; B: 7 features |
| vehicle_scale_vertical | Image | Multiclass Classification (4 classes) | Accuracy | A: 9 features, 1 label; B: 9 features |
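A cross-silo *vertical* split is the transpose of the horizontal case: both parties hold the same rows (aligned on a shared record ID) but disjoint feature columns, and only one party holds the label, matching the "A: n features, 1 label / B: m features" layout above. An illustrative sketch (not toolkit code; the toy records and column names are made up):

```python
# Toy aligned records: both parties can join on the record ID.
records = {
    "id1": {"f1": 0.2, "f2": 1.3, "f3": 0.7, "label": 1},
    "id2": {"f1": 0.9, "f2": 0.4, "f3": 0.1, "label": 0},
}

party_a_cols = ["f1", "label"]  # party A: 1 feature + the label
party_b_cols = ["f2", "f3"]     # party B: 2 features, no label

# Project each record onto its party's column set.
party_a = {rid: {c: rec[c] for c in party_a_cols} for rid, rec in records.items()}
party_b = {rid: {c: rec[c] for c in party_b_cols} for rid, rec in records.items()}

print(sorted(party_a["id1"]))  # ['f1', 'label']
print(sorted(party_b["id1"]))  # ['f2', 'f3']
```

Note that party B sees no labels at all; in vertical federated learning it must cooperate with party A to train a model over the combined feature set.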

Installation

Requirements

Install From PyPI

pip install flbenchmark colink

Launch a benchmark (auto-deployment on AWS with a controller)

We highly recommend using cloud servers (e.g., AWS) to run the benchmark. Here we provide an auto-deployment script for AWS.

1. Set up the controller server
2. Launch servers
3. Set up the framework operator
4. Start an evaluation

Launch a benchmark (manual deployment)

Alternatively, you can manually deploy on your own cluster. You need to prepare enough Ubuntu 20.04 LTS servers for the needs of the benchmark, and you should set up the environment on these servers. Here we provide a script to set up the environment.

1. Set up servers
2. Set up the framework operator
3. Start an evaluation
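Since the manual-deployment path assumes Ubuntu 20.04 LTS on every server, a quick pre-flight check of each machine's `/etc/os-release` can catch mismatched hosts early. A minimal illustrative helper (not part of the toolkit; the function name and sample text are made up):

```python
def is_ubuntu_2004(os_release_text: str) -> bool:
    """Parse /etc/os-release style KEY=VALUE text and check
    for Ubuntu with VERSION_ID 20.04."""
    info = {}
    for line in os_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info.get("ID") == "ubuntu" and info.get("VERSION_ID") == "20.04"

# Sample contents as they would appear on a matching server.
sample = 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="20.04"\n'
print(is_ubuntu_2004(sample))  # True
```

On a real server you would read the text with `open("/etc/os-release").read()` before passing it to the check.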

License

This project is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.