FLAIR-THU / CreamFL

[ICLR 2023] Multimodal Federated Learning via Contrastive Representation Ensemble
https://arxiv.org/abs/2302.08888

Time-Consuming Framework #4

Closed SilviaGrosso closed 12 months ago

SilviaGrosso commented 12 months ago

I am working with a single NVIDIA A40 GPU. I tried to lighten the CreamFL framework, first by reducing the representation dimension to 32 and the public dataset to 10k samples. I then also replaced cifar100 with the simpler cifar10 dataset and swapped all architectures for simpler models.
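As a concrete illustration of the kind of model simplification described above, here is a minimal sketch of a lightweight image encoder that projects to a 32-dimensional representation. Note that `TinyImageEncoder` and its layer sizes are illustrative assumptions, not part of CreamFL's actual code:

```python
import torch
import torch.nn as nn


class TinyImageEncoder(nn.Module):
    """Hypothetical lightweight encoder for 32x32 images (e.g. cifar10),
    projecting to a small representation dimension (here 32)."""

    def __init__(self, repr_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global pooling
        )
        self.proj = nn.Linear(32, repr_dim)              # 32-d representation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.proj(h)


encoder = TinyImageEncoder(repr_dim=32)
out = encoder(torch.randn(4, 3, 32, 32))
print(tuple(out.shape))  # (4, 32)
```

A model of this size runs a forward pass orders of magnitude faster than the full encoders, so it helps isolate whether the per-round cost comes from the models themselves or from the contrastive aggregation over the public dataset.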

Despite all these changes, each round still takes about 48 minutes and, of course, the metric values are very low. Moreover, the metrics need many rounds to converge, so training remains prohibitively time-consuming and makes further experiments impossible.

What else do you recommend to lighten each round so that the metrics reach a stable state in a reasonable time? Thanks in advance.