DensoITLab / Fed3DGS

Official implementation of "Fed3DGS: Scalable 3D Gaussian Splatting with Federated Learning"

Results degrade with clients added #5

Open Pari-singh opened 2 months ago

Pari-singh commented 2 months ago

Hi @perrying, thanks for your amazing work! I ran the code for 1 client and 10 clients separately on public datasets. The quantitative and qualitative results differ by a huge margin! (screenshot attached)

Would you know why? The way I performed training was to train all local models first and then update the global model (as mentioned in the repo). Do you think that's what's causing the degradation? Should the global model instead be updated after each local model update? Should local models learn over time from the global model updates?
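To make the workflow being asked about concrete, here is a minimal, hypothetical sketch of the one-shot scheme described above: every client trains its local model independently, and only afterwards does the server fold each local model into the global model, with no feedback from global back to local. The function names (`train_local`, `merge_into_global`, `federated_round`) are illustrative stand-ins, not the actual Fed3DGS API, and scalar averaging stands in for the repo's 3DGS optimization and distillation-based merge.

```python
# Hypothetical sketch of the workflow in question; names and the scalar
# "models" are illustrative, not the actual Fed3DGS implementation.

def train_local(client_data):
    # Stand-in for per-client 3DGS optimization: each "model" is just
    # the mean of that client's scalar observations.
    return sum(client_data) / len(client_data)

def merge_into_global(global_model, local_model, weight):
    # Stand-in for the server-side merge: a weighted running average.
    return (1 - weight) * global_model + weight * local_model

def federated_round(client_datasets):
    # 1) Train every local model first (as described in the issue).
    local_models = [train_local(d) for d in client_datasets]
    # 2) Only then fold each local model into the global model, one by one.
    #    Nothing is sent back from the global model to the clients.
    global_model = 0.0
    for i, model in enumerate(local_models, start=1):
        global_model = merge_into_global(global_model, model, 1.0 / i)
    return global_model

clients = [[1.0, 2.0], [3.0, 5.0], [4.0, 4.0]]
print(federated_round(clients))  # running average of the client models
```

The alternative the question raises would interleave steps 1 and 2, pushing the updated global model back to each client before it trains, rather than training all clients against their raw data alone.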

perrying commented 2 months ago

Could you provide a more detailed setup? Which dataset did you use, and how did you create the client data?

Pari-singh commented 1 month ago

I used the brandenburg_gate dataset with the split mentioned on the NeRF-W website. For the clients, I used the code in this repo to create 10 clients. I trained the client models first and then updated the global model. Should this be repeated, with updates flowing from the global model back to the local ones?

perrying commented 1 month ago

What does "1 client" mean? Is it the averaged performance of the clients, or a model trained on all the training data?

Pari-singh commented 1 month ago

Yeah, 1 client is a model trained on all the data, so basically equivalent to standard 3DGS in that case.

perrying commented 1 month ago

Fed3DGS is generally inferior to centralized approaches (i.e., your 1-client setting) in terms of rendering quality, so it is not a problem that the quality with 10 clients is lower than with 1 client.

I don't know the details of the brandenburg_gate dataset, but 10 clients may be too few to cover the whole scene. (In our experiments, the performance of Fed3DGS saturated at around 100 clients on the Mill 19 dataset.) So increasing the number of clients may improve the quality.