Pari-singh opened 2 months ago
Could you provide a more detailed setup? Which dataset did you use, and how did you create the client data?
I used the brandenburg_gate dataset with the split described on the NeRF-W website. For the clients, I used the code in this repo with 10 clients. I trained the client models first and then updated the global model. Should this be repeated, with updates flowing from the global model back to the local models?
What does "1 client" mean? Is it the averaged performance of the clients, or a single model trained on all the training data?
Yeah, 1 client means a model trained on all the data, so it is basically equivalent to vanilla 3DGS in that case.
Fed3DGS is generally inferior to centralized approaches (i.e., your 1-client setting) in terms of rendering quality, so it is not a problem that the quality with 10 clients is lower than with 1 client.
I don't know the details of the brandenburg_gate dataset, but 10 clients may be too few to cover the whole scene. (In our experiments, the performance of Fed3DGS saturated at around 100 clients on the Mill 19 dataset.) So increasing the number of clients may improve the quality.
Hi @perrying, thanks for your amazing work! I ran the code for 1 client and for 10 clients separately on public datasets, and the quantitative and qualitative results differ by a huge margin!
Would you know why? The way I performed training was to train all the local models first and then update the global model (as described in the repo). Do you think that is what's causing the degradation? Should the global model instead be updated after each local model update, and should the local models learn over time from the global model updates?
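To make the two schedules being discussed concrete, here is a minimal sketch of the difference between "train everything locally once, then merge" and a round-based schedule where clients restart from the latest global model each round. All names (`train_local`, `merge`, the toy model) are hypothetical placeholders, not functions from the Fed3DGS repo; the real update and merge logic is of course far more involved.

```python
def one_shot(clients, global_model, train_local, merge):
    """Schedule used above: every client trains once from the initial
    global model, and all updates are merged afterwards."""
    updates = [train_local(c, global_model) for c in clients]
    for u in updates:
        global_model = merge(global_model, u)
    return global_model


def round_based(clients, global_model, train_local, merge, rounds):
    """Alternating schedule: the global model is updated after each
    local model update, so each client starts from the latest global
    state instead of the initial one."""
    for _ in range(rounds):
        for c in clients:
            update = train_local(c, global_model)  # sees current global model
            global_model = merge(global_model, update)
    return global_model
```

With a toy scalar "model" (e.g. `train_local = lambda c, m: m + c` and `merge = lambda g, u: g + u`) you can see that the two schedules produce different global states, which is one plausible source of the gap between the 1-client and 10-client runs.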