Open haggleSS opened 1 week ago
Updating the global model requires more than twice as much GPU memory as training the local model.
The model being trained is around 1 GB in size, but an out-of-memory (OOM) error occurs during the model aggregation phase. Are there any ways to reduce GPU memory usage?
You can reduce GPU memory usage by lowering the resolution of the rendered images via the --resolution option (default is 4, which renders images at a quarter of the default resolution defined by the camera intrinsics).
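As a more general workaround for OOM during the aggregation step (independent of rendering resolution), the client models can be averaged on the CPU one client at a time, so that only one client's parameters plus the running sum are resident at once, rather than all client copies plus the global model on the GPU. This is a hedged sketch, not this repo's actual aggregation code: the `federated_average` function and the structure of the client state dicts are hypothetical, and NumPy arrays stand in for the real tensors.

```python
import numpy as np

def federated_average(client_state_dicts):
    """Average parameters across clients incrementally.

    Only one client's weights are held alongside the running sum,
    instead of keeping every client's copy in memory at once.
    (Hypothetical helper; adapt to the repo's actual model format.)
    """
    n = len(client_state_dicts)
    avg = None
    for state in client_state_dicts:
        if avg is None:
            # Copy the first client's parameters as the running sum.
            avg = {k: v.astype(np.float64, copy=True) for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v  # accumulate in place, no extra allocation
    return {k: v / n for k, v in avg.items()}

# Toy example: 3 clients, each with a tiny one-parameter "model".
clients = [{"w": np.full(4, float(i))} for i in range(3)]
global_model = federated_average(clients)
print(global_model["w"])  # each entry is (0 + 1 + 2) / 3 = 1.0
```

With real PyTorch models, the same pattern applies: move each client's state dict to the CPU (`tensor.cpu()`) before accumulating, and only transfer the final averaged weights back to the GPU.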
Excuse me, I'm encountering issues when building a global model. I am using a 4090 GPU with 24 GB of VRAM, and each of my 7 clients has 80 images.