Currently, fine-tuning the `fcmae` model for the virtual staining tasks with the `ddp` strategy appears to cause a 'memory leak'. A possible solution is to expose the relevant DataLoader parameters through the `ViscyTrainer`.

Using PyTorch Lightning's `CombinedLoader` with Distributed Data Parallel (DDP) spawns multiple processes (one per GPU) and seems to lead to excessive memory accumulation in a subset of worker processes. Setting `persistent_workers=False` restarts the DataLoader workers at the beginning of each epoch, which prevents the accumulation of memory and disk space. This comes with a performance trade-off, as does reducing the hardcoded `prefetch_factor` from 4 to 2.
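A minimal sketch of what exposing these parameters could look like, assuming a Lightning data module that wraps two sources in a `CombinedLoader`; the class, defaults, and argument names below are illustrative placeholders, not the actual VisCy API:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from lightning.pytorch import LightningDataModule
from lightning.pytorch.utilities.combined_loader import CombinedLoader


class SketchDataModule(LightningDataModule):
    """Illustrative data module that exposes the worker settings instead of hardcoding them."""

    def __init__(
        self,
        batch_size: int = 16,
        num_workers: int = 4,
        persistent_workers: bool = False,  # restart workers each epoch to avoid accumulation
        prefetch_factor: int = 2,  # reduced from the previously hardcoded 4
    ) -> None:
        super().__init__()
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.persistent_workers = persistent_workers
        self.prefetch_factor = prefetch_factor

    def _loader(self, dataset: TensorDataset) -> DataLoader:
        # prefetch_factor/persistent_workers are only valid when num_workers > 0
        return DataLoader(
            dataset,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            persistent_workers=self.persistent_workers if self.num_workers > 0 else False,
            prefetch_factor=self.prefetch_factor if self.num_workers > 0 else None,
            shuffle=True,
        )

    def train_dataloader(self) -> CombinedLoader:
        # Two synthetic sources, standing in for the combined data used during fine-tuning.
        source_a = TensorDataset(torch.randn(64, 1, 32, 32), torch.randn(64, 1, 32, 32))
        source_b = TensorDataset(torch.randn(64, 1, 32, 32), torch.randn(64, 1, 32, 32))
        return CombinedLoader(
            {"a": self._loader(source_a), "b": self._loader(source_b)},
            mode="max_size_cycle",
        )
```

With these arguments surfaced on the data module, they can be set from the trainer/CLI config rather than being fixed in code, so the `persistent_workers` and `prefetch_factor` trade-offs can be tuned per run.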