Open · YoelShoshan opened this issue 4 years ago

Wondering: is there any reason that num_workers is hardcoded to 1, and more workers aren't used?

It is due to this: https://github.com/rosinality/style-based-gan-pytorch/pull/48/commits/211cb45d5aeac1da1e2f29702f2f89cc94ab6c7c I remember that using 1 worker is more efficient at low resolutions.
I see, that makes sense: at small resolutions the work a single worker does per item is so small that the inter-process communication overhead of multiple workers outweighs it. A possible solution is to reinitialize the DataLoader with a larger num_workers as the resolution increases (see the sketch below), but I assume it's not high priority.
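Something like this could work; a minimal sketch assuming a generic PyTorch dataset, where `resolution_to_workers` and `make_loader` are hypothetical names (and the thresholds are illustrative guesses), not anything in this repository:

```python
from torch.utils.data import DataLoader, Dataset


def resolution_to_workers(resolution: int) -> int:
    # Heuristic: tiny images mean tiny per-item work, so extra workers
    # mostly add inter-process communication overhead. Scale the worker
    # count up with resolution; these cutoffs are illustrative only.
    if resolution <= 16:
        return 1
    if resolution <= 64:
        return 2
    return 4


def make_loader(dataset: Dataset, resolution: int, batch_size: int) -> DataLoader:
    # Rebuild the DataLoader whenever progressive growing moves to a new
    # resolution, so num_workers tracks the per-item workload.
    return DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=resolution_to_workers(resolution),
    )
```

The training loop would call `make_loader` again each time the resolution doubles, instead of keeping one loader with a fixed `num_workers` for the whole run.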
Yes. And using distributed training will be even better.
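For reference, a minimal sketch of the distributed alternative using the standard `torch.distributed` setup; the dataset here is a placeholder, not the repository's code:

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Launched with e.g. `torchrun --nproc_per_node=4 train.py`, which sets the
# environment variables init_process_group reads.
dist.init_process_group(backend="nccl")

# Placeholder standing in for the real image dataset.
dataset = TensorDataset(torch.randn(1024, 3, 8, 8))

# Each process loads only its own shard, so per-process num_workers can stay
# small while total data-loading throughput scales with the process count.
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=16, sampler=sampler, num_workers=1)

for epoch in range(2):
    sampler.set_epoch(epoch)  # reshuffle shard assignment each epoch
    for (batch,) in loader:
        pass  # training step would go here
```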