Closed huyphan168 closed 2 years ago
Thanks for sharing this issue.
During the training stage, besides sampling training images and targets (just as in conventional supervised training), we need to do extra work for meta-learning: (1) sampling support images and their labels; and (2) generating meta-learning labels. This makes the DataLoader much busier than in conventional setups.
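To make the extra per-item cost concrete, here is a minimal, hypothetical sketch (plain Python, mimicking a PyTorch-style `Dataset`; the class and helper names are illustrative, not from this repo) of what `__getitem__` has to do in the meta-learning setup: one query load plus `n_support` additional loads and meta-label generation per item.

```python
import random

class MetaDetectionDataset:
    """Hypothetical sketch of per-item work in a meta-learning dataset.

    Besides the query image and its targets, __getitem__ must also
    (1) sample support images with labels and (2) build meta-learning
    labels, so each item costs roughly (1 + n_support) loads.
    """

    def __init__(self, images, labels, n_support=5):
        self.images = images        # list of image identifiers
        self.labels = labels        # parallel list of targets
        self.n_support = n_support

    def _load(self, idx):
        # Placeholder for the expensive image decode + augmentation step.
        return self.images[idx], self.labels[idx]

    def __getitem__(self, idx):
        query_img, query_target = self._load(idx)
        # (1) sample support examples from the rest of the dataset
        support_ids = random.sample(
            [i for i in range(len(self.images)) if i != idx], self.n_support)
        support = [self._load(i) for i in support_ids]
        # (2) derive a meta-learning label, e.g. the set of classes
        # the sampled support set covers
        meta_label = sorted({lbl for _, lbl in support})
        return query_img, query_target, support, meta_label

    def __len__(self):
        return len(self.images)
```

With `n_support=5`, each batch element triggers six loads instead of one, which is why the workers are so much busier than in a conventional supervised pipeline.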
So to answer your question: yes, I think the slow DataLoader is expected, especially on a less powerful CPU. But in my own experiments, I found that after the initial iterations the DataLoader is no longer the bottleneck for training efficiency.
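One way to check this on your machine is to time how long each batch takes to come out of the loader, separating the first few (warm-up) fetches from steady-state ones. A small hypothetical helper (the function name and split point are my own choices, not part of this repo) could look like:

```python
import time

def profile_fetch(loader, n_batches=20, warmup_batches=3):
    """Time per-batch fetch latency from any iterable loader.

    The first few fetches include worker startup and cache warming,
    so they are reported separately from steady-state fetches.
    Returns (warmup_times, steady_times) in seconds.
    """
    it = iter(loader)
    times = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        try:
            next(it)                      # blocks until a batch is ready
        except StopIteration:
            break
        times.append(time.perf_counter() - t0)
    return times[:warmup_batches], times[warmup_batches:]
```

If the steady-state times are small compared with your forward/backward step time, the loader is only slow during warm-up, which matches what I observed.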
I have run some diagnostics on your code and found that even with more workers, the DataLoader is still slow (the backward pass has not even started). Is this slow sample loading expected at the start of training?