NicolasHug opened this issue 2 years ago
Thank you for opening the issue. It's kind of easy to fix, but we need to consider all the use cases. If users don't specify `sharding_filter` in the pipeline, the length should be `len(dataset) * num_GPUs // batch_size`.

I do want to understand when you need the size of the dataloader? Is this related to the metadata for each Dataset?
> If users don't specify `sharding_filter` in the pipeline, the length should be `len(dataset) * num_GPUs // batch_size`.
I agree. Interestingly, with map-style datasets, `len(dataloader)` is equal to `len(dataset) // batch_size` if users don't pass `sampler=DistributedSampler()`, which is equivalent to not calling `.sharding_filter()`. But I think `len(dataset) * num_GPUs // batch_size`, as you proposed, makes more sense.
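For concreteness, plugging in some illustrative numbers (`len(dataset) = 1000`, `batch_size = 32`, 4 GPUs; these values are mine, not from the issue):

```python
dataset_len, batch_size, num_gpus = 1000, 32, 4  # illustrative values only

# Current behaviour for a map-style dataset with sampler=DistributedSampler():
print(dataset_len // (batch_size * num_gpus))  # 7
# Current behaviour for a datapipe, or for a map-style dataset without DistributedSampler:
print(dataset_len // batch_size)               # 31
# Proposed length when sharding_filter is not specified:
print(dataset_len * num_gpus // batch_size)    # 125
```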
> I do want to understand when you need the size of the dataloader?
We rely on the size for our logger, which is how I found out about the discrepancy:
https://github.com/pytorch/vision/blob/59c4de9123eb1d39bb700f7ae7780fb9c7217910/references/classification/train.py#L25
https://github.com/pytorch/vision/blob/59c4de9123eb1d39bb700f7ae7780fb9c7217910/references/classification/utils.py#L109
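The pattern is roughly the following (a simplified sketch, not the actual helper in the links above): both the progress counter and the ETA come from `len(data_loader)`, so a wrong length shows up directly in the logs.

```python
import time


def log_every(data_loader, print_freq, header=""):
    # Simplified progress logger: the "[i/total]" counter and the ETA
    # estimate both rely on len(data_loader), which is why a wrong
    # length is immediately visible in the training logs.
    total = len(data_loader)
    start = time.time()
    for i, batch in enumerate(data_loader):
        if i % print_freq == 0:
            elapsed = time.time() - start
            eta = elapsed / (i + 1) * (total - i - 1)
            print(f"{header} [{i}/{total}] eta: {eta:.0f}s")
        yield batch
```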
> Is this related to the metadata for each Dataset?
No, not directly. But I'm still looking into convenient ways to specify the length of the torchvision datapipes. I'll definitely come back to you on this when this is clearer for me.
In a distributed setting, `len(dataloader)` will return:

- `len(dataset) // (batch_size * num_GPUs)` if `dataset` is a map-style dataset
- `len(dataset) // batch_size` if `dataset` is a datapipe

This discrepancy makes it a bit difficult to work with torchvision's training recipes, where we often need the size of the dataloader.
Below is an illustration of this discrepancy. You can run the snippet (even without a GPU) with `torchrun --nproc_per_node 4 script.py`.
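The snippet itself isn't reproduced here; a minimal sketch of a script that should show the same discrepancy could look like the following. It uses a dummy list of integers instead of the original torchvision datapipes, torchdata's `IterableWrapper`, and the gloo backend so it runs without GPUs; these choices are assumptions, not the original code.

```python
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler
from torchdata.datapipes.iter import IterableWrapper


def main():
    # gloo backend so this also runs on CPU-only machines
    dist.init_process_group(backend="gloo")

    batch_size = 4
    data = list(range(128))  # dummy stand-in for a real dataset

    # Map-style dataset + DistributedSampler
    map_loader = DataLoader(data, batch_size=batch_size, sampler=DistributedSampler(data))

    # Datapipe + sharding_filter
    datapipe = IterableWrapper(data).sharding_filter()
    dp_loader = DataLoader(datapipe, batch_size=batch_size)

    if dist.get_rank() == 0:
        # With 4 processes, the issue reports:
        #   map-style: len(dataset) // (batch_size * num_GPUs)  -> 128 // 16 = 8
        #   datapipe:  len(dataset) // batch_size               -> 128 // 4  = 32
        print("map-style:", len(map_loader))
        print("datapipe: ", len(dp_loader))


if __name__ == "__main__":
    main()
```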