I believe torch.utils.data.IterableDataset inherits from torch.utils.data.Dataset, so there shouldn't be a performance discrepancy.
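For reference, the inheritance claim checks out, and the practical difference is mainly in how the DataLoader consumes each style. Below is a minimal sketch of the two styles; MapRays and StreamRays are hypothetical names for illustration, not classes from this repo:

```python
import torch
from torch.utils.data import Dataset, IterableDataset

# IterableDataset is indeed a subclass of Dataset in PyTorch:
assert issubclass(IterableDataset, Dataset)

# Map-style: the DataLoader indexes into it, so it supports
# shuffling and custom samplers out of the box.
class MapRays(Dataset):
    def __init__(self, n):
        self.rays = torch.randn(n, 6)  # placeholder ray data

    def __len__(self):
        return len(self.rays)

    def __getitem__(self, idx):
        return self.rays[idx]

# Iterable-style: the DataLoader just pulls from the iterator,
# so the dataset itself controls the sampling loop (useful for
# streaming data or custom sampling schemes).
class StreamRays(IterableDataset):
    def __init__(self, n):
        self.rays = torch.randn(n, 6)

    def __iter__(self):
        while True:  # infinite stream; the training loop decides when to stop
            idx = torch.randint(len(self.rays), (1,)).item()
            yield self.rays[idx]
```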
The cache is for reusing the same image across multiple iterations. I was hitting I/O bottlenecks, so I cache the image being loaded and repeatedly sample rays from that same image for several iterations. This is not ideal, but it gave me some speedup during training.
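To make the cache-and-repeat idea concrete, here's a rough sketch of the pattern as described above. The class and helper names (CachedRayDataset, _load_image) are illustrative; only the cache_n_repeat parameter corresponds to the option being asked about:

```python
import torch
from torch.utils.data import IterableDataset

class CachedRayDataset(IterableDataset):
    """Loads one image, then samples rays from it for several
    iterations before loading the next: trades sample diversity
    for fewer disk reads."""

    def __init__(self, image_paths, rays_per_batch=1024, cache_n_repeat=16):
        self.image_paths = image_paths
        self.rays_per_batch = rays_per_batch
        self.cache_n_repeat = cache_n_repeat

    def _load_image(self, path):
        # Stand-in for the real decode + preprocessing step.
        return torch.rand(800, 800, 3)

    def __iter__(self):
        while True:
            i = torch.randint(len(self.image_paths), (1,)).item()
            image = self._load_image(self.image_paths[i])  # one expensive I/O hit...
            h, w, _ = image.shape
            for _ in range(self.cache_n_repeat):  # ...amortized over N batches
                ys = torch.randint(h, (self.rays_per_batch,))
                xs = torch.randint(w, (self.rays_per_batch,))
                yield image[ys, xs]  # pixel colors at the sampled ray locations
```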
Closing for now as it seems the question is resolved.
Hi again!!
Is there a reason you used torch.utils.data.IterableDataset rather than torch.utils.data.Dataset? When I ported the code to the latter, training was much faster, but the results were not good, while with the former (the default), training is slower. Also, what's the use case of cache_n_repeat here?