Open ayushkarnawat opened 4 years ago
Currently, we pin memory regardless of whether we run on CPU or GPU. Instead, we should pin memory only when computations are performed on CUDA devices.
```python
use_cuda = args.cuda and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
dataloader_kwargs = {'pin_memory': True} if use_cuda else {}
```
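For context, a minimal self-contained sketch of how the conditional kwargs would feed into a `DataLoader` (the `args.cuda` flag from the issue is replaced here by a plain availability check, since there is no argparse setup in this snippet):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for `args.cuda and torch.cuda.is_available()` from the issue.
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

# Pin page-locked host memory only when we will actually copy batches to a
# CUDA device; pinning on CPU-only runs just wastes non-pageable memory.
dataloader_kwargs = {'pin_memory': True} if use_cuda else {}

dataset = TensorDataset(torch.arange(8, dtype=torch.float32).unsqueeze(1))
loader = DataLoader(dataset, batch_size=4, **dataloader_kwargs)

for (batch,) in loader:
    # non_blocking=True only helps when the source tensor is pinned.
    batch = batch.to(device, non_blocking=use_cuda)
```

On a CPU-only machine `dataloader_kwargs` stays empty, so `DataLoader` falls back to its default `pin_memory=False` and no page-locked allocations are made.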