jackmsye opened this issue 4 years ago
Thanks for reporting. Jittor uses unified memory to manage CPU and GPU memory, so you do not need to manually specify the device. When you call `a.data`, the data is automatically copied to the CPU.
@Gword Thanks for your answer.
I agree that `.data` can partly serve the role of `.cpu()` in PyTorch. But is there a function like `.device` that offers more options? Sometimes we want to assign a GPU id to tensors when training on multiple GPUs.
Sorry, Jittor is currently working on multi-GPU and distributed training, which is expected to be supported next month. If you are interested, please follow our later news. Thank you!
@Gword Thanks for your team's great work; I really appreciate the code. BTW, I noticed that the cuDNN implementation in external/cuda/cudnn/ops/cudnn_conv_op.cc does not use Tensor Cores for fp16, even though you support the half datatype. It would be better to add Tensor Core support if you want to increase efficiency.
Thanks for your suggestion, we are working on it.
Currently I use your example. When `use_cuda = 1`, the data is stored on the GPU, which is correct. But is there any way to store the data on the CPU even though I set the `use_cuda` flag to one? Also, I see no device information in the printed info. Is there a method for specifying the data's device?