Jittor / jittor

Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.
https://cg.cs.tsinghua.edu.cn/jittor/
Apache License 2.0

data device #20

Open jackmsye opened 4 years ago

jackmsye commented 4 years ago
import jittor as jt
jt.flags.use_cuda = 1               # run computation on the GPU
a = jt.float32([1, 2, 3] * 10)      # 30-element input
b = jt.float32([4, 5, 6] * 10)
for i in range(1):
    c = a * b
    print(c)                        # the jittor Var
    print(c.data)                   # the values fetched back to the CPU as a numpy array

Currently I am using your example above. When use_cuda = 1, the data is stored on the GPU, which is correct. But is there any way to keep the data on the CPU even though I set the use_cuda flag to one? Also, I see there is no information about the device in the printed info. Is there a method for specifying the data device?

Gword commented 4 years ago

Thanks for reporting. Jittor uses unified memory to manage CPU and GPU memory, so you do not need to specify the device manually. When you call a.data, the data will automatically be stored on the CPU.
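
A minimal sketch of that behavior, assuming .data returns a host-side numpy array as described above:

import numpy as np
import jittor as jt

jt.flags.use_cuda = 1             # computation still runs on the GPU
c = jt.float32([1, 2, 3]) * jt.float32([4, 5, 6])
host = c.data                     # .data synchronizes and returns the values on the CPU
print(type(host))                 # expected: <class 'numpy.ndarray'>
print(isinstance(host, np.ndarray))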

jackmsye commented 4 years ago

@Gword thanks for your answer.

I agree that .data can partly serve the role of .cpu() in PyTorch. But is there a function like .device that offers more options? Sometimes we want to assign a GPU id to tensors when training on multiple GPUs.
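
For reference, a rough sketch of the kind of PyTorch-style device control being asked about (PyTorch code, shown for illustration only, not a Jittor API):

import torch

t = torch.ones(3)
t_gpu = t.to('cuda:1')    # place the tensor on a specific GPU by id
print(t_gpu.device)       # device(type='cuda', index=1)
t_cpu = t_gpu.cpu()       # move the data back to host memory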

Gword commented 4 years ago

Sorry, Jittor is currently working on multi-GPU and distributed training, which is expected to be supported next month. If you are interested, please keep an eye out for our upcoming news. Thank you!

jackmsye commented 4 years ago

@Gword thanks for your team's great work. I really appreciate your code. BTW, I notice that your cuDNN implementation in external/cuda/cudnn/ops/cudnn_conv_op.cc does not use Tensor Cores for fp16, even though you have the half datatype. I think it would be better to add Tensor Core support if you want to increase efficiency.

Gword commented 4 years ago

Thanks for your suggestion, we are working on it.