Closed xihajun closed 2 years ago
Hi Jun,
Thank you for your kind words :-)
You got it absolutely right - once you send data to the GPU, it needs to be copied there. For CPU tensors, the data lives in the computer's main RAM, where both Numpy and PyTorch can access it, so the underlying data is shared between them.
But the moment you send data to the GPU, it is copied to the GPU's RAM, and it is no longer shared with Numpy.
Numpy does not support GPUs, which is why we have to use .cpu()
to bring the tensor back to main RAM before turning it into a Numpy array.
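A minimal sketch of this behavior (variable names here are illustrative; `copy=True` is used so the example also demonstrates the "copied, no longer shared" case on machines without a GPU):

```python
import numpy as np
import torch

# A CPU tensor created with torch.as_tensor shares memory with the Numpy array.
x = np.zeros(3, dtype=np.float32)
t = torch.as_tensor(x)
x[0] = 42.0
print(t[0].item())  # 42.0 -- the tensor sees the change to the Numpy array

# .to(device) copies the data; the copy no longer shares memory with x.
device = "cuda" if torch.cuda.is_available() else "cpu"
t_dev = t.to(device, copy=True)  # copy=True forces a copy even when device is "cpu"
x[1] = 7.0
print(t[1].item())      # 7.0 -- the shared CPU tensor still tracks x
print(t_dev.cpu()[1].item())  # 0.0 -- the copied tensor is unaffected
```

Note that calling `.to("cpu")` on a tensor that is already on the CPU returns the same tensor without copying, which is why the sketch passes `copy=True` to mimic a real GPU transfer.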
Hope it helps :-) Best, Daniel
Many thanks Daniel! That's really helpful 👍
Best, Jun
Hi Dan,
I love your book and tutorials! May I kindly ask does
to()
method copy the data into the device (GPU or CPU) memory directly? The reason I am asking is that you mentioned before that
torch.as_tensor(x_train)
will share the underlying data with the original Numpy array
, but when we used torch.as_tensor(x_train).to(device)
I found that the x_train data won't change. Do I understand it correctly?
Best, Jun