Currently, PyTorch models just run on the CPU. To make them run on the GPU, I think all you'll need to do is:
1) Set the default datatype to torch.cuda.FloatTensor (via e.g. pf.set_datatype(torch.cuda.FloatTensor))
2) Cast your input data to torch.cuda.FloatTensor (either before calling model.fit() or within your model's __call__ method itself via torch.from_numpy(your_numpy_array).cuda())
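A minimal sketch of the data-casting step in 2), written device-agnostically so it also runs on CPU-only machines (the array name and shape here are just placeholders):

```python
import numpy as np
import torch

# Hypothetical input data; in practice this is whatever you'd pass to model.fit()
your_numpy_array = np.random.randn(4, 3).astype(np.float32)

# Fall back to the CPU when no GPU is available, so the same code runs anywhere;
# on a CUDA machine this is equivalent to calling .cuda() as described above
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Cast the numpy array to a torch tensor on that device
x = torch.from_numpy(your_numpy_array).to(device)

print(x.dtype, x.device.type)
```

This could live either in user code before calling model.fit(), or at the top of the model's __call__ method.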
We should first validate that this actually gets models running on the GPU (e.g. by checking nvidia-smi during a model.fit() call).
Then add a section to the user guide, "Using the CPU or GPU", which describes how to do this (and also includes a subsection for TensorFlow noting that you don't need to worry about it if you're using tensorflow_gpu).