SKRohit closed this issue 3 years ago
Indeed it is a bug - we can push a fix in the next release. Thanks for bringing it to our attention.
In the meantime, maybe try changing some of the files in the PyTorch trainer to see if you can change the behavior?
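For instance, as a stopgap one could hide the GPU from PyTorch entirely so that everything stays on the CPU. This is a generic PyTorch-level workaround, not a zenml option, and only a sketch of one possible way to sidestep the mismatch:

```python
import os

# Hide all CUDA devices from PyTorch. This must be set before CUDA is
# initialized, so ideally at the very top of the script, before importing torch.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

print(torch.cuda.is_available())  # False, so code that checks for a GPU falls back to the CPU
```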
@htahir1 the bug is occurring because of self.test_fn() in FeedForwardTrainer (file torch_ff_trainer.py). It is called at line 213 from self.run_fn(), where the model is explicitly moved to the GPU (if one is present) before training, and that GPU model is then passed to self.test_fn. During testing, however, the inputs are never moved to the GPU (see line 131), so either the inputs should be moved to the GPU there or the model should be moved back to the CPU.
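To illustrate the mismatch, here is a minimal, self-contained sketch of a test loop with the same problem and the one-line fix. The model, test_fn, and loader below are stand-ins for illustration, not the actual zenml trainer code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Pick the GPU when one is available, mirroring what the trainer does before training.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(8, 2).to(device)  # the trained model ends up on the GPU

# Toy test data that, like the real test inputs, starts out on the CPU.
test_data = TensorDataset(torch.randn(16, 8), torch.randint(0, 2, (16,)))
test_loader = DataLoader(test_data, batch_size=4)

def test_fn(model, loader):
    model.eval()
    with torch.no_grad():
        for inputs, labels in loader:
            # Buggy behaviour: calling model(inputs) while `inputs` is still on the
            # CPU and the model is on the GPU raises
            # "RuntimeError: Expected all tensors to be on the same device".
            inputs = inputs.to(device)  # fix: move the batch to the model's device
            outputs = model(inputs)     # (alternatively, move the model back to the CPU first)

test_fn(model, test_loader)
```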
I have created a pull request to fix this, which you can find here.
Thanks - left a review!
Thank you @SKRohit for the PR #91! It fixes this issue.
Describe the bug
I am new to zenml and planning to use it in one of our projects. I tried to run the PyTorch example mentioned here. Please let me know what the issue is. I am confused because the model trains without any CPU/GPU tensor mismatch, but after training I get this error, and I cannot find an option (in the APIs) to specify whether or not to use the GPU. Please let me know if you need any other details.
To Reproduce
Steps to reproduce the behavior:
Stack Trace
RuntimeError Traceback (most recent call last)