The tutorial's inference section would raise an error if the model had been trained on a CUDA device.
The device for inference is now fixed to CPU.
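A minimal sketch of the fix, assuming a PyTorch workflow (the tiny `nn.Linear` model and the `model.pt` filename are placeholders, not the tutorial's actual network): passing `map_location` to `torch.load` remaps tensors that were saved on a CUDA device onto the CPU, so inference works on machines without a GPU.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for the tutorial's network.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")

# map_location is the key fix: it remaps tensors saved from a CUDA
# device onto the CPU, avoiding a RuntimeError on CPU-only machines.
state = torch.load("model.pt", map_location=torch.device("cpu"))
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    out = model(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```

The same `map_location` argument works regardless of whether the checkpoint was written on GPU or CPU, so it is safe to hard-code for an inference-only example.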
PRs that include notebooks are usually very cluttered, so I cleared all cell outputs to avoid this in the future.
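Clearing outputs can be sketched with only the standard library, since an `.ipynb` file is plain JSON (the inline `nb` dict below is a stand-in for a notebook loaded with `json.load`); in practice `jupyter nbconvert --clear-output --inplace notebook.ipynb` does the same job.

```python
def clear_outputs(nb):
    # Strip outputs and execution counts from every code cell so the
    # committed .ipynb stays small and produces readable diffs.
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

# Stand-in for json.load(open("notebook.ipynb")).
nb = {
    "cells": [
        {"cell_type": "code", "source": ["1 + 1"],
         "outputs": [{"output_type": "execute_result"}],
         "execution_count": 3},
        {"cell_type": "markdown", "source": ["# Title"]},
    ]
}

cleared = clear_outputs(nb)
print(cleared["cells"][0]["outputs"])  # []
```

A pre-commit hook running this (or the `nbstripout` tool) keeps outputs out of future PRs automatically.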