I tried running the inference API script and got an error saying the CUDA device type is not supported.
import torch
from model import Model

model = Model(device="cuda", dtype=torch.float16)
Error message:

File "C:\...\model.py", line 30, in __init__
    self.generator = torch.Generator(device=device)
RuntimeError: Device CUDA is not supported for torch.Generator() api.
System parameters:
- Windows 10 Enterprise
- Display adapters: Intel(R) Iris(R) Xe Graphics and NVIDIA RTX A1000 GPU
- 32 GB RAM
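For reference, a quick diagnostic sketch (assuming a standard `torch` install) to check whether the installed PyTorch build actually supports CUDA before constructing the generator; this error commonly appears when a CPU-only build of PyTorch is installed:

```python
import torch

# A "+cpu" suffix in the version string indicates a CPU-only build,
# which cannot create CUDA generators regardless of the hardware.
print(torch.__version__)
print(torch.cuda.is_available())

# Fall back to CPU when CUDA is not usable in this build.
device = "cuda" if torch.cuda.is_available() else "cpu"
gen = torch.Generator(device=device)
print(gen.device)
```

If `torch.cuda.is_available()` prints `False` on a machine with an NVIDIA GPU, reinstalling PyTorch with the matching CUDA wheel usually resolves it.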