Closed — dillfrescott closed this issue 10 months ago
@dillfrescott from the log output, it looks like you are running the inference code on CPU. Unfortunately, PyTorch lacks fp16 support for many CPU ops. I suggest running inference on a CUDA device if you want fp16, or using fp32 if you need CPU inference.
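The suggestion above can be sketched as a small device/dtype fallback; `torch.nn.Linear` here is just a stand-in for whatever model the actual inference script loads:

```python
import torch

# Pick the device first, then choose a dtype that device supports.
# Many fp16 ("half") ops are not implemented on CPU in PyTorch,
# so fall back to fp32 when CUDA is unavailable.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = torch.nn.Linear(8, 8).to(device=device, dtype=dtype)
x = torch.randn(1, 8, device=device, dtype=dtype)

with torch.no_grad():
    y = model(x)

print(y.dtype)  # torch.float16 on a CUDA device, torch.float32 on CPU
```

Doing the `torch.cuda.is_available()` check once up front keeps the rest of the script identical for both backends, instead of scattering dtype decisions through the code.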
Oh okay. Thank you!
You are welcome!