Greetings,

We are using the function KLDivLoss from PyTorch, and we want to run it on Intel's GPU Max 1550. Unfortunately, it keeps falling back to running on the CPU. The warning message that I keep seeing is:

/home/jaytong/intel-xpu-backend-for-triton/.venv/lib/python3.10/site-packages/torch/nn/functional.py:3391: UserWarning: Aten Op fallback from XPU to CPU happends. This may have performance implications. If need debug the fallback ops please set environment variable `PYTORCH_DEBUG_XPU_FALLBACK=1`
Here is the reproducer code:
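(A minimal sketch of the pattern, assuming the loss is computed on log-softmax outputs placed on the xpu device; the tensor shapes and reduction mode below are illustrative, not the exact ones we use.)

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the reproducer pattern: KLDivLoss applied to tensors on the xpu device.
device = torch.device("xpu")

# Illustrative shapes; the real inputs may differ.
log_probs = F.log_softmax(torch.randn(64, 128, device=device), dim=-1)
targets = F.softmax(torch.randn(64, 128, device=device), dim=-1)

loss_fn = torch.nn.KLDivLoss(reduction="batchmean")
loss = loss_fn(log_probs, targets)  # this call is where the XPU -> CPU fallback warning appears
print(loss.item())
```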
My environment is the following:
I also tried different tensor sizes, with no success. Is this function not supported on the GPU?