PT Inductor tests job run - https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/11140161686/job/30958325952

Error message:

```
Traceback (most recent call last):
  File "/runner/_work/intel-xpu-backend-for-triton/intel-xpu-backend-for-triton/pytorch/test/inductor/test_triton_kernels.py", line 2913, in test_autotune_unbacked
    x = torch.randn(M, K, device="cuda")
  File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/torch/cuda/__init__.py", line 310, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
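The failure comes from the test hard-coding `device="cuda"` on a PyTorch build without CUDA support. A minimal sketch of a device-agnostic allocation that would avoid this on XPU-only builds (the `pick_device` helper is hypothetical, not part of the test suite):

```python
import torch


def pick_device() -> str:
    # Prefer CUDA if this build has it, then Intel XPU, else fall back to CPU.
    # Hard-coding "cuda" raises AssertionError on non-CUDA builds (as in the log).
    if torch.cuda.is_available():
        return "cuda"
    # torch.xpu is only present on builds with Intel XPU support, so guard it.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"


# Allocate on whatever accelerator (or CPU) is actually available.
x = torch.randn(4, 4, device=pick_device())
print(x.device.type)
```

This mirrors the common upstream pattern of parametrizing tests over an available-device fixture rather than assuming CUDA.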