intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

[Pytorch pin update] `e9a55b4` - inductor/test_triton_kernels.py::CustomOpTests::test_autotune_unbacked test failure #2404

Closed: vlad-penkin closed this issue 2 weeks ago

vlad-penkin commented 1 month ago

PT Inductor tests job run - https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/11140161686/job/30958325952

Error message:

Traceback (most recent call last):
  File "/runner/_work/intel-xpu-backend-for-triton/intel-xpu-backend-for-triton/pytorch/test/inductor/test_triton_kernels.py", line 2913, in test_autotune_unbacked
    x = torch.randn(M, K, device="cuda")
  File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/torch/cuda/__init__.py", line 310, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
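
The assertion comes from the hardcoded `device="cuda"` in the test: the XPU build of PyTorch is not compiled with CUDA, so the lazy CUDA init raises before the kernel is ever exercised. Below is a minimal sketch of the device-agnostic pattern commonly used elsewhere in the inductor test suite; the `GPU_TYPE`/`HAS_GPU` names are assumptions based on `torch.testing._internal.inductor_utils` and are not necessarily what the eventual upstream fix looks like.

```python
# Sketch only: device-agnostic tensor creation for inductor-style tests.
# GPU_TYPE / HAS_GPU are assumed helpers from torch.testing._internal.inductor_utils.
import torch
from torch.testing._internal.inductor_utils import GPU_TYPE, HAS_GPU

M, K = 64, 32

if HAS_GPU:
    # GPU_TYPE resolves to "cuda" or "xpu" depending on the build, so the
    # same test body runs on both backends instead of asserting on XPU.
    x = torch.randn(M, K, device=GPU_TYPE)
else:
    # Fall back to CPU when no supported GPU backend is available.
    x = torch.randn(M, K)
```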
anmyachev commented 1 month ago

https://github.com/pytorch/pytorch/pull/137189