intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

Inductor UT breaks on Triton: error: 'spirv.UGreaterThan' op operand #0 must be 8/16/32/64-bit integer or vector of 8/16/32/64-bit integer values of length 2/3/4/8/16, but got 'i1' #422

Closed etaf closed 7 months ago

etaf commented 7 months ago

Inductor UT breaks on Triton: error: 'spirv.UGreaterThan' op operand #0 must be 8/16/32/64-bit integer or vector of 8/16/32/64-bit integer values of length 2/3/4/8/16, but got 'i1'
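The report does not include the Inductor-generated kernel that fails to compile. As a point of reference only, below is a minimal hypothetical Triton kernel (the kernel name, shapes, and launch are my assumptions, not taken from the report) in which two comparisons produce 1-bit boolean values that are then compared again; this is the kind of pattern that could present an 'i1' operand to an unsigned greater-than during TritonGPU-to-SPIR-V lowering, matching the operand type the verifier rejects above.

```python
import torch
import intel_extension_for_pytorch  # noqa: F401  (assumed prerequisite for the "xpu" device)
import triton
import triton.language as tl


@triton.jit
def bool_cmp_kernel(X, Y, Out, n_elements, BLOCK: tl.constexpr):
    # Hypothetical sketch, not the failing kernel from the report:
    # two comparisons yield 1-bit (i1) values, and comparing those
    # booleans again could lower to an unsigned compare on i1 operands.
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(X + offs, mask=mask, other=0.0)
    y = tl.load(Y + offs, mask=mask, other=0.0)
    res = (x > 0) > (y > 0)  # boolean > boolean, i.e. i1 operands
    tl.store(Out + offs, res.to(tl.int8), mask=mask)


# Illustrative launch parameters, chosen arbitrarily for this sketch.
n = 1024
x = torch.randn(n, device="xpu")
y = torch.randn(n, device="xpu")
out = torch.empty(n, dtype=torch.int8, device="xpu")
bool_cmp_kernel[(triton.cdiv(n, 256),)](x, y, out, n, BLOCK=256)
```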

[Environment info]

PyTorch: public PyTorch release 2.1: a545ebf33472f165c20b85a0678a633c2cc3ab30

IPEX: internal master: 091e1e2ae2317fd63da6a0d58d6dd73f9ce76a58

Triton: release 2.1: https://github.com/intel/intel-xpu-backend-for-triton/releases/download/v2.1.0/triton-2.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

[Error log]

"""
Traceback (most recent call last):
  File "/home/sdp/miniconda3/envs/xinanlin/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/home/sdp/xinanlin/pytorch/torch/_inductor/codecache.py", line 1269, in _worker_compile
    kernel.precompile(warm_cache_only_with_cc=cc)
  File "/home/sdp/xinanlin/pytorch/torch/_inductor/triton_heuristics.py", line 174, in precompile
    self.launchers = [
  File "/home/sdp/xinanlin/pytorch/torch/_inductor/triton_heuristics.py", line 175, in <listcomp>
    self._precompile_config(c, warm_cache_only_with_cc)
  File "/home/sdp/xinanlin/ipex/intel_extension_for_pytorch/_inductor/xpu/triton_heuristics.py", line 93, in _precompile_config
    binary = triton.compile(
  File "/home/sdp/miniconda3/envs/xinanlin/lib/python3.10/site-packages/triton/compiler/compiler.py", line 476, in compile
    next_module = compile_kernel(module)
  File "/home/sdp/miniconda3/envs/xinanlin/lib/python3.10/site-packages/triton/third_party/xpu/__init__.py", line 381, in <lambda>
    lambda src: ttgir_to_spirv(src, extern_libs, arch))
  File "/home/sdp/miniconda3/envs/xinanlin/lib/python3.10/site-packages/triton/third_party/xpu/__init__.py", line 35, in ttgir_to_spirv
    spirv_code, share_memory_size = _triton.translate_triton_gpu_to_spirv(str(mod), arch)  # noqa: E501
RuntimeError: Failed to translate TritonGPU to SPIRV IR.
"""
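For context on how Inductor reaches the failing triton.compile call in the traceback, the sketch below shows a compiled workload using the same boolean-comparison pattern as the kernel sketch above. The actual unit test that broke is not named in the report, so the function body is an assumption; only the call chain (torch.compile, Inductor codegen, triton.compile, ttgir_to_spirv) is taken from the log.

```python
import torch
import intel_extension_for_pytorch  # noqa: F401  (assumed: provides the XPU Inductor backend)


def fn(a, b):
    # Comparisons produce torch.bool tensors; a second comparison on those
    # booleans is one pattern that can put i1 operands into the generated kernel.
    return (a > 0) > (b > 0)


compiled = torch.compile(fn)
a = torch.randn(1024, device="xpu")
b = torch.randn(1024, device="xpu")
out = compiled(a, b)  # may fail in triton.compile with the SPIR-V translation error above
```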

vlad-penkin commented 7 months ago

@etaf please retest with the current llvm-target branch.

If it is not reproducible, please close the ticket.
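When retesting, it may help to first confirm which Triton build is actually active in the environment, since the report pins the v2.1.0 wheel; a minimal check (interpretation of the printed values is an assumption, not from the report):

```python
# Quick sanity check of the installed Triton before retesting with an
# llvm-target branch build (assumed to replace the v2.1.0 wheel from the report).
import triton

print(triton.__version__)  # the wheel pinned in the report identifies itself as 2.1.0
print(triton.__file__)     # shows whether a site-packages wheel or a source checkout is in use
```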