-
After following the instructions in [Install IPEX-LLM on Linux with Intel GPU](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html) and executing the code in [Llama3](https:…
-
### Describe the bug
A runtime error occurs when performing int32 matrix multiplication on Arc GPU.
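For reference, an int32 matmul is just exact integer arithmetic, so there is no numerical reason the operation cannot be supported; a pure-Python sketch of the expected semantics (`int_matmul` is a hypothetical helper for illustration only, not part of any library):

```python
def int_matmul(a, b):
    # Exact integer matrix product: result[i][j] = sum_k a[i][k] * b[k][j].
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# Identity times B returns B unchanged.
print(int_matmul([[1, 0], [0, 1]], [[2, 3], [4, 5]]))  # → [[2, 3], [4, 5]]
```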
```python
import torch
import intel_extension_for_pytorch as ipex
a = torch.randint(0, 2, (…
```
-
MACE seems to have [added support](https://github.com/ACEsuit/mace/pull/356) for the "xpu" device (at least for training) via the `intel_extension_for_pytorch` package.
I imagine this could be usef…
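A minimal device-selection sketch for this kind of integration, assuming `intel_extension_for_pytorch` registers the `torch.xpu` namespace on import (falls back to `"cpu"` when the package or device is unavailable):

```python
def pick_device():
    """Return "xpu" when intel_extension_for_pytorch is usable, else "cpu"."""
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401  (registers torch.xpu)
        if torch.xpu.is_available():
            return "xpu"
    except ImportError:
        # torch or ipex not installed: fall back to CPU.
        pass
    return "cpu"

print(pick_device())
```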
-
### 🐛 Describe the bug
float32 dynamic shape cpp wrapper
Columns: suite, name, thread, batch_size_new, speed_up_new, inductor_new, eager_new, compila…
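The speed_up column presumably compares eager latency to inductor (compiled) latency; a hedged sketch of that arithmetic (the function name and millisecond units are assumptions, not taken from the benchmark harness):

```python
def speed_up(eager_ms: float, inductor_ms: float) -> float:
    # Ratio of eager latency to compiled (inductor) latency;
    # values above 1.0 mean torch.compile was faster than eager mode.
    return eager_ms / inductor_ms

print(speed_up(20.0, 8.0))  # → 2.5
```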
-
It is essential to stay up to date with OpenAI Triton, both to get the latest features and to reduce the difficulty of upstreaming our changes to OpenAI Triton.
This ticket is a continuation of:
- #1298
- #1…
-
Triggered by running https://github.com/RadeonOpenCompute/rocm_bandwidth_test in a loop while running https://github.com/ROCm-Developer-Tools/HIP-Examples/tree/master/gpu-burn in a loop.
1x 7900XTX…
-
### 🐛 Describe the bug
# addmm.out, addmv.out, addr, linalg_lstsq, linalg_vector_norm.out, norm.out, vdot & dot lack XPU support and fall back to CPU
"test_addmm_sizes_xpu_complex128",
"tes…
-
### Issue Description
Even though I haven't changed any settings, images are output as if a white filter had been applied to them.
My system
- Windows 11 Pro 23H2
- WSL 2.3.17.0, Ubuntu 22.04.4
- Intel…
-
Merging https://github.com/intel/intel-xpu-backend-for-triton/commit/e4c91aeb43cbc9743272c19002901c37087b7370 causes two matmul regressions:
```
=========================== short test summary info ==…
-
There are 15 cases in test_matmul that fail on PVC with LLVM optimization O3 on the Triton side.
https://github.com/intel/intel-xpu-backend-for-triton/blob/0085bc91c7b0ecad9c98c1ad68e7dfcc1d359d0d/third…