-
### 🐛 Describe the bug
Log:
```
File "/home/gta/penghuic/pytorch_stock/third_party/torch-xpu-ops/test/xpu/../../../../test/test_content_store.py", line 34, in test_basic
writer.write_tensor(…
```
-
The Triton tutorial 03-matrix-multiplication.py started to fail after a recent software update to oneAPI 2024 with the error "total scratch space exceeds HW supported limit".
ocloc -spirv_input -file matmul_k…
-
```
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
…
```
-
2024-08-19 11:55:58
Started!
CustomVisionEncoderDecoderModel init
CustomMBartForCausalLM init
CustomMBartDecoder init
[08/19 11:56:11 detectron2]: Rank of current process: 0. World size: 1
[08/1…
-
Modify code generation for the 2D block read operation to avoid using multiple address descriptors in a loop:
![image](https://github.com/intel/intel-xpu-backend-for-triton/assets/56368199/4b115b31-ea7…
-
### Describe the issue
**The Issue I am having:**
When attempting to import this library, I am getting the following error:
`import intel_extension_for_pytorch as ipex`
```
--------------------…
```
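Import failures like this usually come down to either the package not being installed in the active interpreter, or a runtime library it links against not being found. A minimal, library-agnostic first check (illustrative only; `intel_extension_for_pytorch` itself is not required to run it) is to ask the interpreter where, if anywhere, it would load the package from:

```python
import importlib.util

def locate(pkg):
    """Return the file a package would be loaded from, or None if the
    active interpreter cannot see the package at all."""
    spec = importlib.util.find_spec(pkg)
    return getattr(spec, "origin", None) if spec else None

# A stdlib module resolves to a concrete path:
print(locate("json"))
# If this prints None, the failure is an install/virtualenv problem,
# not a driver or oneAPI runtime problem:
print(locate("intel_extension_for_pytorch"))
```

If the package resolves but the import still fails, the next thing to inspect is the dynamic loader environment (e.g. whether the oneAPI runtime libraries are on the library search path).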
-
https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/9456830167:
```
softmax-performance:
N Triton-GB/s XeTLA-GB/s Triton-GB/s-min XeTLA-GB/s-min Triton-GB/s-max XeTLA-…
```
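For context, the GB/s columns in tables like this are typically effective bandwidth: total bytes moved divided by kernel time. A small sketch of that arithmetic; the one-read-one-write model and the fp16 element size are my own assumptions, not taken from the benchmark harness:

```python
def softmax_gbps(n_elements, time_ms, dtype_bytes=2, reads=1, writes=1):
    """Effective bandwidth in GB/s: bytes moved divided by elapsed time.

    Assumes the kernel reads the input once and writes the output once
    (fp16 by default); real benchmarks may count additional passes.
    """
    total_bytes = n_elements * dtype_bytes * (reads + writes)
    return total_bytes / (time_ms * 1e-3) / 1e9

# Example: a 4096 x 4096 fp16 tensor processed in 0.25 ms
print(round(softmax_gbps(4096 * 4096, 0.25), 1))  # → 268.4
```

Differences between the min/mean/max columns then reflect run-to-run timing variance, since the byte count is fixed for a given N.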
-
### Describe the bug
I followed the instructions from here:
https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.1.30%2bxpu&os=windows&package=pip
Executing '…
-
The LLVM path is based on the latest Triton code (after the Triton refactor), while PyTorch (master branch) is using the Triton commit pinned in CI: e28a256d71f3cf2bcc7b69d6bda73a9b855e385e ([ci_commit_pins/triton.txt](https://github…
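The pin mechanism behind that mismatch can be sketched as follows; the file path and checkout command are illustrative, not PyTorch's actual build script:

```shell
# Illustrative only: the CI pin file holds a single Triton commit SHA,
# and the build checks out exactly that revision rather than HEAD.
echo "e28a256d71f3cf2bcc7b69d6bda73a9b855e385e" > /tmp/triton_pin.txt
pin="$(cat /tmp/triton_pin.txt)"
# A build script would then do something like:
#   git -C third_party/triton checkout "$pin"
echo "pinned Triton commit: ${pin}"
```

So a backend built against latest Triton can diverge from what PyTorch CI tests until the pin is bumped.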
-
### Describe the bug
Trying to run a PyTorch case with the IPEX v2.1.30-xpu and oneAPI 2024.1 releases on a Max 1550 GPU.
After setting up the environment following the guide at https://intel.github.io/intel-exte…