-
This is the GEMM Performance features productization umbrella ticket. Before converting this ticket into an umbrella ticket, please:
- Add the step-by-step GEMM Performance features productization plan here. …
-
https://github.com/intel/intel-xpu-backend-for-triton/pull/1282 added `intel::mangle` in `Mangle.h`, introducing simple function name mangling. `TritonGENToLLVMPass.cpp` does not use it, instead using a per …
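For readers unfamiliar with the concept, a toy Itanium-style mangler illustrates the general idea behind function name mangling. This is only a sketch under assumed conventions; it does not show the actual `intel::mangle` implementation from `Mangle.h`, whose scheme may differ:

```python
# Toy Itanium-style name mangler: "_Z" + length of the name + the name +
# one code per argument type. Illustrative only; not the scheme used by
# intel::mangle.
TYPE_CODES = {"void": "v", "int": "i", "float": "f", "double": "d"}

def mangle(name: str, arg_types: list[str]) -> str:
    # An empty parameter list is encoded as "v" (void), per the Itanium ABI.
    codes = "".join(TYPE_CODES[t] for t in arg_types) or "v"
    return f"_Z{len(name)}{name}{codes}"

print(mangle("barrier", ["int"]))  # prints _Z7barrieri
```

Encoding the argument types into the symbol name is what lets several overloads of the same function coexist in one module.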
-
When I go to use the `generate.py` script, I get the following error:
```bash
python ./generate.py --repo-id-or-model-path 'google/codegemma-7b-it' --prompt 'Write a hello world program in Python'…
```
-
### Describe the bug
On a Windows iGPU, I tried to run LLM inference with `ipex=2.1.30+xpu` and `oneapi=2024.1`, but it failed. **I waited for more than 1 hour but it is still pending here.**
![image](https://…
-
With release 0.3.0, I am unable to get mpi4jax to run. I am using this branch from an Intel-forked mpi4jax: https://github.com/jczaja/mpi4jax/tree/jczaja/xpu-support. This is running on Argonne's Su…
-
Hi,
I have tried to install intel-xpu-backend-for-triton on several machines, but I have not been able to get it installed and working. These are the configurations I have tried:
…
-
I am trying to run the TTS (English and Multi Language Text-to-Speech) pipeline on my PC.
https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/pipeline…
-
@fengyuan14 - The commit https://github.com/intel/torch-xpu-ops/commit/5bf9e0cc768f7a3b13d829118683275f324399f1 muted the debug logs for "explicit" CPU fallbacks. This complicates debugging for 3rd-party contri…
-
1. Find all the `pytest.skip()` calls added by our team and comment them out.
2. Run all the tests; some may pass, some may fail.
3. Check the failed cases and see if they are already being track…
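Step 1 above can be partially automated. A minimal sketch, assuming the tests are Python files under a single root directory (`find_skips` is a hypothetical helper, not part of the repository):

```python
import re
from pathlib import Path

def find_skips(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_no, text) for every pytest.skip(...) call under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            # Match a pytest.skip( call anywhere on the line.
            if re.search(r"\bpytest\.skip\(", line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Commenting the calls out can then be scripted over the returned locations. Note this sketch does not distinguish skips added by our team from upstream ones; that still needs `git blame` or a manual pass.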