-
With:
* https://github.com/pytorch/pytorch/commit/cd472bb1e368a711a2bd34d5671c77dab336d312
* https://github.com/intel/torch-xpu-ops/commit/c6981a238cdaf93774b1c6a3550a436f530f4736
* https://github.…
-
### Anything you want to discuss about vllm.
vLLM heavily depends on PyTorch, and also actively works with PyTorch team to leverage their new features. When a new PyTorch version comes out, vLLM usu…
-
### 🐛 Describe the bug
We will use this issue to track failures caused by precision issues, which can arise because different compilers or packages have different implementations.
- [ ] https://githu…
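As a minimal illustration (not taken from the linked issues) of how different implementations of the same reduction can legitimately disagree in the last bits, plain left-to-right float summation already differs from a compensated sum:

```python
import math

vals = [0.1] * 10

# Naive left-to-right accumulation rounds after every addition.
naive = sum(vals)

# math.fsum uses compensated (Shewchuk) summation and is exact here.
compensated = math.fsum(vals)

print(naive)        # 0.9999999999999999
print(compensated)  # 1.0
```

Kernel libraries face the same effect at scale: a different accumulation order or fused-multiply-add usage changes low-order bits, which is why such failures are usually handled with tolerances rather than exact comparison.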
-
### Describe the issue
I have an Arc A770 graphics card on an Ubuntu 22.04 host. I am trying to install PyTorch and IPEX using the following command:
`python3 -m pip install torch==2.3.1+cxx11.abi torchvisio…
-
GEMM and FlashAttention are run with a number of env variables; this issue is to minimize them.
As suggested in https://github.com/intel/intel-xpu-backend-for-triton/pull/1877#discussion_r172536119…
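A sketch of the kind of gating such variables imply (the flag names below are hypothetical, for illustration only; the actual variables are listed in the linked review discussion):

```python
import os

def flag(name: str, default: str = "0") -> bool:
    """Read a 0/1 environment flag, falling back to a baked-in default."""
    return os.environ.get(name, default) == "1"

# Minimizing env variables means turning user-set knobs like these into
# compile-time or autotuned decisions with sane defaults.
USE_LARGE_GRF = flag("EXAMPLE_TRITON_LARGE_GRF")        # hypothetical name
ENABLE_PIPELINING = flag("EXAMPLE_TRITON_PIPELINE", "1")  # hypothetical name

print(USE_LARGE_GRF, ENABLE_PIPELINING)
```

Each knob removed is one less configuration users must discover and one less combination CI must cover.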
-
This issue affects XPU enabling for Huggingface - https://github.com/huggingface/transformers/issues/31237#issuecomment-2148067845. See table in this comment for the list of affected examples and mode…
-
It is essential to keep up to date with OpenAI Triton, both to get the latest features and to reduce the difficulty of upstreaming our changes to OpenAI Triton.
This ticket is a continuation of:
- #2244
- #2…
-
The following ops are currently not implemented for the XPU backend and affect performance on select models (efficientnet, fbnet, yolov4, ifrnet, and rife):
- [x] `aten::_prelu_kernel` (ifrnet, rife), h…
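For reference, PReLU itself is a simple elementwise op, which is why falling back to the CPU for it is a needless slowdown; a scalar sketch of the formula (not the actual `aten::_prelu_kernel` implementation):

```python
def prelu(x: float, weight: float) -> float:
    """PReLU: identity for non-negative inputs, a learned slope for negative ones."""
    return x if x >= 0 else weight * x

print(prelu(2.0, 0.25))   # 2.0
print(prelu(-2.0, 0.25))  # -0.5
```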
-
### 🐛 Describe the bug
Details in https://github.com/intel/torch-xpu-ops/actions/runs/10806301704/job/29974962919
- [x] RuntimeError: "im2col_xpu" not implemented for 'Bool'
PYTORCH_TEST_WITH_SLO…
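For context, im2col flattens each sliding window of an image into one row so that convolution becomes a matrix multiply; a minimal pure-Python sketch of the layout (not the XPU kernel):

```python
def im2col(img, kh, kw):
    """Extract each kh x kw sliding patch of a 2-D image as one flat row."""
    h, w = len(img), len(img[0])
    return [
        [img[i + di][j + dj] for di in range(kh) for dj in range(kw)]
        for i in range(h - kh + 1)
        for j in range(w - kw + 1)
    ]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(im2col(img, 2, 2))
# [[1, 2, 4, 5], [2, 3, 5, 6], [4, 5, 7, 8], [5, 6, 8, 9]]
```

Note the operation only copies and rearranges elements, so nothing about it is specific to a numeric dtype; supporting `Bool` is a matter of instantiating the kernel for that type.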
-
Hi, I am running some XPU workloads and found that different compute runtime versions lead to different XPU memory usage.
When using version https://github.com/intel/compute-runtime/releases/tag/23.17.2…