-
Hi team, we found an Inductor UT regression caused by a Triton update: `NotImplementedError: elapsed_time is not supported by XPUEvent`.
This is caused by this commit: https://github.com/intel/intel-xpu…
-
If I use `cpu`, I get an error that [intel-extension-for-pytorch](https://github.com/intel/intel-extension-for-pytorch) is missing.
If I use `xpu`, I get this error:
```
2024-03-14 06:15:45 2024-03-14 05…
-
## 🚀 Feature
This RFC proposes adding a new user-visible 'XPU' device type and the corresponding Python device runtime API to PyTorch.
XPU is a device abstraction for Intel heterogeneous computa…
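As a rough illustration, the user-visible runtime API proposed here would mirror the existing `torch.cuda` module. A minimal sketch, assuming an XPU-enabled PyTorch build for the `torch.xpu` calls (the code guards them and falls back to CPU otherwise):

```python
import torch

def pick_device() -> torch.device:
    """Prefer an XPU device when the build supports it, else fall back to CPU."""
    # torch.xpu only exists on XPU-enabled builds, so guard the lookup.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu", torch.xpu.current_device())
    return torch.device("cpu")

device = pick_device()
# Tensors are placed on the selected device the same way as with CUDA.
x = torch.ones(2, 2, device=device)
print(x.sum().item())  # 4.0 on either backend
```

The point of the abstraction is exactly this symmetry: user code selects `'xpu'` the same way it selects `'cuda'`, without backend-specific branches beyond device selection.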
-
Hi team,
I want to release the associated memory by deleting the model variable after `model.generate`, but it does not work as I expect.
The demo code is as below:
```python
import torch
import time
import n…
```
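For reference, `del` alone only drops a Python reference; releasing device memory usually also needs a garbage-collection pass and a cache flush. A sketch, with the `torch.xpu.empty_cache` call guarded because it only exists on XPU-enabled builds:

```python
import gc
import torch

# Stand-in for the generated model; any nn.Module placed on the device works.
model = torch.nn.Linear(256, 256)

del model      # drop the last Python reference to the module
gc.collect()   # collect reference cycles that may still pin its tensors
# Return cached allocator blocks to the driver (XPU builds only).
if hasattr(torch, "xpu") and torch.xpu.is_available():
    torch.xpu.empty_cache()

released = "model" not in globals()
print(released)  # True: the name is gone from this scope
```

Note that the caching allocator holds freed blocks for reuse, so memory reported by system tools may not drop until the cache is explicitly emptied.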
-
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/blob/76259ebfdd83389eeb5735e76f66fd2ad84a9671/aten/src/ATen/native/AdaptiveAveragePooling.cpp#L120
When output_size == 1, C…
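For context, when `output_size == 1` adaptive average pooling reduces to a plain mean over the spatial dimensions, which is the case the linked code handles. A small check:

```python
import torch
import torch.nn.functional as F

# With output_size == 1, adaptive average pooling averages the whole
# spatial plane, so it matches a mean over dims (H, W).
x = torch.randn(2, 3, 8, 8)
pooled = F.adaptive_avg_pool2d(x, 1)        # shape (2, 3, 1, 1)
mean = x.mean(dim=(2, 3), keepdim=True)
print(torch.allclose(pooled, mean))  # True
```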
-
# Motivation
As mentioned in [[RFC] Intel GPU Upstreaming](https://github.com/pytorch/pytorch/issues/114723), the Intel GPU runtime is the cornerstone for supporting other features. Technically, to support…
-
When I run the `generate.py` script, I get the following error:
```bash
python ./generate.py --repo-id-or-model-path 'google/codegemma-7b-it' --prompt 'Write a hello world program in Python'…
-
gradcheck failures: compare with CPU first; if the result aligns with CPU, the failure is low priority.
- [ ] `test_fn_grad_linalg_norm_xpu_complex128`: report oneDNN issue: Double and complex datatype matmul is not supp…
-
I have set up ipex-llm by following [install ipex-llm for llamacpp]( https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md#1-install-ipex-llm-for-llamacpp…
-
### 🐛 Describe the bug
```python
import torch
assert torch.xpu.is_available(), "Intel XPU is not available"
batch_size = 4
vocab_size = 4
# RuntimeError: Required aspect fp64 is not suppor…