-
When I use PyInstaller to package and run the Python demo code, the .exe process exits at
`model = model.to('xpu')`
There is no problem running the demo directly with Python.
Does anyone know …
-
Hi team,
I want to release the associated memory via `del model` after the model generates, but it does not work as I expect.
The demo code is below:
import torch
import time
import n…
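Since the snippet above is truncated, here is a torch-free sketch of the release pattern the issue is after: drop every reference to the object, force a garbage-collection pass, and verify the object is really gone. The function and class names are mine; with a real XPU model you would additionally call `torch.xpu.empty_cache()` afterwards (assuming a PyTorch/IPEX build that exposes `torch.xpu`) to hand cached device memory back to the driver.

```python
import gc
import weakref

class DummyModel:
    """Stand-in for a large model so the sketch runs without torch."""
    pass

def release(name, namespace):
    """Drop the named reference, force a GC pass, and report whether
    the object was actually freed.

    With a real model on XPU you would follow this with
    torch.xpu.empty_cache() (assumption: build exposing torch.xpu).
    """
    obj = namespace.pop(name)
    probe = weakref.ref(obj)   # weak reference: does not keep obj alive
    del obj
    gc.collect()
    return probe() is None     # True once no references remain

namespace = {"model": DummyModel()}
print(release("model", namespace))  # → True
```

The key point for the issue: `del model` removes only one name. Any other live reference (an output tensor keeping the graph alive, a closure, a stored exception traceback) will keep the memory pinned no matter how often the cache is emptied.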
-
Error occurred when executing IPAdapter:
Could not run 'aten::_upsample_bicubic2d_aa.out' with arguments from the 'XPU' backend. This could be because the operator doesn't exist for this backend, or…
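This error means the operator has no kernel registered for the XPU backend. A common workaround is to route just that op through the CPU and move the result back to the device. The sketch below uses a fake tensor class so it runs without torch; with real tensors, `.to('cpu')` / `.to('xpu')` work the same way (assuming the op accepts CPU inputs).

```python
def cpu_fallback(op, tensor, device):
    """Run `op` on a CPU copy and move the result back to `device`.

    A common workaround when an op (here aten::_upsample_bicubic2d_aa)
    lacks an XPU kernel. Works with anything exposing .to(), including
    real torch tensors.
    """
    return op(tensor.to("cpu")).to(device)

class FakeTensor:
    """Minimal stand-in so the sketch runs without torch."""
    def __init__(self, data, device="xpu"):
        self.data, self.device = data, device
    def to(self, device):
        return FakeTensor(self.data, device)

doubled = cpu_fallback(lambda t: FakeTensor([x * 2 for x in t.data], t.device),
                       FakeTensor([1, 2]), "xpu")
print(doubled.device, doubled.data)  # → xpu [2, 4]
```

The round trip costs two host-device copies per call, so it is a stopgap for a single missing op, not a general strategy.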
-
### Describe the bug
I tried `torch.linalg.svd` on a Max Series GPU using the Intel DevCloud and packages from the `intel` conda channel, and while I cannot reproduce the segfault, the performance on …
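Performance comparisons like this are easy to skew with one-time costs (kernel compilation, cache warm-up). A minimal, torch-free timing harness that discards warm-up runs and reports the median might look like the following; the function name and defaults are mine, and on a real device you would also synchronize (e.g. `torch.xpu.synchronize()`, assuming it is available) before stopping the clock.

```python
import statistics
import time

def bench(fn, repeats=5, warmup=1):
    """Median wall-clock time of `fn` over `repeats` runs.

    Warm-up runs are executed but discarded so one-time costs
    (JIT compilation, allocator cache fills) don't skew the result.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

median_s = bench(lambda: sum(range(10_000)))
print(f"median: {median_s:.6f} s")
```

The median is preferred over the mean here because a single slow outlier run (page faults, frequency scaling) would otherwise dominate the average.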
-
### Describe the bug
When following README.md to install the GPU version of IPEX:
```
python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https:…
-
On NVIDIA GPUs, there is a relationship between `nvidia-smi` and PyTorch: `nvidia-smi`, which is analogous to `xpu-smi`, is used to detect devices and monitor GPU telemetry. However, the absence of `nvidia-smi` on the …
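When telemetry tooling is not guaranteed to be installed, a defensive probe avoids hard failures. A sketch, assuming `xpu-smi` is on `PATH` when present (`discovery` is the xpu-smi subcommand that lists devices); on hosts without the tool the caller gets `None` instead of a crash:

```python
import shutil
import subprocess

def list_xpu_devices():
    """Return the output of `xpu-smi discovery`, or None if xpu-smi
    is not installed.

    Mirrors the common habit of shelling out to nvidia-smi for
    telemetry, but degrades gracefully on machines without the tool.
    """
    if shutil.which("xpu-smi") is None:
        return None
    result = subprocess.run(["xpu-smi", "discovery"],
                            capture_output=True, text=True, check=False)
    return result.stdout

devices = list_xpu_devices()
print("xpu-smi available:", devices is not None)
```

For in-process queries, recent PyTorch XPU builds also expose `torch.xpu.is_available()` and `torch.xpu.device_count()` (assumption: a build with XPU support), which avoid shelling out entirely.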
-
### 🐛 Describe the bug
torchbench_amp_bf16_training
xpu train opacus_cifar10
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benc…
-
### Describe the issue
Similar to #428, I tried `torch.linalg.eigh` on a Max Series GPU using the Intel DevCloud and packages from the `intel` conda channel; the performance on XPU is not much bett…
-
### Describe the issue
Title. Once the first pass is complete, subsequent passes run faster. This seems to happen only when the device is set to XPU.
XPU:
CPU:
Env…
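A slow first pass followed by fast repeats is the classic signature of one-time work (e.g. kernel compilation) being cached. The torch-free sketch below mimics that behavior with `functools.lru_cache`; the 0.05 s sleep is an invented stand-in for compile time, not a measured figure.

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def compile_kernel(shape):
    """Stand-in for one-time JIT compilation (hypothetical cost)."""
    time.sleep(0.05)          # simulated compile time
    return f"kernel-{shape}"

def run(shape):
    compile_kernel(shape)     # cached after the first call per shape

t0 = time.perf_counter(); run((64, 64)); first = time.perf_counter() - t0
t0 = time.perf_counter(); run((64, 64)); second = time.perf_counter() - t0
print(first > second)  # → True: only the first pass pays the cost
```

If the real slowdown is compilation, the usual mitigation is a warm-up pass before timing (or before serving traffic), exactly as the cached second call shows here.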
-
`ZE_FLAT_DEVICE_HIERARCHY=FLAT`
![softmax-performance](https://github.com/user-attachments/assets/a108a666-f100-4ad2-b70c-12f4e5709ab2)
`ZE_FLAT_DEVICE_HIERARCHY=COMPOSITE`
![softmax-performance]…