-
I followed the steps from this GitHub link -- https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/GPU/Deepspeed-AutoTP/README.md -- and attempted to verify a 2-GPU inference run on these…
-
### Describe the bug
Missing ATen operator for multi-GPU training.
```
from argparse import Namespace
from timm.models import create_model
from timm import utils
model = create_model(
"…
-
### Describe the issue
While importing torchvision or intel_extension_for_pytorch, the following warning is thrown:
```
E:\Intel\oneAPI\intelpython3\envs\idp\lib\site-packages\torchvision\io\imag…
-
### Describe the issue
When I tried to use a graphics card to train my classification model, I made the following changes to the code:
```
device = 'xpu'
X_tensor = torch.tensor(X_processed, dtype=tor…
```
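The usual pattern for this is to pick the device once and move both the model and the tensors to it. Below is a minimal sketch; `pick_device` is a hypothetical helper, and it assumes an IPEX install where importing `intel_extension_for_pytorch` registers the `'xpu'` backend:

```python
def pick_device():
    """Return 'xpu' when an Intel GPU stack is usable, else fall back to 'cpu'."""
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401  (assumption: registers 'xpu')
        if torch.xpu.is_available():
            return "xpu"
    except (ImportError, AttributeError):
        # torch/IPEX missing, or this build has no torch.xpu namespace
        pass
    return "cpu"

device = pick_device()
# Model and data must be moved to the same device before training, e.g.:
#   model = model.to(device)
#   X_tensor = X_tensor.to(device)
```

On a machine without the Intel GPU stack this silently falls back to `'cpu'`, which makes the training script portable across environments.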
-
- CPU Info:
```
Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
```
- OS Info:
```
OS: CentOS Linux release 7.3.1611 (Core)
Kernel: 3.10.0-1160.1.0.el7.x86_64
```
- Finetune Command
```
pytho…
-
After https://github.com/intel/intel-xpu-backend-for-triton/pull/739 got merged,
we should enable the GEMM test in CI.
-
Hi team, currently the following log is always output to the console when FP16 atomic emulation is used:
```
loc("/tmp/tmpxdeq_pc_/a4/ca4mpl5b3diukcjkbi2xfnufaqqobxjwafffr4bsmbyslkroz6pe.py":32:53): er…
-
We got a Triton crash when running stock PyTorch inductor UT: `RuntimeError: Triton Error [ZE]: 2013265944`
To reproduce the issue using IPEX:
Triton commit: 9dd5125ae5c4dd2bac023d3c13e82501c6b5f5…
-
# Issue Description
For PyTorch upstreaming support, provide a private branch that includes the latest Triton and the SPIR-V-path Triton backend source code.
Why not use llvm-path code?
A:…
-
PyTorch no longer supports `torch._six`; it has been removed.
Refer - https://github.com/pytorch/pytorch/pull/94709
DeepSpeed still depends on it. Example in `runtime/utils.py`…
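A common way to keep such code working on both old and new PyTorch is an import shim. The sketch below assumes the only symbol needed is `inf` (which `torch._six` used to re-export); on newer PyTorch it is just `float("inf")`:

```python
# Compatibility shim for the removed torch._six module.
# Assumption: the calling code (e.g. a gradient-clipping utility) only
# needs `inf` from it.
try:
    from torch._six import inf  # older PyTorch (< 2.0)
except ImportError:
    inf = float("inf")  # drop-in replacement on newer PyTorch

# `inf` compares greater than any finite float, as the old symbol did.
print(inf > 1e308)
```

The `try/except ImportError` form also works when `torch` itself is not installed, since the failed import is caught the same way.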