-
### Describe the bug
I am getting an issue when running the code below using ipex-llm:
```
(llm_vision) spandey2@IMU-NEX-ADLP-voice-SUT:~/LLM_Computer_Vision$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.4 L…
```
-
### 🐛 Describe the bug
```
import torch
assert torch.xpu.is_available(), "Intel XPU is not available"
batch_size = 4
vocab_size = 4
out = torch.randn(batch_size, vocab_size, dtype=to…
```
-
-
This issue has been fixed downstream:
https://github.com/intel/intel-xpu-backend-for-triton/pull/835
The fix needs to be upstreamed to Triton.
-
### 🐛 Describe the bug
# TL;DR
1. We should use the safer data_ptr accessor APIs. The newer templated APIs have additional checks. If possible, use `tensor.mutable_data_ptr()` and `tensor.const_data_ptr(…
-
HW platform: Xeon W + 4× Arc workstation
docker image: intelanalytics/ipex-llm-serving-xpu:2.1.0b
Serving start commands:
```
# cat start_Qwen1.5-32B-Chat_serving.sh
#!/bin/bash
model="/llm/models/Qwen1…
```
-
### Describe the issue
I am trying to synthesize speech with TTS:
https://docs.coqui.ai/en/latest/
I have managed to run the TTS code below on XPU, but it takes 23 seconds, seriously, it tak…
-
## Describe the bug
After setting up the development environment for Meteor Lake, running `npm run dev` fails.
The error is caused by basicsr, which is using a deprecated import for function…
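The issue text is truncated, but a generic workaround for this class of failure, where a library imports a module path that a newer dependency has removed or renamed, is to alias the legacy module path to its current location before the library is imported. A minimal sketch (the torchvision module names in the usage comment are an assumption, since the exact import is cut off above):

```python
import importlib
import sys

def alias_module(legacy_name: str, current_name: str):
    """Make `import legacy_name` resolve to the module `current_name`.

    Registering the target module under the legacy name in sys.modules
    means any later `import legacy_name` (e.g. inside basicsr) succeeds
    without patching the library's source.
    """
    module = importlib.import_module(current_name)
    sys.modules[legacy_name] = module
    return module

# Hypothetical usage (names assumed, not confirmed by the truncated report),
# to be run before importing basicsr:
# alias_module("torchvision.transforms.functional_tensor",
#              "torchvision.transforms.functional")
```

This keeps the workaround in the application's startup code rather than editing the installed package.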
-
`torch.xpu.get_device_capability` returns a dict, e.g. `{'max_work_group_size': 1024, 'max_num_sub_groups': 64, 'sub_group_sizes': [16, 32]}`, but Triton (llvm-target) expects an `int`: https://github.com…
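A minimal sketch of the kind of adaptation needed on the Triton side (illustrative only; the function name is hypothetical, and the dict shape is taken from the example above):

```python
def device_max_work_group_size(caps) -> int:
    """Collapse the device capability value to the single int that
    Triton's llvm-target backend expects.

    Accepts either the dict returned by torch.xpu.get_device_capability()
    (shape taken from the example in the issue) or an already-plain int.
    """
    if isinstance(caps, dict):
        return int(caps["max_work_group_size"])
    return int(caps)

# Example capability dict, copied from the issue text:
caps = {"max_work_group_size": 1024,
        "max_num_sub_groups": 64,
        "sub_group_sizes": [16, 32]}
print(device_max_work_group_size(caps))  # prints 1024
```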
-
The Inductor UT `test/inductor/test_triton_heuristics.py:test_artificial_zgrid`, which was previously skipped, was recently enabled by the PyTorch community (https://github.com/pytorch/pytorch/pull/127448). The…
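For context on what an artificial zgrid test exercises (an illustration, not the actual Triton/Inductor code): launch grids have a per-dimension size limit, so an oversized flat block count must be folded into the y and z grid dimensions. A hedged sketch, assuming a per-dimension cap of 65535:

```python
import math

def fold_grid(n_blocks: int, max_per_dim: int = 65535):
    """Fold a flat block count into an (x, y, z) grid where every
    dimension stays within max_per_dim. The product of the returned
    dimensions is >= n_blocks, so each logical block gets launched;
    a kernel would recover its flat index from (x, y, z) coordinates.
    """
    x = min(n_blocks, max_per_dim)
    y = min(math.ceil(n_blocks / x), max_per_dim)
    z = math.ceil(n_blocks / (x * y))
    assert z <= max_per_dim, "grid too large even for 3 dimensions"
    return x, y, z
```

Small grids stay one-dimensional, e.g. `fold_grid(100)` gives `(100, 1, 1)`, while counts above the cap spill into y and then z.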