-
Nightly and released wheels should support both ABI=0 and ABI=1 starting from the end of April.
-
Does the FastChat framework support multi-NPU inference? I changed the value of num_gpus to 4, but after the model loads, it is not distributed evenly across the cards.
-
We have a bot raising PRs on a regular basis: https://github.com/intel/intel-xpu-backend-for-triton/actions/workflows/auto-update-translator-cid.yml; it uses a PAT (personal access token) from my a…
-
`torch.utils.data.DataLoader(pin_memory_device='xpu')` is currently not supported by the upstream PyTorch XPU backend. I know that this feature was supported with IPEX. Please support the feature if it…
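As a sketch of the requested usage (assuming the existing `pin_memory_device` string argument that `DataLoader` already accepts; the issue is that the upstream XPU backend does not yet honor it at iteration time), construction would look like this. Only construction is shown, since actually pinning the batches requires XPU support:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Tiny in-memory dataset for illustration.
ds = TensorDataset(torch.arange(8, dtype=torch.float32))

# Requested behavior: pin host-side batches for the XPU backend so that
# host-to-device copies can be asynchronous.
loader = DataLoader(ds, batch_size=4, pin_memory=True, pin_memory_device="xpu")
print(loader.pin_memory_device)
```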
-
### Search before asking
- [X] I have searched the existing questions and found no related answer.
### Please ask your question
```shell
λ 3cbd864a9187 /home/PaddleSeg export CUDA…
```
-
The [Heterogeneous parallel programming with open standards using oneAPI and Data Parallel C++](https://www.w3.org/2020/06/machine-learning-workshop/talks/heterogeneous_parallel_programming_with_open_…
-
HW platform: Xeon W + 4x Arc workstation
docker image: intelanalytics/ipex-llm-serving-xpu:2.1.0b
Serving start commands:
```shell
# cat start_Qwen1.5-32B-Chat_serving.sh
#!/bin/bash
model="/llm/models/Qwen1…
```
-
### 🐛 Describe the bug
torchbench_bfloat16_training
xpu train squeezenet1_1
E0626 09:48:28.341000 140268361156416 torch/_dynamo/utils.py:1478] RMSE (res-fp64): 0.06469, (ref-fp…
-
### 🚀 The feature, motivation and pitch
A UT case fails because of the CPU nll_loss2d backward. We should re-try this UT once the XPU nll_loss2d op is implemented.
### Alternatives
_No response_
### Additional …
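For reference, a minimal CPU-only sketch of the op the UT exercises (standard `torch.nn.functional` calls; for 4-D input `nll_loss` dispatches to the nll_loss2d kernels, so `backward()` below hits the nll_loss2d backward in question):

```python
import torch
import torch.nn.functional as F

# Spatial NLL loss: 4-D input (N, C, H, W) with a 3-D class-index target
# (N, H, W) routes to the nll_loss2d forward/backward kernels.
logits = torch.randn(2, 3, 4, 4, requires_grad=True)
log_probs = F.log_softmax(logits, dim=1)
target = torch.randint(0, 3, (2, 4, 4))

loss = F.nll_loss(log_probs, target)
loss.backward()  # exercises the nll_loss2d backward path
```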
-
### 🐛 Describe the bug
In TestMathBitsXPU, 200 cases in total fail with `RuntimeError: Double and complex datatype matmul is not supported in oneDNN`.
ONEDNN_VERBOSE=2 PYTORCH_ENABLE_XPU_FALLBACK=1 PYTORCH_TE…
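A minimal CPU sketch of the dtype pattern behind the error (an assumption about the failing cases, not the actual UT code): oneDNN-backed XPU matmul rejects float64 and complex inputs, while the CPU path handles both, which is what `PYTORCH_ENABLE_XPU_FALLBACK=1` falls back to.

```python
import torch

# Double-precision and complex matmuls succeed on CPU, but the oneDNN-backed
# XPU matmul rejects these dtypes ("Double and complex datatype matmul is
# not supported in oneDNN").
a = torch.randn(3, 3, dtype=torch.float64)
b = torch.randn(3, 3, dtype=torch.complex128)

print((a @ a).dtype)  # float64 matmul on CPU
print((b @ b).dtype)  # complex matmul on CPU
```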