-
oneCCL commit: 5e7c7b7e33f5f679cb82547c4f7e49623ff0ab09
build: cmake .. -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCOMPUTE_BACKEND=dpcpp
run command: examples/sycl$ mpirun -n 2 ./sycl_allr…
-
### Describe the bug
Original issue: https://github.com/oneapi-src/oneDNN/issues/1611
I am compiling PyTorch from source following the instructions at
https://intel.github.io/intel-extens…
-
Failed to run intel-extension-for-tensorflow/tree/main/examples/train_maskrcnn on PVC (GPU Max 1550).
The errors below show "Can not found any devices." and "Failed precondition: No visible XPU devic…
-
The TGI image tagged "text-generation-inference:latest-intel-cpu" fails to start with "Intel/neural-chat-7b-v3-3" after upgrading to the build with "Created": "2024-08-20T20:17:15.74262894…
-
I've encountered this issue when trying to build a chatbot from a Python file. Here's my code, copied from a Jupyter notebook:
```python
from intel_extension_for_transformers.neural_chat import Pipel…
-
**Environment:**
1. Framework: TensorFlow
2. Framework version: 2.4
3. Horovod version: 0.20.0
4. MPI version:
5. CUDA version: N/A
6. NCCL version: N/A
7. Python version: 3.7
8. Spark / PySp…
-
### Describe the bug
Repeated calls to `torch.distributed.reduce_scatter_tensor` eventually raise a
`ZE_RESULT_ERROR_OUT_OF_DEVICE_MEMORY` error in multi-node setups. Similar behavior occurs when
usin…
-
We have seen a significant performance drop when serving the neural-chat model with vLLM in an environment created from the latest repo, compared to the old environment built from the old repo. With …
-
### Describe the bug
I got this runtime error while inferring speech on XPU. Without the device parameter, it works fine on CPU or an NVIDIA GPU on Colab. Please feel free to check my notebook.
chat = C…
-
I am trying to run the "Synthesizing speech by TTS" example:
https://docs.coqui.ai/en/latest/
(llm) spandey2@imu-nex-sprx92-max1-sut:~/1worldsync_finetuning$ cat tts.py
```python
import torch
from TTS.api import…