-
-
Hello,
I'm hitting a problem similar to the issue below:
https://github.com/intel/intel-extension-for-tensorflow/issues/51
Here is the report:
```
(venv) # python -c "import intel_extension_for_tensorflow…
-
The original BLIP2-OPT-6.7B model takes more than 30 GB of RAM to load and convert, so I want to save the compressed model once and then load it directly on another PC with limited RAM. The saving succeeded, but…
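A minimal torch-level sketch of that save-then-reload workflow, assuming the compressed weights can be serialized as an ordinary state dict (the file name and the tiny stand-in module are hypothetical, not the actual BLIP2 conversion):

```python
import torch
import torch.nn as nn

# On the big-RAM machine: convert once, then serialize only the weights.
model = nn.Linear(16, 4)  # stand-in for the converted/compressed model
torch.save(model.state_dict(), "compressed_model.pt")

# On the limited-RAM machine: rebuild the architecture and load the
# weights straight onto CPU, never materializing the original-size model.
model2 = nn.Linear(16, 4)
state = torch.load("compressed_model.pt", map_location="cpu")
model2.load_state_dict(state)
```

With Hugging Face models the analogous knob is `from_pretrained(..., low_cpu_mem_usage=True)`, which avoids a second full-precision copy during loading.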
-
I'm trying to run DDP on five RTX 3090s with `OMP_NUM_THREADS=4 WORLD_SIZE=5 torchrun --nproc_per_node=5 --master_port=1234 finetune.py`, and I'm getting the following error:
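For reference, a minimal self-contained script of the shape `torchrun` expects (the model and data are placeholders; the single-process CPU/gloo defaults are filled in only so the sketch runs standalone, outside `torchrun`):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# torchrun normally injects these; the defaults let the sketch run as a
# single CPU process with the gloo backend.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29507")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")

dist.init_process_group(backend="gloo")  # use "nccl" on the 3090s

model = torch.nn.Linear(8, 2)            # placeholder for the real model
ddp_model = DDP(model)                   # on GPU: DDP(model, device_ids=[local_rank])

opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
loss = ddp_model(torch.randn(4, 8)).sum()
loss.backward()                          # gradients are all-reduced across ranks here
opt.step()

dist.destroy_process_group()
```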
```
Map: 81%|███████████…
-
### 🐛 Describe the bug
torchbench_amp_fp16_training
xpu train torch_multimodal_clip
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dyna…
-
### 🐛 Describe the bug
# RuntimeError: Double and complex datatype matmul is not supported in oneDNN
"test_comprehensive_nn_functional_linear_xpu_bfloat16",
"test_comprehensive_nn_funct…
-
With the xpu support landed in huggingface (use https://github.com/huggingface/accelerate/commit/3b5a00e048f4393398d8ea8c4f468857f595f039 and https://github.com/huggingface/transformers/commit/eed9ed6…
-
### Issue Description
When compiling the Paddle-Inference-Demo inference files against the Paddle inference package built for XPU, the following error occurs:
```
1. check paddle_inference exists
2. check CMakeLists exists
3. compile
4. cmake
CMake D…
-
### System Info
```Shell
- `Accelerate` version: 0.33.0
- Platform: Linux-4.18.0-348.el8.0.2.x86_64-x86_64-with-glibc2.28
- `accelerate` bash location: /mntcephfs/lab_data/taiyunpeng/.conda/envs/gp…
-
When I use PyInstaller to package and run a demo, the .exe process exits at the line
`model = model.to('xpu')`
without any error report. The demo uses intel-llm, torch, and so on.
There is no p…
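To get at least a low-level traceback out of a silently dying frozen build, the stdlib `faulthandler` module can be enabled before the offending line (the log file name is arbitrary):

```python
import faulthandler

# Write low-level tracebacks to a file: a frozen .exe may have no console,
# so sys.stderr can be missing or unusable there.
log = open("crash_log.txt", "w")
faulthandler.enable(file=log)

# ... later, around the line that kills the process:
# model = model.to('xpu')

# For demonstration, dump the current Python stack into the log.
faulthandler.dump_traceback(file=log)
log.flush()
```

If the process dies in native code (driver, oneAPI runtime), the log will still contain the Python stack that was active at crash time, which narrows down the failing call.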