-
### 🐛 Describe the bug
Passing a complex number to `torch.asin` returns incorrect output on CPU, even though the GPU output matches that of NumPy ([repro in colab](https://colab.research.google.com/dri…
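A minimal sketch of that kind of comparison (the input value here is an arbitrary choice, not taken from the report; an argument with |Re| > 1 forces a genuinely complex answer, so it exercises the complex branch of both libraries):

```python
import numpy as np
import torch

# arcsin(1.5) has no real solution, so both libraries must take
# the complex principal branch here.
z = 1.5 + 0.0j
cpu = torch.asin(torch.tensor(z, dtype=torch.complex128))  # CPU result
ref = np.arcsin(np.complex128(z))                          # NumPy reference

print(cpu.item(), ref)  # compare the CPU result against NumPy
```

On an affected build, the two printed values disagree on CPU while the CUDA result matches NumPy.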
-
### Your current environment
Launch command: python -m vllm.entrypoints.openai.api_server --model /opt/llm_models/Qwen1.5-32B-Chat-GPTQ-Int4 --quantization gptq --max-model-len 16384 --port 8888 --gpu-memory-ut…
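For reference, a chat request against that server (port 8888 and model path taken from the launch command above; the endpoint and field names follow the OpenAI-compatible API that `vllm.entrypoints.openai.api_server` exposes) could be assembled like this:

```python
import json

# Target the server started above; the "model" field must match the
# --model path the server was launched with.
url = "http://localhost:8888/v1/chat/completions"
payload = {
    "model": "/opt/llm_models/Qwen1.5-32B-Chat-GPTQ-Int4",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}
body = json.dumps(payload)
print(url)
print(body)
```

The printed body can be POSTed with any HTTP client, e.g. `curl -X POST "$url" -H 'Content-Type: application/json' -d "$body"`.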
-
Hello,
We have been testing an upgrade from PCluster 3.8.0 to 3.11.0 and, after extensive testing of our applications, noticed some differences that impact performance. We run hybrid MPI-OpenMP appli…
-
### 🐛 Describe the bug
When I ran a large training job using the new `decode_image`, I noticed increasing memory usage on directories with WebP files until the process crashed.
Later I tried the following:
…
-
https://tone.aliyun-inc.com/ws/xesljfzh/test_result/398270
[Environment setup]
```
BINARY_URL=oss://dragonwell/21.0.5.0.5+9-test-dragonwell_extended/Alibaba_Dragonwell_Extended_21.0.5.0.5.9_x64_linux.tar.g…
-
### Version
Microsoft Windows [Version 10.0.22621.674]
### WSL Version
- [X] WSL 2
- [X] WSL 1
### Kernel Version
5.10.16.3-microsoft-standard-WSL2
### Distro Version
Ubuntu 22.04.1 LTS
### Ot…
-
### Your current environment information
libibverbs not available, ibv_fork_init skipped
Collecting environment information...
PyTorch version: 2.1.1+cu121
Is debug build: False
CUDA used to build…
-
### 🐛 Describe the bug
I have a small script showing how combining a toy model with the following three features leads to an error:
1. torch.compile
2. FSDP1 with cpu offloading
3. PyTorch …
-
### 🐛 Describe the bug
I have tried to load llava.pte and tokenizer.bin generated from `python -m executorch.examples.models.llava.export_llava --pte-name llava.pte --with-artifacts` in LlamaDemo, …
-
With all the 79X0X3D processors out there, is it possible to update the setup guide with some examples on how to:
- Prevent Linux Kernel from running on the V-Cache cores
- Setup isolation and pinni…
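A sketch of the kind of recipe being asked for. The core IDs are an assumption: on a 7950X3D the V-Cache CCD commonly maps to cores 0-7 with SMT siblings 16-23, but the mapping must be verified per system (e.g. with `lscpu -e` and the L3 cache sizes under `/sys`):

```shell
# Append to the kernel command line (e.g. GRUB_CMDLINE_LINUX_DEFAULT) to keep
# the scheduler, timer ticks, and RCU callbacks off the V-Cache cores.
# Core ranges are assumptions; confirm with:
#   lscpu -e
#   cat /sys/devices/system/cpu/cpu*/cache/index3/size
isolcpus=0-7,16-23 nohz_full=0-7,16-23 rcu_nocbs=0-7,16-23

# Then pin a cache-sensitive workload onto the isolated cores
# (./my_app is a placeholder):
taskset -c 0-7,16-23 ./my_app
```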