-
### Your current environment
The output of `python collect_env.py`
```text
Model: Qwen2.5-72B-Instruct
vLLM version: 0.5.5
Machine: 8x L40 GPUs
Input: 15000 tokens
Output: 15000 tokens
Concurrency: 5
Error encountered:
INFO: Shutting down
…
```
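For reference, a minimal sketch of driving a comparable long-context workload through vLLM's offline Python API on 8 GPUs; the model path, context length, and sampling settings below are assumptions rather than values taken from this report.

```python
# Sketch only: mirrors the reported shape of the workload
# (15k-token prompts, 15k-token generations, 5 concurrent requests,
# tensor parallelism across 8 L40 GPUs).
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-72B-Instruct",  # assumed Hugging Face model id
    tensor_parallel_size=8,
    max_model_len=32768,  # must cover ~15k in + ~15k out (assumption)
)

sampling = SamplingParams(temperature=0.7, max_tokens=15000)

prompts = ["<roughly 15000-token prompt goes here>"] * 5
for request_output in llm.generate(prompts, sampling):
    print(len(request_output.outputs[0].token_ids))
```

When serving vLLM 0.5.5 through the OpenAI-compatible API server instead, the same settings correspond to the `--tensor-parallel-size` and `--max-model-len` flags.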
-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
Command used to start vLLM:
```yaml
command: ["python3", "-m", "vllm.entrypoints.opena…
```
-
Hello there. I am using the newly released torchrec for our model training. I use VBE to reduce data duplication in the embedding lookup and in the communication during the forward pass.
Specifically, …
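For context, a minimal sketch of the plain torchrec embedding lookup that a VBE setup builds on; the table sizes, feature names, and batch contents below are made up for illustration, and the VBE-specific per-key batch sizes are only noted in a comment.

```python
import torch
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

# Two hypothetical sparse features, each backed by its own table.
ebc = EmbeddingBagCollection(
    device=torch.device("cpu"),
    tables=[
        EmbeddingBagConfig(name="t_user", embedding_dim=16,
                           num_embeddings=1000, feature_names=["user_id"]),
        EmbeddingBagConfig(name="t_item", embedding_dim=16,
                           num_embeddings=1000, feature_names=["item_id"]),
    ],
)

# Batch of 2 samples: "user_id" has 1 id per sample, "item_id" has 2 then 1.
features = KeyedJaggedTensor.from_lengths_sync(
    keys=["user_id", "item_id"],
    values=torch.tensor([1, 2, 10, 11, 12]),
    lengths=torch.tensor([1, 1, 2, 1]),
)

pooled = ebc(features).to_dict()
print(pooled["user_id"].shape)  # torch.Size([2, 16])

# With VBE, each key may carry its own (deduplicated) batch size instead of a
# single shared stride, so duplicated ids are looked up and communicated only
# once; the exact KeyedJaggedTensor arguments for this depend on the torchrec
# release.
```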
-
### Your current environment
hardware: A800
Driver Version: 535.54.03 CUDA Version: 12.2
vllm commit d3a245138acb358c7e1e5c5dcf4dcb3c2b48c8ff
model: Qwen 72B
### Model Input Dumps
_No response…
-
I trained an Allegro model using `nequip-train` and compiled it using `nequip-deploy`.
The model was trained with the following dtypes:
```
default_dtype: float64
model_dtype: float32
allow_tf32: true
```
…
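For reference, a minimal sketch (not taken from this report) of loading the deployed TorchScript artifact and checking how those dtype settings show up at runtime; the file name is a placeholder.

```python
import torch

# Artifact produced by `nequip-deploy`; the path is a placeholder.
model = torch.jit.load("deployed_allegro.pth", map_location="cpu")

# model_dtype: float32 -> the network weights should be float32.
print({p.dtype for p in model.parameters()})

# allow_tf32: true corresponds to these PyTorch switches when running on CUDA.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# default_dtype: float64 means the data-side quantities (positions, cell, ...)
# are kept in double precision, while the weights stay float32 per model_dtype.
```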
-
Can you tell me which version of libtorch you are using? Thank you very much.
-
Hey! I'm trying to do a task with a T5 model, but the issue is that I can't put 600 MB of libtorch into my project.
Question: is it possible with this library to use ONNX without downloading libtorch?
Thanks!
-
After finishing the build of ns-3.41, the file ns3Config.cmake was generated in the /ns-allinone-3.41/ns-3.41/cmake-cache folder, with the following contents:
```
####### Expanded from @PACKAGE_INIT@ by configure_package_config_file() #######
####### Any changes to th…
```
-
@kohya-ss When I fine-tune Flux with 18,000 images, the following error occurs after caching the latents. What could be the problem? Is this a bug, or is it because the data is too large, making cachi…
-
Hi there,
In the documentation, it says that libtorch must be on the system for the crate to work.
I am creating a program. Does this mean that libtorch must be on every system that I run the pr…