-
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
### Describe the bug
I tried loading and making infere…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
root@autodl-container-40b74f9912-1ab26877:~# llamafactory-cli env
[2024-11-23 13:16:23,920] [INFO]…
-
This is a diagram of my file structure; the model has already been downloaded into it.
![image](https://github.com/InternLM/InternLM-XComposer/assets/68574922/b9c7397b-dbc3-42c8-85bb-e91b8d591f43)
But when I run it, I get this error:
/home/shf/anaconda3/envs/llama/bin/python /media/sh…
-
Start vLLM with the following command (the version described in the README), pinning the number of GPU blocks to 2048, with each block holding 16 tokens:
```bash
vllm serve /hestia/model/Qwen2-VL-7B-Instruct-AWQ --quantization awq --num-gpu-blocks-override 2048 --port 8002 --served-model-…
```
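Not part of the original report, but as a quick sanity check on the flags above: overriding the block count to 2048 with a block size of 16 tokens fixes the total capacity of the paged KV cache. A minimal sketch of that arithmetic (variable names are illustrative, not vLLM internals):

```python
# KV-cache capacity implied by the serve flags:
# --num-gpu-blocks-override 2048, with a block size of 16 tokens.
num_gpu_blocks = 2048
tokens_per_block = 16

# Total tokens the paged KV cache can hold across all running sequences.
max_cached_tokens = num_gpu_blocks * tokens_per_block
print(max_cached_tokens)  # 32768
```

So with these flags the server can cache at most 32,768 tokens of KV state at once, shared across all concurrent requests.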
-
### Your current environment
vllm-openai/v06.3.1.post-1
### Model Input Dumps
a_request: None, prompt_adapter_request: None.
2024-10-27 23:04:39 INFO 10-27 09:04:39 engine.py:290] Added request ch…
-
Thanks for adding VLM support.
I was using [this](https://github.com/stanfordnlp/dspy/blob/main/examples/vlm/mmmu.ipynb) notebook. I tried it with the `Qwen2-VL-7B-Instruct` and `Llama-3.2-11B-Vision-…
-
Last year I bought a pair of Godox SL-60W LED studio lights #3 and they are _really_ good.
I've used them in a few videos and in the _temporary_ setup in the studio.
The **Godox SL-60W** have **3*…
-
Hey @zucchini-nlp and @NielsRogge👋,
I created a [notebook](https://colab.research.google.com/drive/1DEne3yuCmHKMgvtV3sMxJZQRRkDiLXYB?usp=sharing) for fine-tuning [Llava-OneVision-0.5b-ov-hf](https:…
-
### Question
Hi Haotian,
Your work is great, well done.
I have some issues: after using my pruned Vicuna LLM as the base model, I succeeded in phase 1 (pretraining).
![8423f0bfebba…