-
Hello,
Your project seems really interesting.
I have a question regarding the execution of `sh playground/merlin/clip-large+conv+vicuna-v15-7b/pretrain.sh`.
In the file, it says `--model_name_o…
-
Hello, and thank you for pushing the boundary on speculative generation!
Question 1: In the EAGLE-1 paper, Table 7 reports a 1.97x throughput figure for Vicuna-7B. How exactly was this measured? (GPU…
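For context, speedup figures like 1.97x are commonly computed as the ratio of decoding throughput (tokens per second) between the speculative run and the vanilla autoregressive baseline on the same prompts. A minimal sketch of that measurement, assuming hypothetical `generate_fn` callables that return the generated token list for a prompt (EAGLE's exact protocol, hardware, and batch size may differ, which is presumably what the question is asking):

```python
import time

def measure_throughput(generate_fn, prompts):
    """Tokens generated per second for one decoding pass over `prompts`.

    `generate_fn` is a hypothetical placeholder: any callable that takes a
    prompt string and returns the list of generated tokens.
    """
    start = time.perf_counter()
    total_tokens = sum(len(generate_fn(p)) for p in prompts)
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed

def speedup(baseline_tps, speculative_tps):
    """Throughput ratio: 1.97 means the speculative run is ~2x faster."""
    return speculative_tps / baseline_tps
```

Note that the resulting number is sensitive to GPU model, batch size, and the prompt set, so reproducing a reported figure requires matching all three.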
-
May I ask whether fine-tuning this model simply has no effect? Could you recommend a model, or provide a Chinese dataset I can test with?
-
Thank you very much for your open-source work. I hit a bug while downloading your pretrained model, so I downloaded the llava-7b-lora model instead; this is the same problem encountered in issue 5. The command line I used to test ScienceQA is:
```
python -m llava.eval.model_vqa_science \
    --model-base /mnt/xiaofeng.zxf/models/vicuna-7b…
```
-
My environment is:
Windows 10
Python 3.10.6
FastChat-0.2.25
When running the worker, I get this error:
(fchat) D:\ML2023\FastChat-0.2.25>python -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5
2023-08-25 …
-
Hi!
I cloned the Vicuna v0 7B repo given in the MiniGPT-4 repository and updated line 16 in the minigpt4.yaml file, then I downloaded the MiniGPT-4 (Vicuna 7B) checkpoint and updated the path name in line 11 in minigp…
-
I am trying to convert the weights for `vicuna-7b-v1.5` from Hugging Face Transformers (https://huggingface.co/lmsys/vicuna-7b-v1.5) so they can be used with Megatron-LM.
I am using `tools/checkpoint/convert.py…
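For readers unfamiliar with what such a conversion involves: the bulk of the work is renaming the Hugging Face state-dict keys to Megatron-LM's naming scheme (plus reshaping fused QKV weights, which is omitted here). The sketch below illustrates the renaming step only; the target names are plausible examples, not the exact names `tools/checkpoint/convert.py` emits, and the real mapping varies between Megatron versions:

```python
# Hedged illustration of HF -> Megatron key renaming. Target names are
# examples for exposition, not guaranteed to match any Megatron release.
import re

HF_TO_MEGATRON = [
    (r"^model\.embed_tokens\.weight$", "embedding.word_embeddings.weight"),
    (r"^model\.layers\.(\d+)\.self_attn\.o_proj\.weight$",
     r"decoder.layers.\1.self_attention.linear_proj.weight"),
    (r"^model\.layers\.(\d+)\.mlp\.down_proj\.weight$",
     r"decoder.layers.\1.mlp.linear_fc2.weight"),
    (r"^lm_head\.weight$", "output_layer.weight"),
]

def rename_key(hf_key: str) -> str:
    """Map one Hugging Face parameter name to its Megatron-style name."""
    for pattern, target in HF_TO_MEGATRON:
        if re.match(pattern, hf_key):
            return re.sub(pattern, target, hf_key)
    raise KeyError(f"no mapping for {hf_key}")
```

When a conversion script fails, comparing the unmapped key against a table like this is usually the fastest way to see which layer naming the two sides disagree on.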
-
![image](https://github.com/TinyLLaVA/TinyLLaVA_Factory/assets/83384577/2aded3bd-4f00-4214-961a-daad18090e37)
Hi team,
I am trying to reproduce MoF (CLIP-ViT + DINOv2-ViT) with Vicuna-7B, and I am facin…
-
python gen_model_answer_baseline.py --model-path /data/transformers/vicuna-7b-v1.3 --model-id vicuna-7b-v1.3-0
python gen_model_answer_medusa.py --model-path /data/transformers/medusa_vicuna-7b-v1.…
-
**Describe the bug**
When I use llm-compressor to quantize a LLaVA model, it fails right at the beginning. (Unrecognized configuration class: 'transformers.models.llava.configuration_llava.LlavaConfig'…
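Errors of this shape typically come from a dispatch table: the loader keeps a registry of config classes it supports and raises when the model's config class is not in it, which is common for multimodal configs like LLaVA's. A minimal sketch of that mechanism (the classes below are stand-ins, not the real transformers or llm-compressor types):

```python
# Stand-in config classes for illustration only.
class LlamaConfig: ...
class LlavaConfig: ...

# Registry of config classes a hypothetical loader supports.
SUPPORTED = {LlamaConfig: "llama-loader"}

def resolve_loader(config):
    """Look up the loader for a config, mimicking the dispatch that
    produces 'Unrecognized configuration class' errors."""
    try:
        return SUPPORTED[type(config)]
    except KeyError:
        raise ValueError(
            f"Unrecognized configuration class: {type(config).__name__}"
        ) from None
```

Whether a workaround exists (for example, quantizing only the text backbone, whose config class is supported) depends on the tool's version and its multimodal support, so this is best confirmed with the llm-compressor maintainers.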