-
## Question 1
Hello, after running the script `llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_e1_gpu1_finetune.py`, I converted the saved model from `.pth` to the `xtuner` format; the resulting file structure is shown below:
Why does this model's file structure differ from that of the open-source model?
**xtuner/llava-llama-3-8b-v1_1…
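For context, xtuner checkpoints are typically converted with its `pth_to_hf` subcommand; a minimal sketch is below. The config name, checkpoint path, and output path are placeholders for illustration, not values taken from the issue:

```shell
# Convert a trained xtuner .pth checkpoint into the xtuner/HuggingFace layout.
# All three paths are hypothetical examples; substitute your own.
CONFIG=llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_e1_gpu1_finetune.py
PTH_PATH=./work_dirs/llava_finetune/iter_XXXX.pth
SAVE_PATH=./converted-llava-llama-3-8b
xtuner convert pth_to_hf "$CONFIG" "$PTH_PATH" "$SAVE_PATH"
```

Note that a QLoRA fine-tune saves adapter weights rather than a full merged model, which is one common reason the converted directory looks different from the released open-source checkpoint.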
-
### System info
GPU: A100
tensorrt 9.3.0.post12.dev1
tensorrt-llm 0.9.0
torch 2.2.2
### Reproduction
```
export MODEL_NAME="llava-1.5-7b-hf"
git clone https://huggingface.co/llava-hf/${MODEL…
```
-
Hi! Thank you for the contribution with the dataset! Really cool stuff! I was wondering, are you planning to release the code you used to create the dataset?
-
https://github.com/LLaVA-VL/LLaVA-NeXT/blob/inference/docs/LLaVA-NeXT.md
In this example, your code generates a double "" in front of "user" in the
`prompt_question` variable.
Could you check if the…
y-rok updated 2 months ago
-
```
lava-cli.dir\linkLibs.rsp
C:\w64devkit\bin/ld.exe: C:/w64devkit/bin/../lib/gcc/x86_64-w64-mingw32/13.2.0/../../../../x86_64-w64-mingw32/lib/../lib/libpthread.a(libwinpthread_la-thread.o):thread…
```
-
### Describe the issue
Issue:
The controller/webserver won't start.
I've tried everything ChatGPT and Bard suggested, with no results.
- pip uninstalling and reinstalling llava (which is just a part of it I gue…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as …
-
# Expected Behavior
Passing the oneMKL flags via CMAKE_ARGS and installing llama-cpp-python via pip should finish successfully, as the flags are supported by llama.cpp:
https://github.com/ggerganov/llama…
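For reference, the usual pattern is to pass backend flags through the `CMAKE_ARGS` environment variable when installing; a sketch assuming an Intel oneAPI installation under the default `/opt/intel/oneapi` path:

```shell
# Make the oneMKL libraries and compilers visible to the build.
source /opt/intel/oneapi/setvars.sh

# Forward the BLAS/oneMKL CMake flags to llama.cpp's build via pip.
# Flag names follow llama.cpp's documented BLAS build options.
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_64lp" \
  pip install --no-cache-dir llama-cpp-python
```

`--no-cache-dir` avoids pip reusing a previously built wheel that was compiled without the flags.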
-
```
Using cached exceptiongroup-1.2.0-py3-none-any.whl (16 kB)
Building wheels for collected packages: deepspeed, llama-cpp-python, llm-serve, ffmpy
Building wheel for deepspeed (setup.py) ... do…
```
-
Any chance we could see a variant of each produced with the LLaVA 1.6 architecture? Thanks!