amew0 opened 1 week ago
Hi, I posted an issue 1 month ago about this topic: https://github.com/LLaVA-VL/LLaVA-NeXT/issues/193
By following the script and adapting it to your own checkpoints, you can convert your lmms-lab checkpoint into the llava-hf format and perform inference with the huggingface library.
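For intuition, such a conversion is essentially a renaming of state-dict keys: the tensors themselves are unchanged, only the parameter names differ between the two layouts. Here is a minimal sketch of that idea; the key prefixes below are illustrative assumptions, not the exact mapping used by the linked script, which you should follow for a real conversion.

```python
# Illustrative sketch: convert a state dict from one naming layout to another
# by rewriting key prefixes. The prefixes here are assumed for illustration;
# the real conversion script handles the full mapping (vision tower,
# projector, and language-model keys).
KEY_MAP = {
    # original-style projector keys -> HF-style projector keys (assumed names)
    "model.mm_projector.0": "multi_modal_projector.linear_1",
    "model.mm_projector.2": "multi_modal_projector.linear_2",
    # everything else under "model." moves under the language model (assumed)
    "model.": "language_model.model.",
}

def rename_key(key: str) -> str:
    """Rewrite a single key using the first matching prefix in KEY_MAP."""
    for old, new in KEY_MAP.items():
        if key.startswith(old):
            return new + key[len(old):]
    return key

def convert_state_dict(state_dict: dict) -> dict:
    """Return a new state dict with all keys renamed; tensors are untouched."""
    return {rename_key(k): v for k, v in state_dict.items()}
```

Because the weights are unchanged, a converted checkpoint should produce the same outputs as the original up to implementation details of the two inference stacks.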
Thank you, I will check it out.
I want to understand the model-architecture difference between the author releases on lmms-lab and the HF team releases on llava-hf. For the same set of models, does using one over the other make a performance difference? And are there any plans to transfer weights trained on one to the other? I am asking because I want to run vLLM inference, but vLLM only supports the models developed by llava-hf.