-
I am running instruction tuning of llama3_llava on my own dataset using the script
`NPROC_PER_NODE=${GPU_NUM} xtuner train llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_f…
-
I successfully used groq; now I want to use ollama, but I don't know how to configure it in keys.ts.
`ollama: 'http://localhost:11434',` didn't work.
-
Hi @dusty-nv, thanks for this amazing library! We're using it in a cool art project for Burning Man :-)
I tested the new llava 1.6 (specifically https://huggingface.co/lmms-lab/llama3-llava-next-8b…
-
### Feature request
Could you please support Llama3 in LLaVA?
-
### Describe the issue
When will the llava-1.6 training dataset and training code be open-sourced?
Hello, I'm glad to see that the performance of llava-1.6 has improved so significantly. I believe i…
-
2024/05/24 22:12:15 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.12.3 (main, Apr 10 2024, 05:33:47) […
-
1. **Support plan**
When will a version supporting llava-llama3-70b be released?
Meanwhile, will you consider supporting unofficial variants, e.g. using an LLM such as llama3-120b?
huggin…
-
Thanks for your great work. When will the training code be open-sourced?
-
Hi, thanks for your great work. I am reproducing the evaluation results with the latest codebase and also the latest LLaVA codebase. The results on other benchmarks match or have minor differenc…
-
### What is the issue?
This happened a few times when the same chat log (history) was reused after the local model was switched.
For example, a chat was started with "llava:7b-v1.6"; when switched to "llama3.1:latest", w…