-
Looking through your training script, it seems to me that the pre-processed visual features you stored on Baidu's cloud storage service have already been passed through the projection layer. Is that correct? In …
-
Hi,
First of all, thanks for the project; I love it. The Llava model works well for me, but I think I'm doing something wrong with Mistral and when trying to use GGUF models.
I'm trying to get it…
-
### 🐛 Describe the bug
When I try to train a model using torch.distributed.FullyShardedDataParallel, I find that:
when training on a single node with multiple GPUs (1 node x 8 A100), the training speed is normal.…
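When multi-node FSDP is much slower than single-node, the interconnect is the usual suspect. A minimal launch sketch for comparison (the node count, master address, interface name, and script name below are placeholders, not taken from this report); enabling NCCL debug output shows which transport NCCL actually picked:

```shell
# Placeholder values: address, interface, and script name are illustrative.
export NCCL_DEBUG=INFO            # print NCCL topology/transport choices to the log
export NCCL_SOCKET_IFNAME=eth0    # pin NCCL to the intended network interface

# Run once per node, incrementing --node_rank on each.
torchrun \
  --nnodes=2 \
  --nproc_per_node=8 \
  --node_rank=0 \
  --master_addr=10.0.0.1 \
  --master_port=29500 \
  train.py
```

If the log shows NCCL falling back to plain sockets instead of InfiniBand/RDMA, that alone can explain a large multi-node slowdown.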
-
### Question
Hi Liu and the team, thanks for bringing us LLaVA!
Recently I've been trying to reproduce the second stage (instruction fine-tuning) of llava-v1.5-13b. I followed the guidance of [LLaVA#visua…
-
# Thank you for sharing the good work!
# I followed "offline_demo.md" to run offline, but the website has no response.
## The terminal shows the output below. What does line 10 mean? What error occurred?
```
$ …
-
Hello all,
Great work, and thank you for providing the source code publicly.
I need help with building chamferdist. My system configuration is Ubuntu 22.04, Python = 3.10.13, Pytorch version is…
-
Hi,
I'm playing around with the temperature property when calling a model via the `/chat/completions` API, but I can't figure out how to get some variance in the responses. I have the temperat…
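For reference, temperature is passed as a top-level field of the JSON body in a `/chat/completions` request. A minimal sketch of the payload (the model name, prompt, and endpoint are placeholders, not taken from this report); values above 1.0 increase variance, values near 0 make output nearly deterministic:

```python
import json

# Placeholder model name and prompt for illustration only.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Tell me a short story."}
    ],
    "temperature": 1.2,   # > 1.0 increases randomness across calls
    "top_p": 1.0,         # leave top_p at 1.0 while tuning temperature
}

# The request itself would be sent with any HTTP client, e.g.:
# requests.post("https://<server>/v1/chat/completions",
#               headers={"Authorization": f"Bearer {API_KEY}"},
#               data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```

Note that some servers cap or ignore sampling parameters, so identical responses can also mean the backend is overriding the setting rather than the request being malformed.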
-
### Describe the issue
Issue:
Command:
```
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```
Log:
```
2023-09-03 13:06:49 | INFO | controller | args: Namespace(host='0.0.0.…
-
I tried the latest ollama commit but still can't get Ollama models to show up in the chatbot.
I tried two different setups
1. Debian server with ollama and chatbot-ui 2.0 running locally
2. Vercel…
-
### Describe the bug
KeyError: 'module name can\'t contain ".", got: liuhaotian_llava-v1.5-13b-lora'
I just downloaded the model using the download button. I'm completely new to this repo.
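For context on this error: PyTorch's `nn.Module` rejects submodule names containing `.` (the `add_module` call raises exactly this `KeyError`), and the downloaded folder name carries a dot in `v1.5`. A common workaround, sketched below under the assumption that the loader uses the folder name as a module name, is to rename the folder or sanitize the name before loading:

```python
# Sketch of the usual workaround (assumption: the loader reuses the model
# folder name as a PyTorch submodule name, which must not contain ".").
def sanitize_module_name(name: str) -> str:
    """Replace characters PyTorch forbids in submodule names."""
    return name.replace(".", "_")

original = "liuhaotian_llava-v1.5-13b-lora"
print(sanitize_module_name(original))  # liuhaotian_llava-v1_5-13b-lora
```

Renaming the model directory on disk to the sanitized name (and pointing the config at the new path) is usually enough to get past this error.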
### Is…