-
### Contact Details
db334@duke.edu
### What happened?
I had to use `cosmocc/bin/make` rather than the Mac Xcode command-line-tools `make` to build my llamafile. Now, when running it, I'm getting this error…
-
Thank you very much for your work. I have some questions:
1. In Algorithm 1 (Iterative Inversion), there is an equation:
But E is not introduced in the paper; where does this E come from?
2. What is the ne…
-
As per the instructions, we were able to merge the base model and the finetuned model. But when running eval we get this error:
![image](https://github.com/DLCV-BUAA/TinyLLaVABench/assets/45352897/88ed82…
-
Hi 👋🏻 Do you have any inference examples that I could use?
-
How do I use this with Ollama locally? I have this list ready to go:

```
ollama list
NAME                            ID            SIZE    MODIFIED
hub/stewart/multi-agent:latest  8cc6e95685ac  3.8 GB  10…
```
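Not speaking for the maintainers, but if the model already appears in `ollama list`, one common way to call it from code is Ollama's local REST API on port 11434. A minimal stdlib-only sketch (the model name is taken from the list above; the endpoint and payload shape follow Ollama's `/api/generate` route):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama for one JSON response
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Requires a local Ollama server (`ollama serve`) to be running.
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs the Ollama server running locally):
# print(generate("hub/stewart/multi-agent:latest", "Hello!"))
```

Whether this integrates with the project's own LLM abstraction is a separate question; this only shows the raw local call.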
-
Hi,
when I try to run inference with any MoE-LLaVA model on a node with 4x A100s, I run into a tensor-allocation issue.
I have installed MoE-LLaVA from the latest main commit (188d462).
```py…
-
I want to change the tokenizer so that it can be applied to Korean.
I would appreciate it if you could change LLM_PATH, and also let me know which parts of the code should be modified.
-
Hi, have you tested the results for the llava_llama version? Would an extra MoE stage improve the original LLaVA results?
-
Can we add a way to use a local API as the LLM?
The Python code should be:

```py
client = OpenAI(
    api_key="",
    # Change the API base URL to the local inference API
    base_url="http://localhost:1337/v1"…
```
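For context, here is a sketch of what the full round trip could look like, using only the standard library so it works against any OpenAI-compatible server. The `http://localhost:1337/v1` base URL comes from the snippet above; the `"local-model"` name is a placeholder, since local servers differ in what model identifiers they accept:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, content: str):
    # The /chat/completions route and payload shape follow the
    # OpenAI-compatible API that local inference servers expose.
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return url, payload

def chat(base_url: str, model: str, content: str) -> str:
    url, payload = build_chat_request(base_url, model, content)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# Example (needs an OpenAI-compatible server on port 1337;
# "local-model" is a hypothetical model name):
# print(chat("http://localhost:1337/v1", "local-model", "Hello!"))
```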
-
The error:

```
[2024-03-15 17:42:38,572] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Loading LLaVA from base model...
Special tokens have been added i…
```