-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
- P…
-
I tried to load the model with `transformers.AutoModel.from_pretrained`, but I got this error:
```
Exception has occurred: KeyError (note: full exception trace is shown but execution is paused a…
```
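For context, a `KeyError` at this point typically comes from a registry-style lookup on the config's `model_type`. The snippet below is a minimal illustrative sketch of that mechanism; the registry contents and function name are hypothetical stand-ins, not the actual `transformers` internals:

```python
# Hypothetical sketch of an auto-class registry lookup.
# AutoModel resolves config.model_type against a mapping; a
# model_type missing from the mapping surfaces as a KeyError
# much like the one above.
MODEL_REGISTRY = {
    "llama": "LlamaModel",
    "qwen2": "Qwen2Model",
}

def resolve_model_class(model_type):
    # Unknown model_type -> KeyError.
    return MODEL_REGISTRY[model_type]
```

The usual remedies are upgrading `transformers` to a version where the model's `model_type` is registered, or passing `trust_remote_code=True` to `from_pretrained` when the repository ships its own modeling code.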
-
I am unable to load the model. Could you provide code to load the model and run video inference locally? I want to use it in PyCharm.
-
This is a Python script with some errors. I'd like you to: 1) write all the errors you find in a list, and 2) rewrite the code to be correct.
"""
def calculate_area(radius):
pi = 3.14
area = pi * …
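For reference, here is a corrected sketch of the function above, assuming the truncated line was meant to compute the circle area `pi * radius ** 2` (an assumption, since the original is cut off):

```python
import math

def calculate_area(radius):
    # Assumed intent: area of a circle. Use math.pi rather than a
    # hard-coded 3.14 for full floating-point precision.
    if radius < 0:
        raise ValueError("radius must be non-negative")
    return math.pi * radius ** 2
```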
-
When running the LLava-next-72b script from the README, an error appears.
```
args_dict = {k: handle_arg_string(v) for k, v in [arg.split("=") for arg in arg_list]}
ValueError: too many values to unpack…
```
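This unpacking error usually means one of the argument strings contains more than one `=` (e.g. a value such as a path or chat template that itself includes `=`). A common fix, sketched here under the assumption that `arg_list` holds `key=value` strings (`parse_kv_args` is a hypothetical stand-in for the harness's parsing), is to split on the first `=` only:

```python
def parse_kv_args(arg_list):
    # maxsplit=1 keeps any '=' inside the value intact, so the
    # two-element unpacking always succeeds.
    return {k: v for k, v in (arg.split("=", 1) for arg in arg_list)}
```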
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-4.19.91-014.15-kangaroo.alios7.x86_64-x86_64-with…
-
I noticed that in the paper, the average score on Politics for llava-1.5-7b is 6.03; however, I get 8.06. I followed the official LLaVA code to get the model generations and evaluated them with GPT-4V. My code is below, …
-
```
Traceback (most recent call last):
  File "/home/LLM/videoxl/videoxl/infer.py", line 17, in
    tokenizer, model, image_processor, _ = load_pretrained_model(model_path, None, "llava_qwen", device_m…
```
-
The model I am using is **Llama3-Llava-Next-8b**, and I am using a local checkpoint.
The registered model is as follows:
```
register_model(
    model_id="llama3-llava-next-8b",
    model_family_id…
```
-
Hi, thanks for the great work. I tried to simply prepend the following:
```
import transformers
from llava.cca_utils.cca import llamaforcausallm_forward, cca_forward
transformers.models.llama.LlamaFo…
```
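For anyone following along, the pattern being attempted above is a method-level monkey-patch: reassigning a class's `forward` attribute to a replacement function. A minimal self-contained sketch of the mechanism (the class and the doubling behavior here are illustrative stand-ins, not the real `LlamaForCausalLM`):

```python
class LlamaForCausalLM:
    # Stand-in for transformers.models.llama.LlamaForCausalLM.
    def forward(self, x):
        return x

def cca_forward(self, x):
    # Replacement forward; doubling is purely illustrative.
    return x * 2

# Reassigning at the class level patches every instance,
# including ones created after the patch is applied.
LlamaForCausalLM.forward = cca_forward
```

Note that the patch must run before the model's `forward` is first bound anywhere that caches it, which is why the import-and-assign lines are prepended to the script.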