-
Since I had already pulled llama2:7b, I wanted to install llama2 (without the 7b tag). My understanding was that it was the exact same model (same hash), so maybe ollama would install only the metadata f…
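The hypothesis above — that two tags can point at the same underlying blob, so pulling the second tag fetches only metadata — matches how content-addressed registries generally work. A toy sketch of that idea, with made-up digests and a plain dict standing in for the actual blob cache (the names here are illustrative, not ollama's real internals):

```python
# Toy model of content-addressed layer storage (assumption: ollama-style
# registries key blobs by digest, so two tags sharing a digest share data).

store = {}  # digest -> blob bytes, simulating the local blob cache

def pull(tag, manifest):
    """'Pull' a tag: download only blobs whose digest is not cached yet."""
    downloaded = []
    for digest, blob in manifest.items():
        if digest not in store:
            store[digest] = blob
            downloaded.append(digest)
    return downloaded

# Both tags reference the same weights layer; only the metadata differs.
weights = ("sha256:aaa", b"<7b model weights>")
pull("llama2:7b", dict([weights, ("sha256:m1", b"<7b metadata>")]))

# Pulling the alias tag re-downloads only its own metadata blob.
fetched = pull("llama2:latest", dict([weights, ("sha256:m2", b"<latest metadata>")]))
print(fetched)  # → ['sha256:m2']
```

Under this model, the shared weights blob is stored once and the alias tag costs only its small metadata layer.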
-
Hey unsloth team, beautiful work being done here.
I am the author of [MachinaScript for Robots](https://github.com/babycommando/machinascript-for-robots) - a framework for building LLM-powered robo…
-
I tried to load a pretrained .pth of llava: hub/llava-phi-3-mini-pth/model.pth, and I got this strange error:
- used DeepSpeed ZeRO-3 and flash-attn.
```
RuntimeError: Error(s) in loading state_…
```
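A common first diagnostic for this class of error is to diff the checkpoint's keys against the keys the model expects, since ZeRO-3 checkpoints often carry an extra prefix. A minimal sketch — plain dicts stand in for real state dicts; with PyTorch you would get them from `torch.load(...)` and `model.state_dict()` (the key names below are invented for illustration):

```python
# Sketch: diagnosing "Error(s) in loading state_dict" by diffing key sets.
# Plain dicts stand in for real state_dicts loaded with torch.load().

def diff_state_dict_keys(ckpt_keys, model_keys):
    """Return (missing_from_ckpt, unexpected_in_ckpt) as sorted lists."""
    ckpt, model = set(ckpt_keys), set(model_keys)
    return sorted(model - ckpt), sorted(ckpt - model)

# Example: a DeepSpeed-wrapped checkpoint often carries a "module." prefix.
ckpt = {"module.llm.layers.0.weight": None, "module.proj.weight": None}
model = {"llm.layers.0.weight": None, "proj.weight": None}

missing, unexpected = diff_state_dict_keys(ckpt, model)
print(missing)     # keys the model wants but the checkpoint lacks
print(unexpected)  # keys the checkpoint has but the model does not

# Stripping a known prefix is often enough to make the keys line up:
stripped = {k.removeprefix("module."): v for k, v in ckpt.items()}
print(diff_state_dict_keys(stripped, model))  # → ([], [])
```

If the key sets match after stripping, loading the stripped dict (or loading with `strict=False` and inspecting the returned missing/unexpected keys) usually narrows the problem down.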
-
### Is your feature request related to a problem? Please describe.
Now that we have the Phi-3 SLM flagship family (including vision) from Microsoft, it would more than make sense to officially and fu…
-
https://huggingface.co/Qwen/Qwen-VL-Chat/tree/main
https://huggingface.co/deepseek-ai/deepseek-vl-7b-chat
I've gotten extremely good results from these; it would be great to have them baseline in…
-
This is an issue to collect requests for model abliterations.
No one is required to abliterate your request, but it does make for a good place to check if someone else has used this process on the…
-
### feature
Could you please support Llama3 in Llava?
-
I finetuned the llava-phi3 model with LoRA, but when I tried to convert the resulting weights, an error occurred.
This is my command:
xtuner convert pth_to_hf ./my_configs/llava_phi3_mini_qlora_clip_vit_l…
-
Hi, can I know the exact syntax? Mine still errors:
```python
model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm=dict(
        attn_implementation='eager',
        p…
```