-
It started with wanting to back up downloaded models before reinstalling Linux (I wanted to upgrade my CUDA version and do a clean install). I had no idea where the models were saved, and then I ha…
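A minimal sketch, assuming the models came from the Hugging Face Hub: `huggingface_hub.scan_cache_dir` can list what is in the local cache and where on disk it lives, which is exactly what you would want to copy before a reinstall.

```python
# Assumption: models were downloaded via the Hugging Face Hub libraries,
# so they live in the hub cache (default: ~/.cache/huggingface/hub).
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
for repo in cache.repos:
    # repo_path is the directory to back up for this model
    print(repo.repo_id, repo.size_on_disk_str, repo.repo_path)
```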
-
## 🐛 Bug
## To Reproduce
Using this model: [Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)
I ran into some bugs and need your help!
For phi3-v problem, w…
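For reference, a minimal load skeleton for this model, assuming the standard `transformers` path from the model card (the reporter's exact repro is truncated above):

```python
# A sketch of the standard loading path from the Phi-3-vision model card;
# not the reporter's exact script, which is not shown in full.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```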
-
Due to my limited resources, I have found it challenging to work with Llama3 and other versions of LLaVA. I have recently trained a LLaVA-Phi3 model from https://github.com/mbzuai-oryx/LLaVA-pp. I am i…
-
Please let us know what model architectures you would like to see added!
**Up-to-date todo list below. Please feel free to contribute any model; a PR without device mapping, ISQ, etc. will still be …
-
Have you tried conducting experiments based on llava-v1.5-7b? What were the results like?
-
**Describe the bug**
What the bug is and how to reproduce it, preferably with screenshots.
```
swift infer --model_type internvl2-8b-awq --infer_backend lmdeploy
```
```
WARNING:ro…
-
Two new models have been released by Microsoft:
https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/
https://huggingface.co/microsoft/Phi-3-small-8k-instruct/
Medium uses Phi3ForCausalLM and conv…
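An illustrative way to confirm which architecture each checkpoint declares is to read its `config.json` from the Hub (a sketch, not from the original issue):

```python
# Reads each checkpoint's config and prints the declared architecture;
# trust_remote_code is needed for Phi-3-small, which ships custom code.
from transformers import AutoConfig

for model_id in (
    "microsoft/Phi-3-medium-4k-instruct",
    "microsoft/Phi-3-small-8k-instruct",
):
    cfg = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
    print(model_id, cfg.architectures)
```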
-
# `generate` 🤜 🤛 `torch.compile`
This issue tracks the compatibility between `.generate` and `torch.compile` ([intro docs by PyTorch](https://pytorch.org/tutorials/intermediate/torch_comp…
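For context, a minimal sketch of the combination this tracker is about, following the static-cache + compile pattern from the transformers docs (the model id is a placeholder; any checkpoint with static-cache support should behave the same):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id (assumption): any architecture with StaticCache support.
model_id = "google/gemma-2b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# A static KV cache keeps tensor shapes fixed, so compile traces once
# instead of recompiling at every decoding step.
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tok("Hello,", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```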
-
Running this script:
```python
import mlx.core as mx
from mlx_vlm import load, generate
import os
from pathlib import Path
# model_path = "mlx-community/llava-1.5-7b-4bit"
# model_path = "…
-
Is there a guide or tutorial on how to configure Ollama or LiteLLM to work with Skyvern? How can Skyvern work with a local LLM?
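A hedged smoke test of the LiteLLM-to-Ollama leg, assuming Ollama is serving on its default port (11434); `ollama/llama3` is an example model name, and Skyvern would then be pointed at whichever OpenAI-compatible endpoint this path exposes:

```python
# Verifies that LiteLLM can reach a local Ollama model before wiring
# anything else to it. Assumptions: Ollama is running locally on its
# default port, and "llama3" has been pulled with `ollama pull`.
from litellm import completion

resp = completion(
    model="ollama/llama3",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```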