-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.35
- Pytho…
-
### 🚀 The feature, motivation and pitch
Currently, vLLM's Gemma 2 implementation does not support RoPE scaling, and I sincerely hope support for it will be added in the future.
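For context, linear RoPE scaling simply divides position indices by a scaling factor before computing the rotary angles, which stretches the effective context window of a pretrained model. A minimal sketch of the math in plain NumPy (not vLLM's actual implementation; function name and defaults are illustrative):

```python
import numpy as np

def rope_angles(positions, dim=8, base=10000.0, scaling_factor=1.0):
    """Rotary angles for each position; linear scaling divides positions
    by scaling_factor, stretching the usable context window."""
    # Inverse frequencies for each pair of embedding dimensions.
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    scaled = np.asarray(positions, dtype=np.float64) / scaling_factor
    return np.outer(scaled, inv_freq)  # shape: (len(positions), dim // 2)

# With factor 2.0, position 4096 gets the same angles that position 2048
# would get unscaled, so a 4k-trained model can address 8k positions.
unscaled = rope_angles([2048])
scaled = rope_angles([4096], scaling_factor=2.0)
```

The same idea is what a `rope_scaling` config option exposes in inference engines that support it.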
-
Show the model's download progress in a better, more visual way.
![Screenshot_20240607-094845~3](https://github.com/Jeffser/Alpaca/assets/162728301/4ed788dd-02c1-454b-bf88-a8361323058a)
This is a …
-
### Your current environment
vllm version: 0.6.3.post1
gpu type: Quadro RTX 4000
### Model Input Dumps
_No response_
### 🐛 Describe the bug
I tried to use the gemma series, such as `google/gemma-2-27…
-
Right now GraphRAG only natively supports models hosted by OpenAI and Azure. Many users would like to run additional models, including alternate APIs, SLMs, or models running locally. As a research te…
-
Not sure if this belongs in ollama-python or here, but I'll open it here. Could you add a way to use function calling on any model, or is this something that the model itself has to support?
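For reference, on the ollama-python side tool use is expressed as an OpenAI-style function schema passed via the `tools` argument of `chat`, with the client dispatching any tool calls the model returns; whether the model actually emits tool calls depends on the model itself. A minimal sketch of the client-side plumbing (the `get_weather` function and its schema are hypothetical; the `ollama.chat` call is shown only in a comment):

```python
import json

def get_weather(city: str) -> str:
    """Hypothetical local function the model may ask the client to run."""
    return json.dumps({"city": city, "forecast": "sunny"})

# OpenAI-style function schema, the shape ollama-python's `tools` accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

AVAILABLE = {"get_weather": get_weather}

def dispatch(tool_call):
    """Run the function the model requested and return its result."""
    fn = AVAILABLE[tool_call["function"]["name"]]
    return fn(**tool_call["function"]["arguments"])

# Normally the tool call comes back from the server, roughly:
#   resp = ollama.chat(model="llama3.1", messages=msgs, tools=[weather_tool])
#   for call in resp["message"].get("tool_calls", []):
#       dispatch(call)
# Here we simulate a model-issued call to show the dispatch path:
result = dispatch({"function": {"name": "get_weather",
                                "arguments": {"city": "Paris"}}})
```

The dispatch side is pure client code, which is why it works with any model; what the model must support is emitting the structured tool call in the first place.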
-
I'm only seeing 4-5 models in my dropdown, despite having many:
```
$ ollama ls
NAME           ID    SIZE    MODIFIED
dtweet:latest …
```
-
### What is the issue?
Using fastgpt --onapi to call local Ollama models, I have downloaded several multimodal models, but the image recognition accuracy is not good. Is there a better model that …
-
...so that I can also guarantee the sustainable accessibility of cases in the future
[20181216_Voorstel_GEMMA_Architectuur_Duurzame_Toegankelijkheid.pdf](https://github.com/VNG-Realisatie/gemma-zaken/fil…