-
### Your current environment
```text
Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] sentence-transformers==3.0…
-
Hi! Thank you for your outstanding work!
I have been working on improving the LangBridge approach, and I noticed your paper referenced it. As you discussed, LangBridge uses soft prompts generated b…
-
1) Load the Gemma2 2B model with Unsloth - OK
2) Perform fine-tuning - OK
3) Test the resulting model - OK, responses indicate fine-tuning was successful
4) Save 16-bit `model.save_pretrained_merged…
-
### What happened?
When using litellm to interact with Ollama models with fallbacks configured, the fallback mechanism does not work correctly when the `stream=True` option is used.
**Steps t…
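The behavior the fallback mechanism is expected to provide can be sketched in plain Python, independent of litellm's internals; `start_stream` here is a hypothetical callable standing in for whatever opens a streaming completion for a given model:

```python
def stream_with_fallback(models, start_stream):
    """Try each model in order; if opening or consuming its stream
    fails, fall through to the next model in the list. This mirrors
    what configured fallbacks should do even when stream=True."""
    last_err = None
    for model in models:
        try:
            # Materialize the stream so mid-stream errors also
            # trigger the fallback instead of surfacing to the caller.
            return list(start_stream(model))
        except Exception as err:
            last_err = err
    # Every model failed: re-raise the last error seen.
    raise last_err
```

This is only a sketch of the expected semantics, not litellm's actual implementation.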
-
Dear MTEB Team,
I noticed that some task names are inconsistent between the [github repo](https://github.com/embeddings-benchmark/mteb/tree/main/mteb/tasks) and the [leaderboard config](https://githu…
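A quick way to surface such mismatches is to diff the two name lists with set operations; the helper below is generic, and the sample names in the usage are made up for illustration:

```python
def diff_task_names(repo_names, leaderboard_names):
    """Return (only_in_repo, only_in_leaderboard), each sorted,
    so mismatched task names can be reviewed side by side."""
    repo, board = set(repo_names), set(leaderboard_names)
    return sorted(repo - board), sorted(board - repo)
```

Running it on the actual task lists from both sources would enumerate exactly which names need renaming on which side.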
-
### What is the issue?
My initial goal is to check whether a specific model is available using the Ollama API.
I use the OpenAI library `github.com/sashabaranov/go-openai` to do that.
The problem is when I …
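Independently of the Go client, a minimal availability check can query Ollama's `/api/tags` endpoint directly. This is a sketch assuming a default local install; the base URL and the `:latest` tag-matching rule are assumptions:

```python
import json
import urllib.request

def list_local_models(base_url="http://localhost:11434"):
    # GET /api/tags returns {"models": [{"name": "llama2:latest", ...}, ...]}
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return [m["name"] for m in json.load(resp).get("models", [])]

def is_model_available(model, names):
    # Ollama stores models with an explicit tag, so a bare name like
    # "llama2" should also match "llama2:latest".
    return model in names or f"{model}:latest" in names
```

Usage would be `is_model_available("llama2", list_local_models())`; the same two-request shape translates directly to Go.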
-
Currently, only 3 chat templates are present: https://github.com/TanvirOnGH/vscode-ollama-modelfile/blob/dev/snippets/modelfile.json#L37-L104.
## TODO Templates
- [x] ChatML (ccd461ac30c116110a7adda50…
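For reference, a new entry would follow the same VS Code snippet shape (`prefix`/`body`/`description`) as the existing ones. The body below is an illustrative ChatML-style Modelfile `TEMPLATE`, not copied from the repo:

```json
{
  "ChatML Template": {
    "prefix": "template-chatml",
    "body": [
      "TEMPLATE \"\"\"<|im_start|>system",
      "{{ .System }}<|im_end|>",
      "<|im_start|>user",
      "{{ .Prompt }}<|im_end|>",
      "<|im_start|>assistant",
      "\"\"\""
    ],
    "description": "ChatML-style TEMPLATE block for an Ollama Modelfile"
  }
}
```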
-
Gemma2 needs torch>=2.4.0, as mentioned [here](https://huggingface.co/google/gemma-2-9b/discussions/29#66b1d1c691be75f7264f0b20).
When I run it, I get this error:
```
File "/usr/local/lib/pyt…
-
### OS
Windows
### GPU Library
CUDA 12.x
### Python version
3.11
### Pytorch version
2.4.1+cu121
### Model
google/gemma-2-27b-it
### Describe the bug
starting approxim…
-
I'm having an issue where I have ollama and llama2 downloaded, but I'm getting nowhere with the AI. It gives me the entire conversation spiel, but when I try to talk to it, it just gives me an error.…