-
Hi,
first of all, MANY thanks for building this great app for us!
I was able to get a local Ollama installation working with sgpt; commands and everything else are working fine.
However there are so…
-
### What is the issue?
I recently noticed that the Gemma2 model was updated 5 weeks ago, resulting in a new version of gemma2:9b-instruct-fp16:
- Older Version (6 weeks ago): gemma2:9b-inst…
-
Hi, I wanted to ask if support for Gemma2 models can be added to this binding? Specifically, I was trying to load the `gemma2-2b-it` model but wasn't able to because the ModelType enum only support…
-
Is it possible to use the approach described in https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/llm_embedder/docs/fine-tune.md ?
-
I fine-tuned Gemma2 2B Instruct with BitsAndBytes (int4). It works when tested with transformers.
Then I followed the guide to build mllm and quantize the model for Linux.
But when I test th…
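For context, loading Gemma2 2B Instruct in 4-bit with BitsAndBytes through transformers looks roughly like the sketch below. This is a configuration sketch only; the exact settings used in the report above are unknown, and the model id is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch of a typical int4 (NF4) quantization config; hyperparameters
# are common defaults, not taken from the report above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",  # illustrative model id
    quantization_config=bnb_config,
)
```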
-
https://x.com/huggingface/status/1841730061067563422?s=46&t=Y6UuIHB0Lv0IpmFAjlc2-Q
-
### Command
ollama run llama3.1
ollama run gemma2:2b
### Return
Returns an error because the model requires a newer Ollama version.
Could you please update Ollama to the latest version t…
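For anyone hitting the same error, a sketch of the usual upgrade path on Linux (the install script below is Ollama's documented installer; re-running it updates an existing installation):

```shell
# Check the currently installed version
ollama --version

# Re-run the official installer to update to the latest release (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Then pull and run the model again
ollama run gemma2:2b
```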
-
### What is the issue?
Compared to the 9B model, 27B is ridiculously slow. Is this due to the model architecture?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.49 Pre-release
-
I'm encountering an issue where gradients become NaN while training the Gemma2 model with transformers and flash-attn. I used soft-capping during training.
Environment:
transformers @ git+h…
kiddj updated 2 months ago
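For readers hitting the same NaN issue: the transformers Gemma2 integration recommends eager attention, since some flash-attn code paths do not apply Gemma2's logit soft-capping, which can destabilize training. A minimal sketch (the model id is illustrative):

```python
from transformers import AutoModelForCausalLM

# Sketch: force the eager attention implementation so Gemma2's logit
# soft-capping is applied exactly, instead of a flash-attn path that
# may skip it. Model id is illustrative.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    attn_implementation="eager",
)
```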
-
(1) BGE-EN-ICL
(2) BGE-Multilingual-Gemma2
(3) BGE-Reranker-v2.5-Gemma2-Lightweight