-
Hi team, I am opening this issue to request support for the Google Gemma 2 models.
Recently, Google released two models: google/gemma-2-27b and google/gemma-2-9b. For an initial trial, we attempted…
-
gemma2 support
👉👉👉[My Bilibili channel](https://space.bilibili.com/3493277319825652)
👉👉👉[My YouTube channel](https://www.youtube.com/@AIsuperdomain)
-
Gemma 2 is an incredible model. With AQLM, it will fit into a 12 GB GPU.
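A back-of-envelope check of that claim (a sketch only: the ~2-bit effective rate is an assumed AQLM setting, and real AQLM codebooks, activations, and KV cache add overhead on top of the weights):

```python
# Rough VRAM estimate for 27B weights under ~2-bit AQLM quantization.
params = 27e9                    # google/gemma-2-27b parameter count
bits_per_weight = 2.0            # assumed effective AQLM bit-width
weight_bytes = params * bits_per_weight / 8
gib = weight_bytes / 2**30
print(f"weights alone: {gib:.1f} GiB")   # well under a 12 GiB budget
```

So the quantized weights come to roughly 6.3 GiB, leaving headroom for the KV cache and activations on a 12 GB card.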
-
Hi guys, first of all, many thanks for this project. It has allowed me to fine-tune some models that I was not able to fine-tune using other alternatives. Now I want to fine-tune gemma2, which seems…
-
Hello, when will gemma2 models be supported?
![obraz](https://github.com/lmstudio-ai/lmstudio-bug-tracker/assets/22175646/4159a7bd-163d-4be3-82a2-1553bbecaa5b)
-
Can fine-tuning for gemma2 be added?
https://youtu.be/N8yBqEZvvsU
-
```
(pytorch) root@DESKTOP-RDS3VMA:~/work/gemma2# python3 builder.py -m google/gemma-2-27b-it -o ~/work/gemma2/gemma2onnx -p fp16 -e cuda -c ~/work/gemma2/temp
Valid precision + execution provider c…
-
I'm encountering an issue where gradients become NaN during the training of the Gemma2 model with transformers and flash-attn. I used soft-capping for training.
Environment:
transformers @ git+h…
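For context, Gemma 2's soft-capping squashes logits into a bounded range with a scaled tanh; early flash-attn kernels did not implement this, which is one plausible source of NaN gradients. A minimal sketch of the operation (the cap value 50.0 matches Gemma 2's reported attention-logit cap, but treat it as an assumption here):

```python
import math

def soft_cap(logits, cap=50.0):
    """Soft-capping as used in Gemma 2: maps each logit x to
    cap * tanh(x / cap), bounding values in (-cap, cap) while
    leaving small logits almost unchanged."""
    return [cap * math.tanh(x / cap) for x in logits]

print(soft_cap([1.0, 100.0, -500.0]))
```

Because tanh saturates, very large logits end up just inside ±cap instead of growing without bound, which is exactly what an attention kernel must reproduce to match the reference implementation.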
-
### What is the issue?
Compared to 9b, 27b is ridiculously slow. Is it because of the model architecture?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.49 Pre-release
-
**Bug Description**
Unable to configure LiteLLM as the provider when pointed at a locally running Ollama server. It is possible this is user error and I don't have the provider configured correctly, but I…
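For reference, a minimal LiteLLM proxy config pointing at a local Ollama instance might look like the sketch below. The `gemma2` model name and the default Ollama port are assumptions; adjust both to match your setup.

```yaml
# litellm proxy config (sketch) -- assumes `ollama pull gemma2` has been run
model_list:
  - model_name: gemma2                    # alias that clients request
    litellm_params:
      model: ollama/gemma2                # Ollama provider prefix + local model tag
      api_base: http://localhost:11434    # Ollama's default port (assumption)
```

If requests still fail with this shape of config, checking that the Ollama server is reachable at `api_base` from the LiteLLM process is a reasonable first step.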