-
gemma2 support
👉👉👉[My Bilibili channel](https://space.bilibili.com/3493277319825652)
👉👉👉[My YouTube channel](https://www.youtube.com/@AIsuperdomain)
win4r updated
3 months ago
-
### Description
I am trying to fine-tune Gemma 2 on TPU and got the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/jax/_src/compiler.py", l…
-
Hi guys, first of all, many thanks for this project. It has allowed me to finetune some models that I had not been able to finetune with other alternatives. Now I want to finetune gemma2, which seems…
x1250 updated
2 months ago
-
I found that the scripts in GEMMA do not support GEMMA2. Is there any plan to add support for GEMMA2?
-
**Describe the bug**
At least with gemma2-27b, the context length setting in the Ollama model settings appeared to do nothing.
To get a larger context length, I had to create an Ollama modelfile with `PARAMET…
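A minimal sketch of such a Modelfile, assuming the `gemma2:27b` tag is already pulled (the context value of 8192 is an illustrative choice, not the reporter's exact setting):

```
# Base model to extend (assumed tag)
FROM gemma2:27b
# num_ctx sets the context window size in tokens
PARAMETER num_ctx 8192
```

It can then be registered under a new name with `ollama create gemma2-27b-8k -f Modelfile` and used like any other local model.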
-
### MediaPipe Solution (you are using)
Android library: com.google.mediapipe:tasks-genai:0.10.14
### Programming language
Android Java
### Are you willing to contribute it
None
### De…
-
Hello, when will gemma2 models be supported?
![obraz](https://github.com/lmstudio-ai/lmstudio-bug-tracker/assets/22175646/4159a7bd-163d-4be3-82a2-1553bbecaa5b)
-
# :grey_question: Context
As mentioned earlier:
- https://github.com/aurelio-labs/semantic-router/pull/346
I gave semantic-router a try and got really impressive results, see [🔀 Semanti…
-
I managed to run other models like `gemma2` and `phi3.5` by changing the lines
```
# TODO: Eventually this will move to the llama cli model list command
# mapping of Model SKUs to ollama models
O…
-
I tried to reproduce your gemma2B reward model training again and found that the reward model architecture fine-tuned with internlm2 had an output head of 1. I downloaded your GRM-Gemma-2B-Sftrug re…