-
Could you add a parameter for Gemma?
-
Hi, are there any plans to add Google Gemma? The recently released versions work pretty well in LM Studio!
-
[Groq](https://groq.com) provides an [OpenAI compatible API](https://console.groq.com/docs/openai) to several LLMs e.g. LLaMA3 8b, LLaMA3 70b, Mixtral 8x7b, Gemma 7b (documented on the [models page](h…
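Because the Groq endpoint speaks the OpenAI chat-completions protocol, it can be called with nothing but the standard library. The sketch below builds such a request; the model name `llama3-8b-8192` is an assumption and may not match Groq's current catalog, and `GROQ_API_KEY` is a placeholder environment variable.

```python
# Minimal sketch of calling Groq's OpenAI-compatible chat endpoint using
# only the Python standard library. The endpoint path and payload shape
# follow the OpenAI chat-completions convention that Groq documents.
import json
import os
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for Groq."""
    body = json.dumps({
        "model": model,  # assumed model id, e.g. "llama3-8b-8192"
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GROQ_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


if __name__ == "__main__":
    key = os.environ.get("GROQ_API_KEY", "")
    req = build_request("llama3-8b-8192", "Say hello.", key)
    # Only send the request when a real key is configured.
    if key:
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

The same payload works with any OpenAI-compatible client by pointing its base URL at `https://api.groq.com/openai/v1`.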
-
For the CHAT command
https://huggingface.co/blog/gemma
-
## 🚀 Feature
120 Mixology runs are failing due to:
```python
raise ValueError("LayerNorm is currently not supported by Thunder!")
```
### Additional context
```
set -e
export PYTORCH_C…
-
```
~/gemma.cpp$ cmake --build build -j 4
[79/81] Linking CXX executable benchmark.exe
FAILED: benchmark.exe
C:\WINDOWS\system32\cmd.exe /C "cd . && C:\msys64\clang64\bin\clang++.exe -O3 -DNDEBUG …
-
Hi!
I noticed that a Gemma-generated summary has issues when it "hallucinates" the specific token "▁viciss" (id: 200507, as found in the [tokenizer file](https://huggingface.co/google/gemma-7b/blob/m…
-
I downloaded the weights from `https://huggingface.co/google/gemma-2b-cpp/blob/main/2b-pt.sbs` but can't get them to run:
```
$ ./gemma --tokenizer ~/Downloads/gemma-2b-pt/tokenizer.spm --model 2b-it --…
-
Using the following code yields a not-supported error. I would love to see this model supported, since it's currently one of the few Finnish-language LLMs.
```
from unsloth import FastLanguageModel
impo…
-
I am building a container image on top of the official `ollama/ollama` image and I want to store in this image the model I intend to serve, so that I do not have to pull it after startup. The use case…
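One common pattern for baking a model into an image built on `ollama/ollama` is to start the server in the background during the build, pull the model, and let the downloaded blobs be committed into the layer. This is a sketch; the model name `gemma:2b` and the 5-second wait are assumptions.

```dockerfile
FROM ollama/ollama

# Start the server just long enough to pull the model during the build,
# so the downloaded blobs end up inside this image layer.
RUN ollama serve & \
    sleep 5 && \
    ollama pull gemma:2b
```

At container startup the model is then already present under the default `OLLAMA_MODELS` directory, so no pull is needed after `ollama serve` comes up.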