-
I am using "google/gemma-2b-it" model from HuggingFace. I realized there are 99 unused tokens (\ ,\,\...) in first 106 token ids. Does anyone know their purpose? Just wondering.
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
**While running the example code in README.md**

```python
from local_gemma import LocalGemma2ForCausalLM
from transformers import AutoTokenizer
import os

os.environ['HUGGINGFACEHUB_API_TOKEN'] = ''
os.…
```
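Since the excerpt is cut off at `os.…`, here is a hedged sketch of how such a README-style example typically continues; the model ID and generation arguments below are assumptions, not taken from the actual README:

```python
from local_gemma import LocalGemma2ForCausalLM
from transformers import AutoTokenizer
import os

os.environ['HUGGINGFACEHUB_API_TOKEN'] = '<your-hf-token>'  # hypothetical placeholder

# Assumed model ID; local_gemma mirrors the transformers from_pretrained API.
model = LocalGemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```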
-
Hi 👋,
It would be really great if you could add support for the Gemma model series (i.e. the 2B and 7B variants; the 7B in particular is what I would like most), since I see that it is currently not su…
-
Authentication in code with `token=hf_token` doesn't work unless you use `subprocess.run(["local-gemma", "--token", hf_token, "What is the capital of France"])`.
`model = LocalGemma2ForCausalLM.from_pretr…`
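A hedged sketch contrasting the two authentication paths described above; the model ID is an assumption, and the `token` keyword follows the usual Hugging Face `from_pretrained` convention:

```python
import subprocess
from local_gemma import LocalGemma2ForCausalLM

hf_token = "<your-hf-token>"  # hypothetical placeholder

# Path 1: in-code authentication (reported as not working here).
model = LocalGemma2ForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",  # assumed model ID
    token=hf_token,
)

# Path 2: the CLI workaround; note that args must be passed as one list.
subprocess.run(["local-gemma", "--token", hf_token, "What is the capital of France"])
```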
-
In the [notebook](https://colab.research.google.com/drive/1fxDWAfPIbC-bHwDSVj5SBmEJ6KG3bUu5?usp=sharing#scrollTo=LjY75GoYUCB8) where you mentioned how the absence of the `<bos>` token affects the training lo…
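To illustrate the behaviour in question, a minimal sketch (standard `transformers` tokenizer API) showing how the `<bos>` token appears or disappears depending on `add_special_tokens`:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2b-it")

with_bos = tok("Hello world").input_ids
without_bos = tok("Hello world", add_special_tokens=False).input_ids

print(with_bos)     # begins with the <bos> id
print(without_bos)  # same ids, without the leading <bos>
```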
-
Hi,
We have run three Google Gemma models with Winogrande on MTL or LNL, and we got much lower accuracy than the Open LLM Leaderboard. The detailed data is below:
Model | Precision | Device | Trans…
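For reference, a hedged sketch of how a Winogrande number is typically reproduced with the EleutherAI lm-evaluation-harness Python API; the model ID and dtype are assumptions, not taken from the table above:

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=google/gemma-7b,dtype=bfloat16",  # assumed checkpoint
    tasks=["winogrande"],
)
print(results["results"]["winogrande"])
```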
-
# Model name
Google Gemma family (7B, 2B, 7B-instruct, 2B-instruct)
# Parameters
Not that I'm aware of.
# Source
Models are available via Hugging Face (`transformers`):
7B: https://huggingface.co…
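For context, all four requested checkpoints load through the standard `transformers` auto classes; a minimal sketch, where the `google/gemma-*` IDs follow the usual Hub naming (an assumption here, since the link above is truncated):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any of: google/gemma-7b, google/gemma-2b, google/gemma-7b-it, google/gemma-2b-it
model_id = "google/gemma-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```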
-
Hi, thanks for the interesting project!
I created a Gemma 7B based model, [webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter).
This model is in Hugging Face transformers format and …
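Since the excerpt is cut off, here is a hedged sketch of how a Gemma-based adapter like this is typically loaded; treating it as a PEFT adapter on top of the Gemma 7B base is an inference from the repo name, not something stated in the excerpt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "webbigdata/C3TR-Adapter")
tokenizer = AutoTokenizer.from_pretrained("webbigdata/C3TR-Adapter")
```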
-
### What is the issue?
Try this in Ollama with Gemma 2 9B or 27B; it just never stops.
Give a succinct summary of the entire email conversation in not more than 40 words,
Emails To Andrew Fastow:
…
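A hedged sketch of one way to bound the runaway generation through Ollama's HTTP API; `num_predict` caps the number of generated tokens, and the model tag and prompt below are placeholders (this limits the symptom, it does not fix the underlying stop-token issue):

```python
import requests

# Cap generation so the model cannot run forever.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:9b",
        "prompt": "Give a succinct summary of the email conversation in not more than 40 words.",
        "stream": False,
        "options": {"num_predict": 64},
    },
)
print(resp.json()["response"])
```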