-
https://huggingface.co/google/recurrentgemma-2b-it
Support for recurrent gemma
-
### System Info
```shell
Optimum version: d87efb2
Transformers version: d479665
ONNXRuntime version: 1.17.1
ONNX version: 1.15.0
```
### Who can help?
@michaelbenayoun @echarlaix
### Informat…
-
Thanks a lot for your great work!
I deployed gemma-2b locally and would like to understand how to run multiple rounds of dialogue effectively.
I searched the internet and found that I could type in p…
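For multi-turn use, the usual approach is to keep the full conversation history and re-render it into the model's chat format on every round. A minimal sketch, assuming Gemma's instruction-tuned turn markers (`<start_of_turn>`/`<end_of_turn>`); in practice `tokenizer.apply_chat_template` does this rendering for you, and the `history` structure here is just an illustrative convention:

```python
# Sketch of multi-round dialogue prompting for a chat-tuned Gemma model.
# Each round, the whole history is re-rendered so earlier turns stay in context.
def build_prompt(history):
    """Render a list of {'role', 'content'} turns into a Gemma-style prompt."""
    parts = []
    for turn in history:
        parts.append(
            f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
        )
    parts.append("<start_of_turn>model\n")  # cue the model to produce its reply
    return "".join(parts)

history = [{"role": "user", "content": "Hi!"}]
prompt = build_prompt(history)
# After generating a reply, append it as a "model" turn plus the next "user"
# turn to `history`, then rebuild the prompt for the next round.
```

The trade-off is that the prompt grows with every round, so long conversations eventually need truncation or summarization of older turns to fit the context window.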
-
I am building a container image on top of the official `ollama/ollama` image, and I want to store the model I intend to serve inside this image so that I do not have to pull it after startup. The use case…
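One common workaround is to pull the model at build time. A sketch of such a Dockerfile, assuming the `gemma:2b` model tag and that a short sleep is enough for the server to come up (ollama's pull command needs the server running, so both are chained in one `RUN` step):

```dockerfile
# Hypothetical sketch: bake a model into an image derived from ollama/ollama.
FROM ollama/ollama

# `ollama pull` talks to the local server, so start it in the background,
# give it a moment to initialize, then pull. The model lands in /root/.ollama
# and is baked into this image layer.
RUN ollama serve & \
    sleep 5 && \
    ollama pull gemma:2b
```

Note this can produce a large image layer, and the sleep is a race-prone heuristic; a retry loop around the pull is more robust.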
-
@danielhanchen Hi Daniel, thanks for your work!
I'm hitting the same error as in issue #275, but this time while trying to save a tuned version of unsloth/gemma-2-9b-it-bnb-4bit.
>> model.save_p…
-
I'm encountering a RuntimeError when attempting to save checkpoints while fine-tuning the "unsloth/gemma-2b-it-bnb-4bit" model. Below is a breakdown of my setup and the error encountered.
Model:…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
### Description
When tokenizing a text and then decoding the resulting tokens, one can see that tokenization now (as of version 0.14.0) prepends one additional space to `text` on every call of `Context.Toke…
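This behavior is consistent with how SentencePiece-style tokenizers mark word boundaries: word-initial pieces carry a `▁` (U+2581) prefix, and a naive decode maps every `▁` to a space, including the one on the first piece. A pure-Python illustration of the mechanism (not the actual library code):

```python
# Illustrative sketch: why a tokenize->decode round trip can gain a leading
# space. SentencePiece pieces mark word starts with '\u2581'; decoding by
# blindly replacing that marker with ' ' also converts the marker on the
# very first piece, prepending a space the input never had.
def naive_decode(pieces):
    return "".join(p.replace("\u2581", " ") for p in pieces)

pieces = ["\u2581Hello", "\u2581world"]  # tokenization of "Hello world"
print(repr(naive_decode(pieces)))  # ' Hello world' -- note the leading space
```

Decoders that want exact round-trips typically strip the marker (or the space) from the first piece only, which is why a change in that special-casing shows up as one extra space per call.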
-
### Your current environment
Collecting environment information.
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Li…
-
Hello,
While running chat-ui and trying some models, I had no problem with phi3 and llama, but when I run gemma2 in vLLM I'm not able to make any successful API request.
in env.local:
{
"name": "google/g…