-
It's not providing value and slows down the build. Tools can be implemented in Groovy on top of the existing Gemma CLI module.
-
Could you please add fine-tuning support for gemma-2? It has good multilingual capabilities and is a good candidate for fine-tuning for languages other than English.
Its different sizes also m…
-
Hi,
I'm a bit confused. Should I use Gemma formatting tags during fine-tuning https://ai.google.dev/gemma/docs/formatting , or should I use this template: 'Instruction:\n{instruction}\n\nResponse:\n{…
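For reference, the two layouts mentioned above can be sketched as plain string templates. The Gemma turn tags follow the linked formatting guide; the Alpaca-style helper mirrors only the prompt portion of the quoted template, and both function names are illustrative:

```python
def gemma_chat_prompt(instruction: str) -> str:
    """Wrap an instruction in Gemma's chat control tokens (per the
    ai.google.dev/gemma/docs/formatting guide)."""
    return (
        "<start_of_turn>user\n"
        f"{instruction}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def alpaca_style_prompt(instruction: str) -> str:
    """Generic Instruction/Response template with no Gemma-specific tokens."""
    return f"Instruction:\n{instruction}\n\nResponse:\n"


print(gemma_chat_prompt("Translate 'hello' to French."))
```

Whichever layout is used during fine-tuning must match the one used at inference time, since the instruction-tuned Gemma checkpoints were trained with the turn tags.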
-
https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-revs-up-inference-for-google-gemma/
This post says Gemma supports quantization; does RecurrentGemma support quantization as well?
-
### Description of the bug:
Hi @pkgoogle,
I tried to use convert_gemm2_to_tflite.py to convert the model and hit two errors during conversion.
- The first one I was able to fix by changing loader.py.
```
…
-
### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a sim…
-
### Description of the bug:
I am trying to run an exported TFLite model, specifically the Gemma TFLite model. After downloading both tokenizer.model and gemma-1.1-2b-it-cpu-int8.bin from Hugging …
-
Hi, I wanted to ask if support for Gemma2 models can be added to this binding. Specifically, I was trying to load the `gemma2-2b-it` model but wasn't able to because the ModelType enum only support…
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…
-
https://x.com/huggingface/status/1841730061067563422?s=46&t=Y6UuIHB0Lv0IpmFAjlc2-Q