-
Hi
Can you also implement the Gemma model, to compare with Llama?
best regards
-
Hi team, I checked the locallama thread and found that Gemma works well with the Self-Extend method. It would be awesome if this technique could be added to gemma.cpp.
References:
- [locallama](http…
-
Unrecognized model in D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\Joy_caption_two\text_model. Should have a `model_type` key in its config.json, or contain one of the fo…
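This error usually means the local model folder is missing a `config.json`, or that file has no `model_type` entry for the loader to dispatch on. A minimal sketch of a pre-flight check (the helper name `check_model_config` is hypothetical, not part of ComfyUI):

```python
import json
from pathlib import Path

def check_model_config(model_dir: str) -> str:
    """Return the model_type declared in a local model folder's config.json,
    raising a descriptive error in the situations that trigger the
    'Unrecognized model' message."""
    config_path = Path(model_dir) / "config.json"
    if not config_path.is_file():
        raise FileNotFoundError(f"no config.json in {model_dir}")
    config = json.loads(config_path.read_text(encoding="utf-8"))
    model_type = config.get("model_type")
    if model_type is None:
        raise ValueError(
            f"{config_path} has no 'model_type' key; add one or "
            "re-download the complete model folder"
        )
    return model_type
```

Running this against the folder in the error message would show whether the `config.json` is missing entirely or merely lacks the key.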
-
### 🚀 The feature, motivation and pitch
Trying to run a Gemma 2 model on vLLM's TPU backend fails with a "not implemented for Pallas backend" error.
But searching the Pallas kernels, they do appear to have support for logit s…
-
## Symptoms
I found that using google/gemma-9b-it raises the error shown below.
```
(Some(_), Some(_)) => panic!("both hidden_act and hidden_activation are set"),
```
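This panic fires when the model's `config.json` sets both `hidden_act` (the legacy key) and `hidden_activation` (the newer one). A minimal sketch of a more forgiving resolution, assuming it is acceptable to accept the two keys when they agree and to default Gemma checkpoints to the tanh-approximated GELU (the helper `resolve_activation` is hypothetical, not candle's actual code):

```python
def resolve_activation(config: dict) -> str:
    """Pick one activation from a Gemma-style config dict.

    Accepts either key, errors only when the two genuinely
    disagree, and falls back to Gemma's usual default.
    """
    act = config.get("hidden_act")            # legacy key
    activation = config.get("hidden_activation")  # newer key
    if act is not None and activation is not None and act != activation:
        raise ValueError(
            f"conflicting activations: hidden_act={act!r}, "
            f"hidden_activation={activation!r}"
        )
    # Gemma checkpoints typically use the tanh-approximated GELU.
    return activation or act or "gelu_pytorch_tanh"
```

With this approach a config that redundantly sets both keys to the same value would load instead of panicking.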
-
https://huggingface.co/blog/gemma-peft
-
Why does speed not increase with AWQ? I have a Gemma 2 9B model on one A100.
With float16 the benchmark is 4267.62 tokens per second.
With AWQ 4-bit the benchmark is 4963.73 tokens per second.
I ex…
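For what it's worth, the two figures quoted above do show a modest gain; a quick sketch of the arithmetic (`speedup` is just an illustrative helper, not part of any benchmark tool):

```python
def speedup(baseline_tps: float, quantized_tps: float) -> float:
    """Relative throughput gain of the quantized run over the baseline."""
    return quantized_tps / baseline_tps

# Numbers from the report above: float16 vs AWQ 4-bit on one A100.
print(f"{speedup(4267.62, 4963.73):.2f}x")  # roughly 1.16x
```

So AWQ is about 16% faster here, not slower; the gain is smaller than the 4x memory reduction might suggest because at high batch sizes throughput tends to be compute-bound rather than bandwidth-bound.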
-
https://huggingface.co/google/recurrentgemma-2b-it
Support for RecurrentGemma
-
When trying to run models that were not downloaded through GPT4All, the application crashes. The models were placed in the required folder.
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…