-
Initial reports can be seen at https://github.com/ggerganov/llama.cpp/pull/8227
> [!IMPORTANT]
> A note for everyone: if you think there's a bug in llama.cpp tokenizer, please make sure to tes…
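When testing whether a divergence really comes from the llama.cpp tokenizer, it helps to locate the first token at which two tokenizations disagree. A minimal helper for that comparison (the token IDs below are illustrative, not real Gemma vocabulary IDs):

```python
def first_divergence(ref_ids, test_ids):
    """Return the index of the first differing token ID between two
    tokenizations, or None if the sequences match exactly."""
    for i, (a, b) in enumerate(zip(ref_ids, test_ids)):
        if a != b:
            return i
    # One sequence is a strict prefix of the other.
    if len(ref_ids) != len(test_ids):
        return min(len(ref_ids), len(test_ids))
    return None

# Example: the reference and test tokenizers agree up to index 2.
print(first_divergence([2, 651, 2121, 108], [2, 651, 2070, 108]))  # 2
```

Feeding the two ID lists from the reference tokenizer and from llama.cpp into this helper narrows a bug report down to a single token position.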
-
Hi, I wanted to ask whether support for Gemma 2 models can be added to this binding. Specifically, I was trying to load the `gemma2-2b-it` model but wasn't able to, because the ModelType enum only support…
-
Hi Team,
Looks like Gemma 2 is not supported by candle yet.
-
How can I do function calling with Gemma on local hardware?
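Gemma has no dedicated tool-call tokens, so function calling on local hardware is typically done by prompting the model to emit a JSON call and parsing it out of the generated text. A minimal sketch of the parsing side (the `name`/`arguments` schema is an assumption for illustration, not a Gemma standard):

```python
import json
import re


def extract_tool_call(text):
    """Find the first JSON object in the model's output; if it carries
    'name' and 'arguments' fields, return it as a tool call, else None."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and "name" in obj and "arguments" in obj:
        return obj
    return None


# Example model output after a "respond with a JSON tool call" prompt.
reply = 'Calling the weather tool: {"name": "get_weather", "arguments": {"city": "Berlin"}}'
call = extract_tool_call(reply)
print(call["name"])  # get_weather
```

The generation side is just a system prompt describing the available tools and the expected JSON shape; the parsed call is then dispatched to the matching local function and its result fed back into the conversation.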
-
Unable to perform inference on a Google Pixel 4 device; the error message is as follows:
```log
17:04:57.271 Remote...onImpl W requestCursorAnchorInfo on inactive InputConnect…
-
I am using Gemma-2B and it is not saving checkpoints at all. It hangs (no error, just waiting forever). I use 4 GPUs, but even if memory usage is very low (5 GB out of the 24 GB available per GPU)…
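A common cause of silent checkpoint hangs in multi-GPU training is a collective call (e.g. a barrier or gather inside the save path) that only some ranks reach, leaving the arriving ranks waiting forever. Whether that is the cause here is an assumption, but the deadlock pattern can be illustrated with a plain `threading.Barrier` standing in for the collective:

```python
import threading


def simulate_save(world_size, ranks_that_save, timeout=0.5):
    """Model a collective checkpoint save: every rank must reach the
    barrier, or the ranks that did arrive wait forever (here: time out).
    Returns True if the save completes, False if it would hang."""
    barrier = threading.Barrier(world_size)
    results = []

    def rank_worker(rank):
        if rank in ranks_that_save:
            try:
                barrier.wait(timeout=timeout)
                results.append(True)
            except threading.BrokenBarrierError:
                results.append(False)

    threads = [threading.Thread(target=rank_worker, args=(r,))
               for r in range(world_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return bool(results) and all(results)


# All 4 ranks reach the save barrier: the save completes.
print(simulate_save(4, {0, 1, 2, 3}))  # True
# Only rank 0 calls save (a common bug): the barrier never fills.
print(simulate_save(4, {0}))  # False
```

In real training code the fix is the same shape as the simulation: either every rank calls the saving routine, or the collective is removed from the rank-0-only path.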
-
### Model description
The model loads, but strangely it exposes embedding capabilities and no rerank capability.
````
{
"id": "BAAI/bge-reranker-v2-gemma",
"stats": {
"queue_fraction…
-
### MediaPipe Solution (you are using)
Version: 0.10.14
### Programming language
_No response_
### Are you willing to contribute it
None
### Describe the feature and the current behaviour/state
…
-
Dear authors,
Great work, thanks for sharing.
I am trying to fine-tune bge-reranker-v2-gemma using my own dataset.
However, according to the official fine-tuning command provided:
```bash
…
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…