-
-
I cannot get the Colabs to run on https://colab.research.google.com.
I had to replace
```
!pip install https://github.com/deepmind/gemma
```
with
```
!pip install "git+https://github.…
```
-
@danielhanchen Ran into `ModuleNotFoundError: No module named 'triton'` while fine-tuning google/gemma-7b-it. I installed xformers successfully by following documentation I found from Unsloth, but whil…
-
Environment
* Deployed a gemma-7b-it model on Vertex AI Model Garden using the "Deploy" button from the Gemma card. No additional tuning was done.
* I have an instance running on a g2-standard-12 ma…
-
**Describe the bug**
When attempting to shard a `gemma_2b_en` model across two (consumer-grade) GPUs, I get:
```
ValueError: One of device_put args was given the sharding of NamedSharding(mesh=…
```
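This error typically means the arrays handed to `jax.device_put` carry inconsistent shardings. A minimal JAX sketch of what a *consistent* `NamedSharding` across two devices looks like — note this is independent of KerasNLP/Gemma, and the two-device CPU simulation via `XLA_FLAGS` is an illustration, not the reporter's actual GPU setup:

```python
import os
# Simulate two devices on CPU so the sketch runs anywhere
# (assumption: stands in for the reporter's two consumer GPUs).
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=2"

import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build one 1-D mesh over both devices and derive every sharding from it;
# mixing shardings built from different meshes triggers the ValueError above.
devices = np.array(jax.devices()[:2])
mesh = Mesh(devices, axis_names=("model",))
sharding = NamedSharding(mesh, P("model"))

# Split a (divisible) weight vector along the "model" axis: 4 elements per device.
weights = jnp.arange(8.0)
sharded = jax.device_put(weights, sharding)
print(len(sharded.addressable_shards))  # one shard per device
```

The key design point is that a single `Mesh` object is created once and reused for every `device_put`, so all placed arrays agree on the device layout.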
-
Hi!
I notice that the Gemma-generated summary has issues when it "hallucinates" the specific token "▁viciss" (id: 200507, as found in the [tokenizer file](https://huggingface.co/google/gemma-7b/blob/m…
-
Server stops responding after one API call, and the chat starts streaming `....`, or Gemma replies only with `GGGGGGGGG`s.
tfius updated 1 month ago
-
### Your current environment
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.…
-
I found that the current repository configuration is not compatible with Gemma2. The reason might be that transformers and vllm are not fully compatible with Gemma2. Could you share the package config…
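To share a package configuration in a reproducible way, the installed versions can be dumped with the standard library. A small sketch — the package names listed are the ones typically involved in a Gemma2 + vLLM setup, which is an assumption:

```python
from importlib.metadata import version, PackageNotFoundError

def package_versions(names):
    """Return a name -> version mapping for the given distributions,
    marking missing ones, so an environment report is easy to paste into an issue."""
    report = {}
    for name in names:
        try:
            report[name] = version(name)
        except PackageNotFoundError:
            report[name] = "not installed"
    return report

# Packages assumed relevant here; adjust to the actual environment.
for pkg, ver in package_versions(["transformers", "vllm", "torch"]).items():
    print(f"{pkg}=={ver}")
```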
-
# Motivation
As described [here](https://genai.stackexchange.com/questions/699/how-to-set-ollama-temperature-from-command-line), ollama doesn't allow setting the temperature directly from the CLI, but …
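Where the CLI falls short, ollama's REST API does accept sampling parameters through the `options` field of `/api/generate` (a `PARAMETER temperature` line in a Modelfile is the other documented route). A minimal Python sketch that builds such a request — the model name, prompt, and port are placeholders:

```python
import json

def build_generate_request(model, prompt, temperature):
    """Build the JSON body for ollama's /api/generate endpoint.
    Sampling parameters such as temperature go under "options"."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},
    }

payload = build_generate_request("gemma:7b", "Why is the sky blue?", 0.2)
print(json.dumps(payload))

# Sending it requires a running ollama server (assumption: default port 11434):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate",
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read())
```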