-
**Describe the bug**
When attempting to shard a `gemma_2b_en` model across two (consumer-grade) GPUs, I get:
```
ValueError: One of device_put args was given the sharding of NamedSharding(mesh=…
```
-
### Context
This task concerns enabling tests for **gemma-7b-it**. More details can be found in the openvino_notebooks [LLM chatbot README.md](https://github.com/openvinotoolkit/openvino_notebooks/tree…
-
Environment:
- On a fresh install of Ollama + Ollamac, using Gemma 2 9B
- macOS: 14.6.1 (23G93)
- M1 Pro, 32GB
Scenario 1 Steps:
- Start Ollama
- Open Ollamac (fresh install)
- Pick any mo…
-
For example, [gemma-2-27b-bnb-4bit](https://huggingface.co/unsloth/gemma-2-27b-bnb-4bit) reports 14.6 B parameters, while the main model, [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b), has 27.2 B parameters. Why the discrepancy?
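One plausible explanation (an assumption on my part, not confirmed by either model card): bitsandbytes 4-bit layers pack two 4-bit weights into each uint8 storage element, so parameter counters that count storage elements report roughly half of the quantized weights, while the unquantized parts (embeddings, norms) are counted in full. A minimal sketch of that arithmetic, with the unquantized share used as an illustrative guess rather than a measured figure:

```python
# Assumption: 4-bit quantized weights are packed two per uint8, so a naive
# parameter counter sees only half of them; unquantized layers count in full.
total_params = 27.2e9   # full-precision count from the google/gemma-2-27b card
unquantized = 2.0e9     # illustrative guess for embeddings/norms kept in 16-bit

reported = unquantized + (total_params - unquantized) / 2
print(round(reported / 1e9, 1))  # → 14.6
```

Under these assumed numbers, the "missing" ~12.6 B parameters are simply the packed halves of the 4-bit weights, not weights removed from the model.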
-
The @PavlidisLab/curation team compiled a list of single-cell experiments that need special treatment on import. We can convert those into test cases.
## Bulk/single-cell mix
Contains a mixture …
-
### 🚀 The feature, motivation and pitch
Thanks for fixing the soft-capping issue of the Gemma 2 models in the last release! I noticed there's still a [comment](https://github.com/vllm-project/vllm/bl…
-
Hi everyone,
Version:
```
langchain-google-vertexai 1.0.5
```
I'm having issues with the `GemmaLocalKaggle` and `GemmaChatLocalKaggle` classes. They always use the model named "gemma_2…
-
I'll beautify this once I get hold of Azure storage.
I have attached [gemma_7b.mlir](https://storage.googleapis.com/shark_tank/dan/Gemma/gemma_7b.mlir) along with [gemma weights](https://storage.go…
-
### Question
I'm new to development and wanted to know whether converting a Gemma 2B model using the Optimum converter would work for this model?
-
When looking at a subset (i.e. https://dev.gemma.msl.ubc.ca/expressionExperiment/showExpressionExperimentSubSet.html?id=11403&dimension=8420), the only information regarding the factor value is encode…