-
Hi there,
has anyone had any success AWQ-quantizing the gemma2-27b-it model? I have two RTX 3090s and 128 GB of RAM in my machine,
but this code
```
import torch
from awq import AutoAWQForCausalLM
fr…
```
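For comparison, the standard AutoAWQ quantization recipe looks roughly like the sketch below; the model path, output path, and `quant_config` values are assumptions on my side, taken from the usual AutoAWQ examples rather than from this report:

```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "google/gemma-2-27b-it"   # assumed source model
quant_path = "gemma-2-27b-it-awq"      # assumed output directory

# Typical AutoAWQ settings: 4-bit weights, group size 128
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the fp16 model on the CPU first; 128 GB of system RAM should cover a 27B model
model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, use_cache=False)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantization (this is the step that touches the GPUs)
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized weights and the tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

As far as I understand, AWQ calibrates layer by layer, so the full fp16 model mainly needs to fit in system RAM rather than in VRAM on the two 3090s.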
-
The conversation interface I am using is compatible with GPT's interface. I tried modifying the BaseUrl and Authorization in the scripts js file. However, it always shows 'Cannot read properties of und…
-
Gemma 7B: https://huggingface.co/google/gemma-7b
Gemma 2B: https://huggingface.co/google/gemma-2b
Blog: https://blog.google/technology/developers/gemma-open-models/
Paper: https://storage.googleapi…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
#### Description
I encountered crashes in my application when attempting to load the `gemma-2b-it.gguf` and `Phi-3-mini-4k-instruct-q4.gguf` models. Below are the error messages and details for eac…
-
For example, [gemma-2-27b-bnb-4bit](https://huggingface.co/unsloth/gemma-2-27b-bnb-4bit) shows 14.6 B parameters, while the main model, https://huggingface.co/google/gemma-2-27b, shows 27.2 B parameters?
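If it helps, my understanding (an assumption, not something stated on either model card) is that the Hub parameter counter simply sums tensor element counts in the safetensors files, and bitsandbytes 4-bit packs two 4-bit weights into every uint8 element, so each quantized linear layer reports roughly half its real parameter count while embeddings and norms stay unquantized. A toy illustration of that packing arithmetic:

```
import torch

# Toy linear weight: 4096 x 4096 = ~16.8M "real" parameters.
real = torch.empty(4096, 4096, dtype=torch.bfloat16)

# bitsandbytes-style 4-bit packing: two 4-bit values per uint8 element,
# so the stored tensor has half as many elements as the original weight.
packed = torch.empty(real.numel() // 2, dtype=torch.uint8)

print(real.numel())    # 16_777_216 parameters in the original layer
print(packed.numel())  # 8_388_608 elements counted for the packed layer
```

Halving the quantized linear weights while leaving the embeddings and norms in 16-bit lands in the right ballpark of the 14.6 B figure shown on the Hub.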
-
Just to make sure that FIM is on the radar at jupyter-ai, I'm leaving this comment here.
FIM ([Fill-in-the-Middle](https://medium.com/@SymeCloud/what-is-fim-and-why-does-it-matter-in-llm-based-ai-53f333855…
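For illustration, a minimal FIM prompt looks like the sketch below; the token spellings are the CodeGemma-style ones and are an assumption here, since other code models (StarCoder, DeepSeek-Coder, etc.) use different special tokens:

```
# Minimal sketch of a fill-in-the-middle (FIM) prompt.
# The prefix/suffix come from the text around the user's cursor;
# the model is asked to generate only the missing middle.
prefix = "def mean(xs):\n    return "
suffix = " / len(xs)\n"

prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# The completion is expected to be something like "sum(xs)", ending at the
# model's end-of-turn / file-separator token.
print(prompt)
```

The point for jupyter-ai is that inline completion needs this prefix/suffix prompt shape rather than a plain left-to-right prompt.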
-
I'll beautify this once I get hold of Azure storage.
I have attached [gemma_7b.mlir](https://storage.googleapis.com/shark_tank/dan/Gemma/gemma_7b.mlir) along with [gemma weights](https://storage.go…
-
So that no wrong impression or lack of clarity arises among suppliers about the importance of the GEMMA standards.
Following up on #1428