-
The adversarial attack is broken for the Gemma 2 models
Code:
```
import nanogcg
import torch
from nanogcg import GCGConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
mod…
-
How do I fine-tune a Gemma model?
-
I'm not sure. Maybe I'm doing something wrong, but when the model gives an answer it's always exactly the same, no matter how many times I press the regenerate button. Same question = same answ…
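Identical answers on every regenerate usually mean the backend is decoding greedily (or sampling with a fixed seed) rather than sampling. A minimal plain-Python illustration of the difference, using made-up next-token logits (no real model involved):

```python
import math
import random

logits = [2.0, 1.5, 0.5, 0.1]  # hypothetical next-token logits

# Greedy decoding: argmax of the logits -> the same token every time,
# which is why "regenerate" can keep producing an identical answer.
greedy = logits.index(max(logits))

# Sampling: draw from the softmax distribution -> varies across runs,
# unless the serving stack pins the RNG seed.
temperature = 0.8
exps = [math.exp(l / temperature) for l in logits]
total = sum(exps)
probs = [e / total for e in exps]
sampled = random.choices(range(len(logits)), weights=probs, k=1)[0]
```

If the UI exposes a temperature or `do_sample` setting, enabling sampling (temperature > 0) should make regenerated answers differ.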
-
Hi there,
I got the same error when I export mistral gptq model to onnx using
`python -m qllm --load TheBloke/Mistral-7B-Instruct-v0.2-GPTQ --export_onnx=./mistral-7b-chat-v2-gptq-onnx --pack_mode…
-
I see some of my runs missing tensors compared to the ones that when I load to work on my evals e.g., see wrong
```bash
(AI4Lean) root@miranebr-math-p4de-math-aif-sft:~/data/runs/10092024_03h17m15s_…
-
### What happened?
Sample times are greatly increased with --top-k 0, especially with Gemma models.
### Name and Version
version: 3570 (4134999e)
built with Apple clang version 15.0.0 (clang…
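The slowdown is plausible because `--top-k 0` disables pruning, so the sampler must rank the entire vocabulary at every step, and Gemma's vocabulary is unusually large (roughly 256k entries). A rough, illustrative sketch of why top-k pruning is cheaper (the vocabulary size here is approximate):

```python
import heapq
import random

vocab_size = 256_000  # Gemma-class vocabulary size (approximate)
random.seed(0)
logits = [random.random() for _ in range(vocab_size)]

# top-k 0: the sampler effectively ranks the whole vocabulary each step.
full_rank = sorted(logits, reverse=True)

# top-k 40: only the 40 best candidates are kept, an O(n log k) pass
# over the logits instead of a full O(n log n) sort.
top40 = heapq.nlargest(40, logits)

assert top40 == full_rank[:40]
```

The larger the vocabulary, the bigger the gap between the two paths, which would explain why Gemma models are hit hardest.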
-
### System Info
`peft=0.12.0`
`transformers=4.44.0`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
-…
-
### Model ID
google/gemma-2-27b-it
### Model type
Decoder model (e.g., GPT)
### Model languages
- [x] Danish
- [x] Swedish
- [x] Norwegian (Bokmål or Nynorsk)
- [x] Icelandic
- [x] Faroese
- [x] …
-
I downloaded and then copied the model gemma-2b-it-cpu-int4.bin that I got from Kaggle, this one:
![image](https://github.com/DenisovAV/flutter_gemma/assets/64015162/21b4d5ce-a52d-4f9e-9a49-1f62ebf5f1bc)…
-
Hi,
Thank you for this great software.
Unfortunately, I can't make autocomplete work on my computer.
This is on Windows 10 Pro x64, VSCodium v1.85.1, Release 23348, Privy v0.2.7.
The code explana…