-
I've trained Gemma 2B in 16-bit with LoRA. With the adapters loaded separately everything works just fine, but after merging the adapters the model becomes literally unusable.
![image](https://github.c…
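For context on why this is surprising: folding a LoRA update into the base weight is mathematically a no-op, so a model that breaks only after merging usually points at how the fold is performed (for example, doing it in 16-bit). A minimal numerical sketch, assuming the standard LoRA parameterization; all shapes and values below are made up for illustration:

```python
import numpy as np

# Hypothetical shapes: a 4x4 base weight with rank-1 LoRA factors.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float32)   # base weight (out x in)
A = rng.standard_normal((1, 4)).astype(np.float32)   # lora_A (r x in)
B = rng.standard_normal((4, 1)).astype(np.float32)   # lora_B (out x r)
alpha, r = 8, 1

# Adapter kept separate: y = x @ W.T + (alpha/r) * (x @ A.T) @ B.T
# Merging folds the same update into the weight once:
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal((2, 4)).astype(np.float32)
y_separate = x @ W.T + (alpha / r) * (x @ A.T) @ B.T
y_merged = x @ W_merged.T
print(np.allclose(y_separate, y_merged, atol=1e-5))  # → True
```

If the same check diverges once W is cast to float16 before the fold, that is a hint to perform the merge in float32 and cast the merged weight back down afterwards.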
-
Hi,
I'm facing the following issue when trying to chat with Ollama:
```
04/17/2024 01:13:07 PM utils.py 273 : Failed to get max tokens for LLM with name gemma. Defaulting to 4096.
Trac…
-
Hi there, I encountered a strange bug after trying to load the gemma-2b model using KerasNLP.
My fine-tuning code is the following:
```python
def fine_tune(self, X, y):
    data = generate_train…
-
# Experiments
Idea: Repeat most of the unlearning experiments (continuous, batch, sequential) with harmfulness and evaluate. Based on the results, decide on the best hyperparameters for unlearning fren…
-
### Description of the bug:
I ran the Gemma-7B model based on the code in the example, and found that the model's answers were rather poor; it didn't seem to understand my question at all. Is this …
-
E-mailed her 4 days ago.
Awaiting response.
-
Evaluating gemma-2b with xcopa looks good, but the xnli result looks weird.
xcopa result:
```
"results": {
"xcopa_zh": {
"acc,none": 0.616,
"acc_stderr,none": 0.021772369465…
-
Hello, I'm following the instructions provided in your README, and when I run `pip install git+https://github.com/google-deepmind/gemma.git`, it throws an error that says **"subprocess-exited-with-…
-
I'm running on WSL2/Ubuntu on Win11. Deliberately using CPU mode as my GPU is too weak. Using Python 3.10.12.
Here is the output when trying to run sampling.py:
```
~/gemma$ python3 examples/sa…
-
I say
"你好" ("Hello")
and it replies
"你好,
希望好。
好"
(roughly: "Hello, / hope good. / Good", i.e. garbled output), but it works correctly for Quyen-SE.