-
**Describe the bug**
Most of the genomic associations I run result in a PVE of .999. I have decent-value peaks that are outliers, and I have a small sample size.
**To Reproduce**
Steps to reproduce the beh…
-
### Description
When tokenizing a text and then decoding those tokens, one can see that tokenization now (as of version 0.14.0) adds one additional leading space to `text` for every call of `Context.Toke…
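The accumulating-space behavior described above can be illustrated with a self-contained toy. The `encode`/`decode` functions below are illustrative stand-ins (not the library's real `Context` API): they mimic a SentencePiece-style decoder that re-adds a leading space on every round trip.

```python
# Toy model of the reported bug: each encode/decode round trip prepends
# one space. These functions are hypothetical stand-ins, not the real API.
def encode(text: str) -> list[str]:
    # Hypothetical character-level encoder.
    return list(text)

def decode(tokens: list[str]) -> str:
    # Hypothetical decoder that joins tokens and re-adds a leading space
    # (the failure mode described in the report).
    return " " + "".join(tokens)

text = "hello"
for _ in range(3):
    text = decode(encode(text))
print(repr(text))  # spaces accumulate: '   hello'
```

A round-trip test like this loop is a cheap regression check: a well-behaved tokenizer should leave `text` unchanged (or change it exactly once, deterministically) across repeated encode/decode cycles.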
-
The cookbook aims to provide a comprehensive guide for researchers and practitioners interested in fine-tuning the Gemma model from Google on a mental health assistant dataset.
Key components of th…
-
**Is your feature request related to a problem? Please describe.**
ShieldGemma was released in Jul 2024 with [v0.14.3](https://github.com/keras-team/keras-hub/releases/tag/v0.14.3), but since then …
-
Hello.
I'm trying to reproduce the results on the leaderboard.
For each model, I run the following script, according to the README.md.
The script is run in a Python 3.10.15 environment created by…
-
- [x] MiniCPM-Llama3-V-2_5
- [x] Florence 2
- [x] Phi-3-vision
- [x] Bunny
- [x] Dolphin-vision-72b
- [x] Llava Next
- [x] Qwen2-VL
- [x] Pixtral
- [x] Llama-3.2
- [x] Llava Interleave
- [x] …
-
Hi.
I've been fine-tuning Gemma-2-2B-it on Google Colab and saved the fine-tuned model to Hugging Face.
When I load the model from the Hugging Face Hub, I keep getting inference errors.
`from unsloth impo…
-
Hi @danielhanchen
I am trying to fine-tune gemma2-2b for my task, following the continued-finetuning guidelines in Unsloth. However, I am facing OOM while doing so. My intent is to train gemm…
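A common first mitigation for OOM during fine-tuning, regardless of framework, is to shrink the per-device batch and compensate with gradient accumulation so the effective batch size is unchanged. A minimal sketch of the arithmetic (all values below are illustrative, not from the report):

```python
# Effective batch size = per-device batch * gradient-accumulation steps.
# Holding it constant while shrinking the per-device batch trades peak
# memory for more accumulation steps per optimizer update.
target_effective_batch = 32   # illustrative target
per_device_batch = 2          # reduced until the model fits in memory
grad_accum_steps = target_effective_batch // per_device_batch
print(grad_accum_steps)  # 16
```

Most trainer APIs expose these as two separate knobs (e.g. a per-device batch size and an accumulation-steps count), so this division is usually all that is needed to keep the training dynamics comparable.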
-
**Describe the bug**
`git lfs pull --include gemma-2-9b-it-Q8_0_L.gguf`
vs
`git lfs pull gemma-2-9b-it-Q8_0_L.gguf` (typed accidentally)
does not make it very clear how many files, or how much data …
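The distinction matters because in `git lfs pull` the positional argument is a remote name, not a file filter; only `--include` (or `-I`) limits which LFS objects are downloaded. A sketch of the two invocations (the repository checkout is assumed):

```shell
# Intended: fetch only the one model file via the --include filter.
git lfs pull --include "gemma-2-9b-it-Q8_0_L.gguf"

# Accidental form: the filename is parsed as a *remote* name, so the
# pull is not restricted to that file at all.
git lfs pull gemma-2-9b-it-Q8_0_L.gguf
```

Quoting the pattern also lets `--include` take globs such as `"*.gguf"`, which is useful when several quantizations share a prefix.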
-
**Qwen2**
```
warning: not compiled with GPU offload support, --n-gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 2…
```
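The warning in the log means the binary was built without GPU offload, so `--n-gpu-layers` is ignored. Per the README pointer in the warning, the fix is to rebuild with GPU BLAS support; a sketch assuming an older Makefile-based CUDA build (flag names differ across llama.cpp versions, e.g. newer CMake builds use `-DGGML_CUDA=ON`):

```shell
# Rebuild llama.cpp with CUDA offload enabled (older Makefile flag;
# check the README of your checkout for the current flag name).
make clean
make LLAMA_CUBLAS=1
```

After rebuilding, the same `main` invocation should honor `--n-gpu-layers` instead of printing the warning.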