-
Hello, I am getting an error when running the sample below.
The file it requests does not exist in the original source,
so I copied a preprocessor_config.json from a model in the same model family and used it.
…
-
Please add Qwen2 support
```
EETQ_CAUSAL_LM_MODEL_MAP = {
"llama": LlamaEETQForCausalLM,
"baichuan": BaichuanEETQForCausalLM,
"gemma": GemmaEETQForCausalLM
}
```
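A minimal sketch of the requested change: `Qwen2EETQForCausalLM` does not exist in the snippet above and is assumed here; it would need to be implemented analogously to the existing wrappers. Placeholder classes stand in for the real ones so the sketch is self-contained.

```python
# Placeholder classes for illustration only; the real wrappers live in
# the EETQ integration code.
class LlamaEETQForCausalLM: ...
class BaichuanEETQForCausalLM: ...
class GemmaEETQForCausalLM: ...
class Qwen2EETQForCausalLM: ...  # hypothetical new wrapper for Qwen2

EETQ_CAUSAL_LM_MODEL_MAP = {
    "llama": LlamaEETQForCausalLM,
    "baichuan": BaichuanEETQForCausalLM,
    "gemma": GemmaEETQForCausalLM,
    "qwen2": Qwen2EETQForCausalLM,  # proposed addition
}
```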
-
-
```
[Gemma - http-nio-8181-exec-266 (2022-04-21 11:13:33,376)] ERROR ubic.gemma.core.visualization.ExperimentalDesignVisualizationServiceImpl.sortVectorDataByDesign(121) | Did not find cached layout …
```
Affected datasets: GSE86193, GSE207533, GSE6565, GSE29361.
The list is not exhaustive; I just gathered them from the error logs.
```
[Gemma - http-nio-8181-exec-360 (2024-02-28 11:51:17,011)] ERROR…
```
-
### 🚀 The feature, motivation and pitch
Gemma-2 and the new Ministral models use alternating sliding-window and full-attention layers to reduce the size of the KV cache.
The KV cache is a huge inferen…
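A back-of-envelope illustration of why alternating layers shrink the cache: a sliding-window layer only has to cache the last `window` tokens instead of the full sequence. The model dimensions below are illustrative placeholders, not Gemma-2's or Ministral's actual config.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   window=None, bytes_per_elem=2):
    """Bytes of KV cache: 2x for K and V; a sliding-window layer
    caches at most `window` tokens."""
    cached = seq_len if window is None else min(seq_len, window)
    return 2 * num_layers * num_kv_heads * head_dim * cached * bytes_per_elem

# Hypothetical model: 32 layers, 8 KV heads, head_dim 128, 32k context, fp16.
full = kv_cache_bytes(32, 8, 128, 32_768)
# Alternating variant: 16 full-attention + 16 sliding-window (4k window) layers.
mixed = (kv_cache_bytes(16, 8, 128, 32_768)
         + kv_cache_bytes(16, 8, 128, 32_768, window=4_096))
print(full / 2**30, mixed / 2**30)  # 4.0 GiB vs 2.25 GiB
```

With these (made-up) numbers the cache drops from 4 GiB to 2.25 GiB at full context, and the gap widens as the context length grows past the window size.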
-
### What is the issue?
I tried to import a finetuned llama-3.2-11b-vision model, but got "Error: unsupported architecture."
To make sure my model was not the problem, I downloaded [meta-llama/Ll…
-
```
[Gemma - http-nio-8181-exec-274 (2022-04-22 01:31:29,759)] ERROR ubic.gemma.core.visualization.ExperimentalDesignVisualizationServiceImpl.getExperimentForVector(480) | Vector is sliced, but the e…
```
-
**Describe the bug**
Most of the genomic associations I run result in a PVE of .999. I have peaks with decent values that are outliers, and a small sample size.
**To Reproduce**
Steps to reproduce the beh…
-
Hi @danielhanchen
I am trying to fine-tune gemma2-2b for my task following the continued-finetuning guidelines in unsloth. However, I am hitting OOM while doing so. My intent is to train gemm…