-
Currently, `evaluation.yaml` lives under the `configs/` directory. Initially we wanted to showcase this recipe as just an example, but it is a core part of the finetuning process and therefore shou…
-
FAILED plugins/validation_tests/test_object_creation.py::test_all_suts_can_evaluate[gemma-9b-it-hf] - modelgauge.secret_values.MissingSecretValues: Missing the following secrets:
scope='hugging_face' …
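The fix is to supply the missing secret for the `hugging_face` scope. Assuming ModelGauge's usual `config/secrets.toml` layout and a `token` key name (both assumptions inferred from the error's scope name — check the plugin's documentation), the entry would look like:

```toml
# config/secrets.toml -- scope and key name assumed, verify against the docs
[hugging_face]
token = "hf_..."  # your Hugging Face access token
```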
-
## 🐛 Bug
Not sure if this is a feature request or bug. I took the [SPMD Gemma ft code from Hugging Face](https://huggingface.co/google/gemma-7b/blob/main/examples/example_fsdp.py) and tried to run …
-
I tried to use ctranslate2 as the inference framework, but it failed with the error below:
"axis 2 has dimension 8192 but expected 7680"
What I've done:
1. First I must con…
-
Linear regression has been implemented on a single core.
The output matches exactly up to 6 decimal places :D.
To test
```sh
./build/faster_lmm_d --geno=data/gemma/BXD_geno.txt.gz --pheno=data/gemma/BXD.ph…
```
-
I found that the current repository configuration is not compatible with Gemma2. The reason might be that transformers and vllm are not fully compatible with Gemma2. Could you share the package config…
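As a rough reference (not this repository's official pin set), Gemma 2 support was added around transformers 4.42 and vLLM 0.5.1, so a minimal version floor might look like the sketch below — verify against the repo's own requirements file:

```text
# Hypothetical minimum versions for Gemma 2 support (assumption, not an official pin)
transformers>=4.42.0
vllm>=0.5.1
```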
-
### Description of the bug:
I ran the Gemma-7B model based on the code in the example, and found that the model's answers were rather poor and it didn't seem to understand my question at all. Is this …
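One common cause of this symptom (an assumption here, not confirmed by the report) is prompting the base `gemma-7b` checkpoint, or the instruction-tuned `gemma-7b-it` checkpoint without its chat template. The helper below is a hypothetical sketch of the turn markers the instruction-tuned model expects; in practice `tokenizer.apply_chat_template` from `transformers` produces this for you.

```python
# Sketch: Gemma's instruction-tuned chat format. gemma-7b-it expects each
# turn wrapped in <start_of_turn>/<end_of_turn> markers; a raw, unformatted
# prompt often yields incoherent answers.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma's chat-template markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("What is the capital of France?")
print(prompt)
```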
-
Adapted the SGLang framework to the BAAI/bge-reranker-v2-gemma model and accelerated its inference.
## Test configuration
- **Hardware**: NVIDIA A10 GPU
- **Sequence length**: 512
## Performance results
The table below compares inference time between the original pipeline and SGLang-accelerated inference across batch sizes:
The queries and documents used in this run were randomly generated, with a combined length of…
-
Since the latest models, such as Llama 3 and Gemma, adopt extremely large vocabularies (128-256K), the size of logits can become very large, consuming a large proportion of VRAM. For example, the foll…
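The back-of-envelope arithmetic makes the scale concrete. Assuming fp16 logits (2 bytes per value) and Llama 3's 128,256-entry vocabulary — the sequence length and batch size below are illustrative, not taken from the report:

```python
# Logits tensor size = batch * seq_len * vocab_size * bytes_per_value.

def logits_bytes(batch, seq_len, vocab, bytes_per_val=2):
    """Memory footprint of a full logits tensor, default fp16 (2 bytes)."""
    return batch * seq_len * vocab * bytes_per_val

# One fp16 logits tensor for a single 8192-token sequence with Llama 3's vocab:
gib = logits_bytes(batch=1, seq_len=8192, vocab=128_256) / 2**30
print(f"{gib:.2f} GiB")  # 1.96 GiB
```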
-
Hello,
Beam's latest version is v2, and they made drastic changes to their SDK and client that render most of the training (fine-tuning) and inference code unusable. There is no `beam run` command anymore, and so on...
…