-
### Bug Description
`NEXT_PUBLIC_GEMINI_MODEL_LIST=+all,+gemini-1.5-pro-exp-0801`
This environment variable does not take effect; redeploying still does not work.
### Steps to Reproduce
```shell
docker run -d --name talk-with-gemini \
  -p 5481:3000 \
  -e GEMINI_…
```
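A minimal sketch of passing the variable at container start (the image reference below is an assumption). One possible cause worth checking: Next.js inlines `NEXT_PUBLIC_*` variables at build time, so a runtime `-e` may have no effect unless the image explicitly reads them at startup:

```shell
# Sketch only: image name/tag is hypothetical
docker run -d --name talk-with-gemini \
  -p 5481:3000 \
  -e NEXT_PUBLIC_GEMINI_MODEL_LIST="+all,+gemini-1.5-pro-exp-0801" \
  talk-with-gemini:latest
```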
-
Hello, I've been asking a lot of questions today. I built the example Android app I created, installed it on a Galaxy S22 (12GB of memory), and found that only the Llama …
-
#### Description
I encountered crashes in my application when attempting to load the `gemma-2b-it.gguf` and `Phi-3-mini-4k-instruct-q4.gguf` models. Below are the error messages and details for eac…
-
### What is the issue?
When using the llm-benchmark tool with Ollama (https://github.com/MinhNgyuen/llm-benchmark), I get around 80 t/s with Gemma 2 2B. When asking the same questions to llama.cpp in conve…
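For a quick cross-check outside the benchmark harness, both runtimes can report their own token rates (the model tag and GGUF file name below are assumptions):

```shell
# Ollama prints prompt/eval token rates after the reply when run with --verbose
ollama run gemma2:2b --verbose "Why is the sky blue?"

# llama.cpp's llama-bench reports t/s for prompt processing (-p) and generation (-n)
./llama-bench -m gemma-2-2b-it-Q4_K_M.gguf -p 512 -n 128
```

Comparing these two readings helps separate a genuine runtime gap from measurement differences in the benchmark tool itself.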
-
# Model name
Google Gemma family (7B, 2B, 7B-instruct, 2B-instruct)
# Parameters
Not that I'm aware of.
# Source
Models are available via huggingface (`transformers`):
7B: https://huggingface.co…
-
I created a tar file from an Unsloth fine-tuned model (base model: unsloth/gemma-2b-bnb-4bit) using PEFT and pushed it to a GCS bucket. I am downloading the artifacts from the GCS bucket, extracting the fil…
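The download-and-extract step described above can be sketched as follows (bucket, blob, and directory names are hypothetical; the client call uses the standard `google-cloud-storage` API):

```python
import tarfile
from pathlib import Path

def fetch_and_extract(bucket_name: str, blob_name: str, dest_dir: str) -> list:
    """Download a model tarball from a GCS bucket, then extract it locally."""
    from google.cloud import storage  # assumes google-cloud-storage is installed
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    local_tar = str(Path(dest_dir) / "model.tar")
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_filename(local_tar)
    return extract_tar(local_tar, dest_dir)

def extract_tar(tar_path: str, dest_dir: str) -> list:
    """Unpack the adapter files (adapter_config.json, adapter_model.safetensors, ...)."""
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest_dir)
        return tar.getnames()
```

After extraction, the adapter directory can be attached to the base model with `PeftModel.from_pretrained(base_model, dest_dir)`.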
-
I would like to ask how to correctly use the script for uploading to Zeno. I am using the same example as https://github.com/EleutherAI/lm-evaluation-harness/blob/main/examples/visualize-zeno.ipynb
```…
-
### Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
### OS Platform and Distribution
Firebase Hosting
### MediaPipe Tasks SDK version
_No respon…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
windows 11
python 3.12
LLaMA-Factory installed today from the latest source
### Reproduction
```yaml
### model
model_na…
-
I tried to reproduce your Gemma-2B reward model training again and found that the reward model architecture fine-tuned with internlm2 has an output head of size 1. I downloaded your GRM-Gemma-2B-Sftrug re…