-
### 🐛 Describe the bug
The model: https://github.com/google/gemma_pytorch
To enable Dynamo I'm adding the `@torch.compile()` decorator (assuming the CPU backend) to [this line](https://github.com/google/ge…
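For context, the pattern being described is decorating a callable with `torch.compile()`. A minimal sketch of that usage (the `forward` function here is a hypothetical stand-in, not the Gemma model; `backend="eager"` is chosen only so the example runs on CPU without a compiler toolchain):

```python
import torch

# Hypothetical stand-in for the model's forward pass; the real model
# lives in google/gemma_pytorch.
@torch.compile(backend="eager")  # "eager" backend: no C++ toolchain needed on CPU
def forward(x: torch.Tensor) -> torch.Tensor:
    return torch.relu(x) + 1.0

out = forward(torch.tensor([-1.0, 2.0]))  # → values [1.0, 3.0]
```

On a real model one would typically decorate (or wrap) the module's forward method the same way, or call `torch.compile(model)` on the instantiated module.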
-
Hello, I ran into a problem while reproducing this locally and have a few questions. I deployed gemma-2b locally; when running the runEOH.py code under local_problem, why does it still ask for api_endpoint and api_key? After setting llm_use_local to True and supplying the local url, it still fails to run. Please advise. Looking forward to your reply.
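A failure like the one described often means the client code still validates `api_endpoint`/`api_key` even when local mode is requested. A hedged sketch of the kind of fallback logic involved (the flag names `llm_use_local` and the local url come from the question; the helper itself is hypothetical, not code from that repo):

```python
def resolve_endpoint(llm_use_local: bool, local_url: str,
                     api_endpoint: str = "", api_key: str = "") -> tuple[str, str]:
    """Pick the chat endpoint; only require an API key for remote use."""
    if llm_use_local:
        # Local OpenAI-compatible servers usually ignore the key, but many
        # client libraries insist on a non-empty one, so use a placeholder.
        return local_url, api_key or "EMPTY"
    if not api_endpoint or not api_key:
        raise ValueError("api_endpoint and api_key are required for remote LLMs")
    return api_endpoint, api_key

url, key = resolve_endpoint(True, "http://127.0.0.1:8000/v1")
```

If the project's own code raises before reaching such a branch, passing a dummy key is often enough to get past the check when running locally.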
-
I was trying to run Gemma from Google and specified:
mlx-community/quantized-gemma-2b-it | Gemma 2b quantized
in the models .txt file, it downloads it but returns an error w…
alew3 updated
6 months ago
-
I have built the Gemma executable to run on Android arm64-v8a with the option below.
> cmake -DCMAKE_TOOLCHAIN_FILE=/usr/lib/android-sdk/ndk/25.1.8937393/build/cmake/android.toolchain.cmake .
And it runs…
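For reference, a typical arm64-v8a cross-build with the NDK toolchain file mentioned above might look like this (the ABI and platform flags are assumptions; adjust the NDK path and API level to your setup):

```shell
# Configure with the Android NDK CMake toolchain (NDK r25 path from the report)
cmake -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=/usr/lib/android-sdk/ndk/25.1.8937393/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28
cmake --build build-android -j

# Push the binary to the device and run it there
adb push build-android/gemma /data/local/tmp/
adb shell /data/local/tmp/gemma --prompt="hello"
```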
-
# below code
### my laptop specs: Mac M1 Max, 64GB RAM; macOS 14.5
#### The code below is a test script that loads the model and runs generation, as a check before fine-tuning.
from mlx_lm import generate, load
import…
-
```shell
./gemma --prompt="How to write a good article"
avx: false, neon: true, simd128: false, f16c: false
temp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64
retrieved the files in 9.415834ms
Ru…
-
Hi,
Could you confirm which commit of [google/grain](https://github.com/google/grain) to use when converting the Gemma weights?
It returns an error when using latest commit of both `maxtext` and…
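When a conversion script breaks against library HEAD, the usual workaround is to pin the dependency to a known-good revision. A hedged sketch of that recipe (the actual commit SHA is not given in this thread, so a placeholder is used):

```shell
# Pin google/grain to a specific commit before running the weight conversion.
git clone https://github.com/google/grain
cd grain
git checkout <commit-sha>   # replace with the commit matching your maxtext checkout
pip install -e .
```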
-
I am running a program to summarize a couple of texts, one part at a time.
(In fact, I am parsing a C++ source file that has many functions, [function by function](https://github.com/zeerd/gemma.qt/blob/main/…
zeerd updated
4 months ago
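Summarizing a long source file function by function keeps each model request inside the context window. A rough sketch of the chunking half of that loop (the regex splitter and the chunk list are illustrative, not the code from gemma.qt; robust parsing would use libclang):

```python
import re

def split_functions(source: str) -> list[str]:
    """Very rough C++ function splitter: cut before lines that look like a
    definition header `ret-type name(args) {`. Real parsing needs libclang."""
    pattern = re.compile(r"^\w[\w:<>*& ]*\s+\w+\s*\([^;]*\)\s*\{", re.M)
    starts = [m.start() for m in pattern.finditer(source)]
    if not starts:
        return [source]
    starts.append(len(source))
    return [source[a:b].strip() for a, b in zip(starts, starts[1:])]

code = """int add(int a, int b) {
  return a + b;
}
int sub(int a, int b) {
  return a - b;
}"""
chunks = split_functions(code)
# Each chunk would then be sent to the model as its own summarization prompt.
```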
-
Hello, after the app stopped responding I closed it, deleted the Gemma 2b model, and downloaded it again; when I start chatting I get this error:
MLCChat failed
Stack trace:
org.apache.tvm.Base$TVMError: InternalError: Check failed: (ch…
-
### Feature request
Currently `ConversationalPipeline` expects a `conversation` as input (a list of custom objects).
Sometimes, you may want to quickly feed in a piece of text as a simple `str` …
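One way such a request is often met is a small normalization step at the top of the pipeline's preprocessing, wrapping a bare string into a conversation object. A hedged sketch with a stand-in `Conversation` class (the real one lives in `transformers`; this is not that library's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Minimal stand-in for transformers' Conversation object."""
    messages: list = field(default_factory=list)

    def add_user_input(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

def normalize_input(inputs):
    """Accept a bare str, a Conversation, or a list of either,
    always returning a list of Conversation objects."""
    if isinstance(inputs, (str, Conversation)):
        inputs = [inputs]
    out = []
    for item in inputs:
        if isinstance(item, str):
            conv = Conversation()
            conv.add_user_input(item)
            item = conv
        out.append(item)
    return out

convs = normalize_input("Hi, how are you?")
```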