-
I followed all the steps to use the Swift API/package and used the gemma-2b-q4f16 model, but it gives me this error. I also checked the TVM files, and executable.cc is present in the cor…
-
https://github.com/Beomi/InfiniTransformer/blob/d3659c3c2f50038ba8e64d29139c0aa3701964dc/modeling_gemma.py#L837
I think `norm_term_broadcastable` should be multiplied by `query_states`.
![image](htt…
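To make the suggestion above concrete, here is a minimal sketch of the fix in the Infini-attention memory-retrieval step — not the repository's exact code, and the names and shapes (`query_states`, the memory matrix, the normalizer `z`) are assumptions based on the issue:

```python
import torch
import torch.nn.functional as F

def retrieve_from_memory(query_states, memory, z):
    """Hedged sketch of the suggested fix (not InfiniTransformer's exact code).

    Assumed shapes: query_states (B, H, T, d), memory (B, H, d, d),
    z (B, H, d, 1). The point of the issue: the normalizer should be
    built from query_states -- i.e. sigma(Q) @ z -- before dividing.
    """
    sigma_q = F.elu(query_states) + 1.0        # sigma(Q), as in Infini-attention
    retrieved = sigma_q @ memory               # sigma(Q) M -> (B, H, T, d)
    norm_term_broadcastable = sigma_q @ z      # sigma(Q) z -> (B, H, T, 1)
    return retrieved / (norm_term_broadcastable + 1e-8)
```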
-
## 🐛 Bug
## To Reproduce
Steps to reproduce the behavior:
1. Download the APK from https://llm.mlc.ai/docs/ (Android tab).
2. Install the APK.
3. Click the download button to download the base model L…
-
For consistency, should it be `io.destination_path` (or perhaps even better `io.out_dir`) etc. in
```bash
python scripts/prepare_alpaca.py \
--destination_path data/alpaca \
--checkpoint_…
rasbt updated 5 months ago
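The rename suggested above can be sketched with plain `argparse` — this is a standalone illustration of the `out_dir` naming, not the repository's actual argument wiring:

```python
import argparse

# Illustration of the naming suggested above: expose the destination as
# `out_dir` rather than `destination_path`. Defaults mirror the example
# invocation in the issue.
parser = argparse.ArgumentParser(description="prepare_alpaca sketch")
parser.add_argument("--out_dir", default="data/alpaca",
                    help="directory the prepared dataset is written to")
args = parser.parse_args(["--out_dir", "data/alpaca"])
print(args.out_dir)  # -> data/alpaca
```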
-
When compiling with `make -j4 gemma`, I get the following error.
```
[100%] Linking CXX executable gemma
CMakeFiles/gemma.dir/gemma.cc.o: In function `std::filesystem::__cxx11::path::path(std::__…
iSach updated 4 months ago
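For context: undefined references to `std::filesystem::__cxx11::path` symbols at link time are the classic symptom of GCC 8, where `std::filesystem` ships in a separate `libstdc++fs` archive. A hedged sketch of the usual workaround follows — whether gemma.cpp's CMakeLists actually needs this patch is an assumption; with GCC 9+ nothing extra is required:

```cmake
# Hypothetical CMakeLists.txt addition: with GNU < 9, link the separate
# filesystem archive explicitly so the gemma target resolves these symbols.
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU"
   AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9)
  target_link_libraries(gemma stdc++fs)
endif()
```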
-
-
Hello, I ran into a problem while reproducing this locally and have a few questions. I deployed gemma-2b locally, but when running the runEOH.py code for local_problem, why do I still need to provide api_endpoint and api_key? I changed llm_use_local to True and set the local URL, but it still fails to run. Please advise; I look forward to your reply.
-
# Below code
### My laptop specs: Mac M1 Max, 64 GB; macOS 14.5
#### The code below is a test that loads the model and runs generation, for fine-tuning.
```python
from mlx_lm import generate, load
import…
```
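The snippet is truncated here; for reference, a minimal load-and-generate test with `mlx_lm` might look like the sketch below. The checkpoint path `./gemma-2b-mlx` is hypothetical, and an MLX-converted Gemma checkpoint plus Apple Silicon hardware are assumed:

```python
# Hedged completion of the truncated test above (checkpoint path is
# hypothetical; mlx_lm must be installed and a converted model available).
def run_gemma_test(model_path="./gemma-2b-mlx", prompt="Hello"):
    from mlx_lm import generate, load  # imported lazily: needs Apple Silicon
    model, tokenizer = load(model_path)
    return generate(model, tokenizer, prompt=prompt, max_tokens=32)

if __name__ == "__main__":
    print(run_gemma_test())
```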
-
### System Info
```shell
Optimum main branch, commit bb21ae7f7d572805f6ecdea8e0f02dc6014d57e8
Transformers 4.38.1
OnnxRuntime 1.17.1
PyTorch 2.2.1
TensorRT 8.6.1 (nvcr.io/nvidia/tensorrt:23.10…
-
I built a Gemma executable to run on Android arm64-v8a with the option below.
> cmake -DCMAKE_TOOLCHAIN_FILE=/usr/lib/android-sdk/ndk/25.1.8937393/build/cmake/android.toolchain.cmake .
And it runs…
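The excerpt is truncated here, but for anyone following along, a cross-compiled binary is typically deployed with `adb`. The sketch below is an assumption about the next steps, not part of the original report; file names for the tokenizer and weights are placeholders, and the runtime flags should be taken from the `gemma` binary itself:

```shell
# Hypothetical deployment after the cross-compile above
# (/data/local/tmp is the usual writable location on Android).
adb push gemma /data/local/tmp/gemma
adb push tokenizer.spm /data/local/tmp/      # asset names assumed
adb shell chmod +x /data/local/tmp/gemma
adb shell "cd /data/local/tmp && ./gemma"    # pass model flags as on desktop
```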