-
Hi! We are getting results for phi2, and I see from the comments in the run.sh file that the code is tested for phi2.
Are other models (qwen2-7B and llama3-8B) tested as well? Because we are getting err…
-
Hi team,
Any hope of providing support for phi2 in the near future?
-
I only modified t6 instead of t4; t4 and t5 both work well for this model, but if we set thread=6 it always triggers the problem on my XIAOMI 14 Pro (SM8650, Snapdragon 8 Gen 3).
Please check and resolve.
Thanks!
…
-
Hey, is there a simple way to get the Phi2 algo added to this?
Not sure if this hashing module helps with that process: https://github.com/kimkkikki/phi2_hashing_module
Thanks!
-
I ran Qwen1.5-1.8B-Chat and Qwen-1_8B-Chat separately, and both reported a similar problem.
Taking Qwen1.5-1.8B-Chat as an example:
Using transformers==4.31.0:
```
Traceback (most recent call last):
File "/data_sdb/demos/mnn-llm/models/llm-export/llm_export.py"…
-
### Repository commit
2e405f397bbcefccc470f215c7ff024875ef16c5
### Python version (python --version)
NA
### Dependencies version (pip freeze)
NA
### Expected behavior
First of all, thanks for all th…
-
╰─(phi2-mps) ○ phi2-mps --model "/Users/shimura/Documents/weights.npz" --prompt "Hi how are you" --max_tokens 256
Traceback (most recent call last):
File "/Users/shimura/miniforge3/envs/phi2-mps/b…
-
Running the default example doesn't work:
```text
Namespace(verbose=True, batch_size_for_cuda_graph=1, chat_template='', model='.\\example-models\\phi2-int4-directml')
Loading model...
Model loa…
-
I want to convert the phi-2 model to MediaPipe format following this guide:
https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference#convert-model
But when I run it I encounter this exception: …
-
I'm getting the following error:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|████████…