-
I am using LoneStriker/Phi-3-medium-4k-instruct-8.0bpw-h8-exl2 but the generation is gibberish.
https://huggingface.co/LoneStriker/Phi-3-medium-4k-instruct-8.0bpw-h8-exl2
When I tried the same pr…
-
Greetings. Thanks for your excellent work.
The paper mentions that you adopt the Phi-3 architecture as the OmniGen backend and use Phi-3 pretrained weights for initialization. I was not familiar with Phi-3…
-
Great sample, thanks.
I found that the AI answers the question and then starts rambling about unrelated topics.
After checking the doc on Phi-2 here https://onnxruntime.ai/docs/genai/reference/config.ht…
-
To improve sharing rates, we should be able to share links that set:
* system prompts
* model name
* maybe user prompt?
This will require:
* Setting the settings from URL query parameters
* A U…
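A minimal sketch of the first requirement, assuming a browser environment with `URLSearchParams`; the parameter names (`system`, `model`, `prompt`) are hypothetical placeholders, not the app's actual query keys:

```typescript
// Read playground settings from URL query parameters, e.g.
//   https://example.com/playground?model=phi-3-mini&system=You%20are%20helpful
// The keys below ("system", "model", "prompt") are illustrative only.
interface ShareSettings {
  systemPrompt?: string;
  modelName?: string;
  userPrompt?: string;
}

function settingsFromQuery(search: string): ShareSettings {
  const params = new URLSearchParams(search);
  return {
    systemPrompt: params.get("system") ?? undefined,
    modelName: params.get("model") ?? undefined,
    userPrompt: params.get("prompt") ?? undefined,
  };
}

// Build a shareable link from the current settings, URL-encoding values.
function shareLink(base: string, s: ShareSettings): string {
  const params = new URLSearchParams();
  if (s.systemPrompt) params.set("system", s.systemPrompt);
  if (s.modelName) params.set("model", s.modelName);
  if (s.userPrompt) params.set("prompt", s.userPrompt);
  return `${base}?${params.toString()}`;
}
```

On page load, `settingsFromQuery(window.location.search)` would pre-fill the settings panel, and `shareLink` would generate the link for a "Share" button.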
-
The llama.cpp integration in the playbook does not work. I manually created the GGUF file anyway, but when I try to serve the model using the llama.cpp server I get the following error…
-
If I run `ramalama --nocontainer run 'huggingface://bartowski/Phi-3.5-mini-instruct-GGUF/Phi-3.5-mini-instruct-Q6_K_L.gguf'`
I get the error
```
--nocontainer and --name options conflict. --name…
-
## 🐛 Bug
After downloading the model, tapping the chat button crashes the app, while the floating window still shows "Initialize...".
Crash report:
App name: MLCChat
App version: 1.0
Time of occurrence: 2024-08-19 14:22:15
Trace:
org.apache.tvm.Base$TVMError: ValueError: Check failed: (f != nullptr) is fa…
-
### System Info
```Shell
- `Accelerate` version: 0.34.2
- Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.31
- `accelerate` bash location: /home/gradevski/miniconda3/envs/summary_explainer_p…
-
Running the file on Windows gives me numerous
```
$ llava-v1.5-7b-q4.exe
note: if you have an AMD or NVIDIA GPU then you need to pass -ngl 9999 to enable GPU offloading
note: if you have an AMD or NV…
-
Add Phi-3.5-mini-instruct.