-
How to start the server (Ubuntu 22.04):
```bash
python s2s_pipeline.py \
--recv_host 0.0.0.0 \
--send_host 0.0.0.0 \
--lm_model_name microsoft/Phi-3-mini-4k-instruct \
--init_chat_role system…
```
-
### Issue: Implementing Iterative DPO on Phi3-4k-instruct
Hi, thanks for the great work and open source!
I am trying to implement iterative DPO on `Phi3-4k-instruct`. The following outlines my…
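Since the snippet is cut off, here is a minimal sketch of the core objective an iterative DPO loop optimizes for each preference pair. This is an illustration, not the issue author's code: the per-sequence log-probabilities are assumed to already be computed under the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a full response
    under either the current policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), written stably as log1p(exp(-logits))
    return math.log1p(math.exp(-logits))
```

In the iterative variant, after each DPO round the trained policy generates fresh responses, a reward model (or other judge) ranks them into new preference pairs, and the loop repeats with the updated policy as the new starting point.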
-
## Describe the bug
With `Mistral-7B-Instruct-v0.3-Q4_K_M.gguf` from https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.3-GGUF I'm seeing this behavior:
```
$ mistralrs-server -i gguf -m .…
```
p-e-w updated 2 months ago
-
I'm fiddling with transformers_js_py and getting a:
Unknown typed array type 'BigInt64Array'. This is a problem with Pyodide, please open an issue about it here: https://github.com/pyodide/pyodide/…
-
With limited memory on most phones, there are community requests to support a model with a smaller size, like Phi-3 mini. It may be supported out of the box, but needs verification, evaluation and pr…
-
I am running the `Phi-3-mini-4k-instruct-onnx` model on desktop CPU, and one behavior I have noticed is that, after the back and forth conversation is longer than half of the context window length (in…
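Behavior changes past half the context window often trace back to how older turns are evicted to make room. As a hedged illustration (not the ONNX runtime's actual logic), a typical history-trimming scheme looks like this; `count_tokens` here is a whitespace stand-in for the real tokenizer:

```python
def trim_history(messages, max_tokens, count_tokens=lambda s: len(s.split())):
    """Drop the oldest turns until the conversation fits in max_tokens.

    The leading system message, if present, is always kept;
    `count_tokens` is a whitespace stand-in for a real tokenizer.
    """
    system = messages[:1] if messages and messages[0]["role"] == "system" else []
    turns = messages[len(system):]
    while turns and sum(count_tokens(m["content"]) for m in system + turns) > max_tokens:
        turns.pop(0)  # discard the oldest turn first
    return system + turns
```

Once this kind of truncation kicks in, the model loses the earliest turns entirely, which can read as a sudden change in conversational behavior.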
-
When I run inference in Alpaca format several times, it starts to look like this with Phi-3 mini:
![image](https://github.com/unslothai/unsloth/assets/39015765/b4a49bbf-def3-4c98-aa0d-391dfd027fc…
-
After https://github.com/ml-explore/mlx-swift-examples/commit/ab94ffc2f31a70ead3c7007afaf97a225ed3ec90, I'm getting a crash the second time I try to generate text with my app, which uses mlx-libraries…
-
Please allow using the `microsoft/Phi-3-mini-128k-instruct` model in the [candle-phi](https://github.com/huggingface/candle/tree/main/candle-examples/examples/phi) example, which uses the LongRope scalin…
-
Hi,
I have tried adding phi3-3.8b as an Ollama model, hosted on my own on-prem Ollama server.
I have basically copied the prompt template and parameters from microsoft/Phi-3-mini-4k-instruct used in h…
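For reference, the Phi-3-instruct prompt format wraps each turn in role tags terminated by `<|end|>` and leaves an open assistant tag for the model to complete. A minimal sketch of a prompt builder in that format (assuming the standard `<|role|>` / `<|end|>` tags from the model card):

```python
def build_phi3_prompt(messages):
    """Render chat messages in the Phi-3-instruct prompt format:
    one '<|role|>\\n{content}<|end|>\\n' segment per turn, followed by
    an open '<|assistant|>' tag for the model to continue from.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    parts.append("<|assistant|>\n")
    return "".join(parts)
```

If the template copied into Ollama deviates from this (a missing `<|end|>` stop token is a common culprit), generations can run past the turn boundary or degrade.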