-
Hi,
Where is Phi-3-small?
https://huggingface.co/microsoft/Phi-3-small-128k-instruct
Small is much better than mini.
And where is Phi-3-vision?
https://huggingface.co/microsoft/Phi-3-vision-128k-…
-
```py
from unsloth import FastLanguageModel
from unsloth import is_bfloat16_supported
import torch
from unsloth.chat_templates import get_chat_template
from trl import SFTTrainer
from transform…
-
I am interested in running the `mlx-community/Phi-3-mini-128k-instruct-4bit` model with Swift, but it cannot be loaded. Here is the output I am seeing:
```
➜ mlx-swift-examples git:(main) ./mlx-run…
-
### Your current environment
I'm not able to run `collect_env.py` on this workstation
vllm == 0.5.1
vllm-flash-attn == 2.5.9
torch == 2.3.0
Tested on a single A100-80GB
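Since `collect_env.py` can't run on this workstation, here is a minimal stdlib sketch (the helper name is my own) that reports the pinned versions above without crashing on packages that aren't installed:

```python
from importlib import metadata

def pkg_version(name):
    """Return the installed version of a package, or a placeholder."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

for pkg in ("vllm", "vllm-flash-attn", "torch"):
    print(f"{pkg} == {pkg_version(pkg)}")
```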
The following mes…
-
**Describe the bug**
I've tried to copy the HelloPhi example for loading and calling phi-3.5 in DirectML for my application. When I get to the [generator.ComputeLogits()](https://github.com/microsoft…
-
#### Description
I encountered crashes in my application when attempting to load the `gemma-2b-it.gguf` and `Phi-3-mini-4k-instruct-q4.gguf` models. Below are the error messages and details for eac…
-
I used the custom node "[ComfyUI-Phi-3-mini](https://github.com/ZHO-ZHO-ZHO/ComfyUI-Phi-3-mini)" and installed flash_attention as the backend prompt suggested, but I still get: "You are not running the flash-attention i…"
-
I can't see the downloaded Phi-3 model in the "Choose a Model" drop-down options, even after restarting Moxin.
I am on a MacBook.
> $ du -hs second-state/Phi-3-mini-4k-instruct-GGUF/*
2.5G second-state…
-
Looking at how efficient Phi-3-mini is for its size, one might argue that Phi-3-medium's function calling could land somewhere between llama-3-8B's and llama-3-70B's with your fine-tune?
-
**Describe the bug**
Currently, I use onnxgenai==0.4.0 to convert phi_3_5_mini_instruct (fp16, CUDA) and run inference with onnxgenai on an A100 80G.
I observed that for some input lengths around 3000 (800…