-
Cannot use the demo code for inference.
transformers==4.46.2
torch==2.4.0
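For reference, a minimal inference sketch under the pinned versions; the checkpoint name is an assumption, since the report does not say which demo was used:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the report does not show which demo model was used.
model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Basic greedy generation; transformers 4.46.2 supports Phi-3 natively.
inputs = tokenizer("Hello, Phi-3!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```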
-
![1729948513287](https://github.com/user-attachments/assets/9e6e4021-00a6-4ff5-a5f3-44cf2294495f)
-
Can I replace the Phi-3 LLM? For example, could I use Qwen2 or Llama 3.2 instead?
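If the surrounding code loads the LLM through the transformers Auto classes, swapping checkpoints is often just a model-ID change; a sketch, assuming nothing downstream hard-codes Phi-3 behavior (chat template, special tokens):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical swap targets; either ID replaces the Phi-3 checkpoint as long
# as the rest of the pipeline uses the tokenizer's own chat template rather
# than Phi-3-specific prompt formatting.
model_id = "Qwen/Qwen2-7B-Instruct"  # or "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```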
-
## 🚀 Model / language coverage
Support the https://huggingface.co/microsoft/Phi-3-mini-128k-instruct model. This is a tracking issue.
Dynamo is currently splitting the model into 13 subgraphs. The good news is…
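A sketch for reproducing the subgraph count with `torch._dynamo.explain`; the model ID comes from the issue, while the prompt is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inspect how many subgraphs Dynamo produces for the model's forward pass
# and why each graph break occurs.
model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt")
explanation = torch._dynamo.explain(model)(**inputs)
print(explanation.graph_count)    # e.g. 13 subgraphs, per the issue
print(explanation.break_reasons)  # one entry per graph break
```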
-
**Describe the bug**
After some recent changes, the phi3 sample with the inference flag stopped working.
**Olive logs**
```
python phi3.py --target cpu --precision int4 --inference --prompt "Write a stor…
```
-
**The bug**
Received the following error when using `gen()` with Phi-3.5-mini:
```Error
RuntimeError: Bad response to Guidance request
Request: https://model_name.region.models.ai.azure.com/guidance …
```
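For context, a minimal sketch of the `gen()` call shape in the guidance library; it swaps the Azure endpoint from the error for a local `models.Transformers` backend, so the backend choice and prompt are assumptions:
```python
from guidance import gen, models

# Sketch: same gen() usage pattern as in the report, but against a local
# backend instead of the Azure AI guidance endpoint from the error.
lm = models.Transformers("microsoft/Phi-3.5-mini-instruct")
lm += "Write a one-line greeting: " + gen(name="greeting", max_tokens=16)
print(lm["greeting"])
```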
-
I happened to have Ollama running while setting up smartcat, but it does not have the phi3 model installed. Here is the output from running `sc` for the first time:
```
❯ sc
Prompt config file not foun…
```
-
I used the command `tune run generate --config custom_quantization.yaml prompt='Explain some topic'` to run inference on a fine-tuned Phi-3 model through torchtune.
Config custom_quantization.y…
-
**Describe the bug**
The memory consumption keeps growing when using the library.
**To Reproduce**
Steps to reproduce the behavior:
1. Build the phi3 command line application from the examples/c/src…
-
I installed the latest version of PyTorch and confirmed the installation (2.4.1+cu124). When I ran image generation, I got the following message:
:\OmniGen\venv\lib\site-packages\transformers\models\ph…
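A quick sanity check that the OmniGen venv's interpreter actually sees the expected versions (only the package names from the report are assumed):
```python
import torch
import transformers

# The report states torch 2.4.1+cu124; the warning path points at the
# transformers phi3 model code inside the venv.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("cuda available:", torch.cuda.is_available())
```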