-
Hello,
I failed to run Llama-2-7b-chat-hf on the NPU; could you give me a hand?
1. I converted the model with the command below and got two models:
a) optimum-cli export openvino --task text-generation -m Meta-…
-
Following the local fine-tuning README,
I ran `python gradio_chat.py --baseonly`
and got:
```
(phi-3-env) hayden@XPS15:/mnt/d/phi-3-env/inference$ python gradio_chat.py --baseonly
Number of GPUs availa…
-
**Describe the bug**
I followed your instructions to install iLab, but it doesn't start the chat.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to installation directory
2. Type 'ilab …
-
### Issue
(myenv) dmtarmey …/aider-chat main !?⇡ v22.8.0 13:49 pytest
=========================== test session starts ===========================
platform linux -- Python 3.12.…
-
In the web interface, the files are added fine, without errors.
But then, when I try to chat, it throws an error:
![image](https://github.com/fcori47/basdonax-ai-rag/assets/1769919/2084e6fc-92e8-4b3e-9ad0-fe17…
-
Hello,
- When running `main_chat.py` with bf16 precision I get an OOM error. I am using a 24 GB GPU. Is this expected? I can't find any info about the minimum GPU requirement.
- If I enable fp16 preci…
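As a rough sanity check on whether 24 GB can be enough, one can estimate the memory taken by the weights alone. This is a sketch; the 7B parameter count is an assumption for illustration, since the report does not name the model:

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate memory needed just for the model weights, in GiB."""
    return n_params * bytes_per_param / 1024**3

# Assumed 7B-parameter model (hypothetical; not stated in the report).
params = 7e9
bf16 = weight_memory_gib(params, 2)  # bf16/fp16: 2 bytes per parameter
fp32 = weight_memory_gib(params, 4)  # fp32: 4 bytes per parameter
print(f"bf16 weights ~ {bf16:.1f} GiB, fp32 weights ~ {fp32:.1f} GiB")
```

Under this assumption, bf16 weights (~13 GiB) fit in 24 GB, but fp32 (~26 GiB) do not; an OOM could therefore indicate that the weights are momentarily materialized in fp32 before casting, or that activations/KV cache push the peak past 24 GB.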
-
### Describe the issue
According to [Local-LLMs/](https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs/), AutoGen can support multiple local LLMs.
My commands for FastChat:
First,…
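For reference, the configuration AutoGen expects when pointing at a local OpenAI-compatible server (such as FastChat's API server) is a plain list of dicts. The port, model name, and endpoint here are assumptions for illustration, not taken from this report, and older AutoGen releases used the key `api_base` instead of `base_url`:

```python
# Hypothetical config for an AutoGen agent backed by a local FastChat server.
# Model name and endpoint are assumptions; adjust to your deployment.
config_list = [
    {
        "model": "chatglm2-6b",                  # assumed local model name
        "base_url": "http://localhost:8000/v1",  # FastChat's OpenAI-compatible endpoint
        "api_key": "NULL",                       # local servers typically ignore the key
    }
]
print(config_list[0]["base_url"])
```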
-
### System Info
```bash
cuda 12.3
python 3.11
torch==2.3.1
vllm==0.5.3.post1
vllm-flash-attn==2.5.9.post1
transformers==4.44.1
```
### Who can help?
_No response_…
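A small stdlib-only sketch for collecting the same version information programmatically (distribution names are taken from the report above; anything not installed is reported as None):

```python
from importlib.metadata import version, PackageNotFoundError

def get_versions(packages):
    """Return {package: installed version or None} for each distribution name."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out

# Distribution names from the report above.
info = get_versions(["torch", "vllm", "vllm-flash-attn", "transformers"])
for name, ver in info.items():
    print(f"{name}=={ver}" if ver else f"{name}: not installed")
```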
-
I've run benchmark_genai.py for CPU, GPU, and NPU on an MTL U9; here are the logs:
(env_ov_genai) c:\AIGC\openvino\openvino.genai\samples\python\benchmark_genai>python benchmark_genai.py -m c:\AIGC\openv…
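The metric such benchmarks report is usually throughput in tokens per second; a minimal sketch of that computation (the numbers below are placeholders for illustration, not taken from these logs):

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput: generated tokens divided by wall-clock generation time."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s

# Placeholder numbers for illustration only (not from the MTL logs above).
print(f"{tokens_per_second(128, 6.4):.1f} tok/s")  # → 20.0 tok/s
```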
-
```
Traceback (most recent call last):
File "/Users/admin/dev/multimodal-chat/multimodal_chat.py", line 2576, in
main(args)
File "/Users/admin/dev/multimodal-chat/multimodal_chat.py", lin…