-
When I'm running phi-2 on device, there is an issue while generating responses. When I feed it a question, it starts generating a response (although quite slowly), but it just doesn't stop.
In ChatVi…
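A common cause of runaway generation is the decode loop never hitting a stop condition. A minimal sketch of the two usual guards, an EOS-token check and a max-new-tokens cap, using a toy stand-in sampler rather than phi-2 itself (the token ids and `sample_next` are invented for illustration):

```python
# Sketch of the two standard stopping criteria in a decode loop.
# `sample_next` is a toy stand-in for the real model; token id 0 plays
# the role of the EOS token here -- neither is phi-2's actual API.

EOS_TOKEN_ID = 0
MAX_NEW_TOKENS = 8

def sample_next(tokens):
    # Toy "model": counts down so it eventually emits EOS (id 0).
    return max(0, 5 - len(tokens))

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    for _ in range(MAX_NEW_TOKENS):      # guard 1: hard length cap
        nxt = sample_next(tokens)
        if nxt == EOS_TOKEN_ID:          # guard 2: stop-token check
            break
        tokens.append(nxt)
    return tokens

print(generate([7, 7]))  # -> [7, 7, 3, 2, 1]
```

With phi-2 under Hugging Face transformers, the equivalent knobs are the `eos_token_id` and `max_new_tokens` arguments to `model.generate(...)`; if the chat template's stop token is not registered as `eos_token_id`, generation will run to the length cap.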
-
**Describe the bug**
Cosmetic issue.
Running the code prints to stdout:
```
===== Compressing layer 23/40 =====
2024-08-15T15:22:59.526464+0000 | compress_module | INFO - Compressing model.layers.22.…
-
### What is the issue?
Installed as the website instructs:
```
ollama run llama3.2
```
It can't even help with basic installation steps or say where local files are located:
```
$ ollama run llama3.2
…
-
Hi - I am working on a chatbot to answer questions from documents using the RAG method. I have used the DSPy framework for prompt tuning. I have experimented with DSPy for our use case and comput…
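For readers unfamiliar with the RAG flow itself, a hedged sketch independent of DSPy: retrieve the passage that best matches the question, then assemble the prompt around it. The passages and the word-overlap scorer are invented stand-ins for a real vector store; in DSPy the prompting step would be wrapped in a Signature/module.

```python
# Minimal RAG sketch: keyword-overlap retrieval + prompt assembly.
# PASSAGES and the overlap scorer are illustrative placeholders for a
# real document index and embedding-based retriever.

PASSAGES = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping is free for orders over 50 dollars.",
]

def retrieve(question, passages):
    # Score each passage by word overlap with the question.
    q = set(question.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

def build_prompt(question, passage):
    # Ground the model's answer in the retrieved context.
    return f"Context: {passage}\nQuestion: {question}\nAnswer:"

q = "What is the refund policy?"
prompt = build_prompt(q, retrieve(q, PASSAGES))
print(prompt.splitlines()[0])
```

The same two stages (retrieve, then generate against the retrieved context) are what a DSPy program optimizes when it tunes the prompt.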
-
This is the error I'm seeing; please help:
Last login: Wed Jul 26 13:13:02 on ttys002
rileylovett@Rileys-MacBook-Air ~ % /Users/rileylovett/.venv_new/bin/mentat /Users/rileylovett/Discord2
Files in…
-
When I use the example in multimodal, I downloaded the original model liuhaotian/llava-v1.5-7b, but some errors occur:
```
llama = from_hugging_face(
  File "/usr/local/lib/python3.10/dist-packages/tensor…
```
-
# Trending repositories for C#
1. [**Navi-Studio / Virtual-Human-for-Chatting**](https://github.com/Navi-Studio/Virtual-Human-for-Chatting)
__Live2D Virtual Human for Chatting bas…
-
**Before submitting an issue, please confirm:**
- [x] I have read the **FAQ**, and this problem is not listed there
- [x] I have checked other issues, and they do not solve my problem
- [x] I believe this is not a bug in Mirai or OpenAI
**Behavior**
2023-11-19 02:48:26.768 | ERROR | framework.universal:handle_messa…
-
### What happened?
I created a function-calling multi-agent framework. I am using llama-server.exe as an inference server with Nous Research's [Theta Q4, Q5, and Q6](https://huggingface.co/…
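For context, llama-server exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so a function-calling request is just a JSON payload in the OpenAI `tools` schema. A hedged sketch that only builds the payload; the tool name `get_weather`, its parameters, and the model name are placeholders, not part of the framework described above:

```python
import json

def build_tool_call_request(user_message, model="theta-q4"):
    # OpenAI-style chat-completions payload declaring one tool.
    # "get_weather" and its schema are illustrative placeholders.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_tool_call_request("What's the weather in Oslo?")
print(json.dumps(payload)[:30])
```

POSTing this payload to `http://localhost:8080/v1/chat/completions` (llama-server's default port) should, when the model decides to call the function, return an assistant message containing a `tool_calls` entry with the function name and JSON arguments.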
-
This indeed greatly improves prompting, although one question may not be very representative of the whole approach. To measure suggested solutions properly, shall we create a test dataset of question…
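A test set plus a scoring loop can start very small. A sketch of exact-match accuracy over a handful of question-answer pairs; the dataset entries and the `answer` stub (which would be replaced by the actual prompted model) are invented for illustration:

```python
# Exact-match evaluation over a tiny QA set. `answer` is a stub standing
# in for the real prompted model; the dataset entries are made up.

DATASET = [
    {"question": "2 + 2?", "expected": "4"},
    {"question": "Capital of France?", "expected": "Paris"},
]

def answer(question):
    # Stub model: deliberately gets the second question wrong.
    return {"2 + 2?": "4", "Capital of France?": "Lyon"}.get(question, "")

def exact_match_accuracy(dataset, model):
    # Fraction of questions whose answer matches exactly (after strip).
    hits = sum(model(ex["question"]).strip() == ex["expected"]
               for ex in dataset)
    return hits / len(dataset)

print(exact_match_accuracy(DATASET, answer))  # -> 0.5
```

Exact match is the crudest metric; once the dataset exists, the same loop can score fuzzier judges (substring match, embedding similarity, an LLM grader) without changing the harness.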