-
Hello,
When chatting with the bot, I often encounter this error:
File "C:\Repos\AI_project\Demo\demo_2024_05_02\RAG_GPT_OpenAI\src\utils\chatbot.py", line 60, in respond
retrieved_content…
-
I am trying to add a service from a provider that has an OpenAI-compatible API. Here is my code snippet:
```
kernelBuilder.AddOpenAIChatCompletion(
serviceId: "llama-3-8b",
modelId:…
-
### Your current environment
The output of `python collect_env.py` on ROCm
Collecting environment information...
WARNING 09-11 03:28:33 rocm.py:17] `fork` method is not supported by ROCm. VLLM_WO…
-
- [ ] [LLaVA/README.md at main · haotian-liu/LLaVA](https://github.com/haotian-liu/LLaVA/blob/main/README.md?plain=1)
# LLaVA/README.md at main · haotian-liu/LLaVA
## 🌋 LLaVA: Large Language and Vi…
-
### Is your feature request related to a problem?
If we have to implement functionality like https://chat.openai.com/, where the answer is rendered a word (or a few words) at a time instead of getting the com…
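Word-at-a-time rendering is usually built on a streaming completion API: the server emits incremental chunks and the UI appends each one as it arrives rather than waiting for the full answer. A minimal Python sketch of the consumer side, with a stub generator standing in for the real streaming API (all names here are illustrative, not from the request):

```python
def fake_stream(text, chunk_words=1):
    """Stand-in for a streaming completion API: yields a few words at a time."""
    words = text.split()
    for i in range(0, len(words), chunk_words):
        yield " ".join(words[i:i + chunk_words]) + " "

def render(stream):
    """Append each chunk as it arrives instead of waiting for the whole answer."""
    parts = []
    for chunk in stream:
        parts.append(chunk)  # a real UI would flush this chunk to the screen here
    return "".join(parts)

answer = render(fake_stream("streaming shows the answer incrementally", chunk_words=2))
```

The same loop shape applies whether the chunks come from server-sent events, a WebSocket, or a client library's streaming iterator.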
-
Can't get it working, unfortunately.
I installed Node and everything, using the latest Chrome on Windows 10.
next-gpt3-chatbot@0.1.0 C:\webdev\talk-with-gpt
+-- @chakra-ui/react@2.5.3
+-- @emotion/…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Hello, I'm working on a text-to-SQL solution with LlamaIndex, llama.cpp, and DuckDB llm. …
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…
-
### Your current environment
vLLM version: v0.6.0 (CPU)
CPU: AMD EPYC 9654
### 🐛 Describe the bug
The vLLM v0.6.0 (CPU) server failed to start when setting VLLM_CPU_OMP_THREADS_BIND, as shown below:
…
-
- [ ] [RichardAragon/MultiAgentLLM](https://github.com/richardaragon/multiagentllm)
# RichardAragon/MultiAgentLLM
**DESCRIPTION:** "Multi Agent Language Learning Machine (Multi Agent LLM)
(Update)…