-
Hi, I am working with output guardrails only, but I ran into an issue: execution enters the flow but returns nothing.
Here is my config code
```
rails:
  input:
    flows:
      - main
  output:
    flow…
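```

For context, a complete NeMo Guardrails `config.yml` of this shape might look like the sketch below. The flow names used here are the library's built-in self-check rails and are only illustrative; `main` in the snippet above is presumably a custom flow:

```yaml
rails:
  input:
    flows:
      - self check input    # checks the user message before it reaches the LLM
  output:
    flows:
      - self check output   # checks the LLM reply before it is returned
```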
-
I noticed that when I type something, an auto-suggest appears with examples.
I think I need to press Enter to get the LLM response.
But it could easily be confused that the au…
-
### Project Title
An LLM app with a deeper understanding of a [GitHub repo](https://github.com/staru09/Github_analyser)
### Motivation
It becomes challenging to review PRs and solve issues fo…
-
I run phi3 with `ollama run phi3:3.8b`, and I test it:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "phi3:3.8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

I get a respons…
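For anyone scripting this, the same non-streaming request can be sent from Python with just the standard library. The helper names `build_payload` and `generate` are illustrative, but the URL and JSON fields mirror the curl call above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model, prompt, stream=False):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

def generate(model, prompt):
    """POST a non-streaming generate request and return the 'response' text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the model pulled and the server running, `generate("phi3:3.8b", "Why is the sky blue?")` should return the same text as the curl call.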
-
```
llm_cfg = {
    # Use the model service provided by DashScope:
    'model': 'qwen-vl-max-0809',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    # It will use the `DASHSCOPE_API_KEY` environment…
-
Work with person #4 to implement memory; RAG may be useful here.
These are the things the bot should remember:
- Things that each user in the server has said
- The bot's opinion of each user in t…
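A minimal in-memory sketch of the store described above (illustrative names only; a RAG index over the same records could later replace the plain lists):

```python
from collections import defaultdict

class UserMemory:
    """Toy per-user memory: remembers what each user has said and
    a free-form 'opinion' string the bot holds about them."""

    def __init__(self):
        self.utterances = defaultdict(list)  # user -> list of things they said
        self.opinions = {}                   # user -> bot's opinion of them

    def record(self, user, text):
        self.utterances[user].append(text)

    def set_opinion(self, user, opinion):
        self.opinions[user] = opinion

    def recall(self, user):
        """Everything the bot remembers about one user."""
        return {"said": self.utterances[user],
                "opinion": self.opinions.get(user)}
```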
-
It would be interesting to provide a research topic and then have two or more bots talk to each other; the user provides a structure for the conversation, and we see the output.
This would be an interesting way to…
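The turn-taking loop for such a setup can be sketched independently of any particular LLM; each bot here is just a callable, and all names are illustrative:

```python
def converse(bot_a, bot_b, topic, turns=4):
    """Alternate two bot callables for a fixed number of turns.
    Each bot is a function (history, last_message) -> reply."""
    history = [f"Topic: {topic}"]
    message = history[0]
    bots = [bot_a, bot_b]
    for i in range(turns):
        message = bots[i % 2](history, message)
        history.append(message)
    return history
```

Swapping a stub callable for one that wraps a real model call turns this into the bot-vs-bot conversation described above, with `history` acting as the shared transcript.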
-
### Bug Description
We are using llama-index to power a chatbot. We have observed that sometimes GPT-4o (and GPT-4 and GPT-4-turbo) responds with both a tool call and a response, like:
> This is a…
psyho updated 2 weeks ago
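A simple way to detect the situation described in the report, assuming the message follows the OpenAI chat-completions dict shape (the helper name is illustrative):

```python
def has_mixed_reply(message):
    """True when an assistant message carries both non-empty text content
    and at least one tool call, i.e. the mixed reply described in the bug."""
    content = (message.get("content") or "").strip()
    return bool(content) and bool(message.get("tool_calls"))
```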
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
```
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    memory=…
```
-
```
2024-08-01 16:13:27.848 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\KraaiduToit\AppData\Local\Programs\Python\Python312\Lib\site-packages\streamlit\runtime\script…