-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I want to integrate LlamaIndex with Streamlit, with a streaming chat that uses Agents. I would …
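In recent LlamaIndex versions, an agent's `stream_chat()` returns a response whose `response_gen` yields tokens, and Streamlit's `st.write_stream` can consume such a generator. A dependency-free sketch of that consumption loop, where `fake_token_stream` is a stand-in for `response_gen` and `render_stream` mimics what `st.write_stream` does:

```python
# Dependency-free sketch: fake_token_stream stands in for the
# agent.stream_chat(prompt).response_gen generator from LlamaIndex,
# and render_stream mimics how st.write_stream consumes it in Streamlit.

def fake_token_stream():
    """Yields response chunks the way response_gen would."""
    for token in ["Hello", ", ", "world", "!"]:
        yield token

def render_stream(token_gen):
    """Accumulate chunks into the full response while 'rendering' each one."""
    rendered = []
    for chunk in token_gen:
        rendered.append(chunk)  # st.write_stream would update the UI here
    return "".join(rendered)

full_response = render_stream(fake_token_stream())
print(full_response)
```

In a real app the generator would come from the agent, and the accumulated string would be appended to `st.session_state` so the finished message persists across reruns.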
-
With the example below I get the following error:
```
Error executing tool. Co-worker mentioned not found, it must to be one of the following options:
- pilot
```
```python
from crewai import …
```
-
When attempting to use the `OpenAI` class from `llama_index.llms.openai` while running the example from this repo, on this line:
```python
agent1 = ReActAgent.from_tools([tool], llm=get_tool_llm…
```
-
### Is your feature request related to a problem? Please describe.
An Agent in AutoGen can take an `llm_config` with multiple models. Currently AutoGen tries the models one by one and uses the firs…
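The try-the-models-one-by-one behavior described above can be sketched as a simple fallback loop. This is a stdlib-only illustration, not AutoGen's implementation; `call_model`, `flaky_client`, and the model names are hypothetical stand-ins:

```python
def call_with_fallback(models, prompt, call_model):
    """Try each configured model in order; return the first successful reply."""
    errors = {}
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as exc:  # a real client would catch narrower error types
            errors[model] = exc
    raise RuntimeError(f"All models failed: {errors}")

# Hypothetical client: the first model always errors, the second succeeds.
def flaky_client(model, prompt):
    if model == "model-a":
        raise TimeoutError("model-a unavailable")
    return f"{model} says: {prompt}"

used, reply = call_with_fallback(["model-a", "model-b"], "hi", flaky_client)
print(used, reply)
```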
-
I've built a feature using LangChain to automate the process of creating agents and tasks.
I'm using AWS Bedrock as my LLM.
Basically, it lists the agents and tasks needed to achieve the goal.
…
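A minimal sketch of the pattern described (ask the model for a structured plan, then build agent and task lists from it). The JSON shape and field names here are assumptions for illustration, not a LangChain or Bedrock API:

```python
import json

def parse_plan(llm_output: str):
    """Parse a JSON plan into (agents, tasks) lists; the shape is hypothetical."""
    plan = json.loads(llm_output)
    agents = [a["role"] for a in plan["agents"]]
    tasks = [t["description"] for t in plan["tasks"]]
    return agents, tasks

# Stand-in for a Bedrock model response containing the plan.
llm_output = json.dumps({
    "agents": [{"role": "researcher"}, {"role": "writer"}],
    "tasks": [{"description": "gather sources"}, {"description": "draft report"}],
})
agents, tasks = parse_plan(llm_output)
print(agents, tasks)
```

Asking the model to answer in a fixed JSON schema (and validating it) is what makes the downstream agent/task creation mechanical.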
-
Hey all, I am trying to run the evaluation file, but it is giving the following errors:
```
(alfworld) srinjoym@user:~/LLM-Planner/src$ python run_eval.py --config gpt4_base_config.yaml
Traceback …
```
-
```python
import os
os.environ["OPENAI_API_KEY"] = "THE KEY"
from langchain_experimental.agents.agent_toolkits import create_csv_agent
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(temperature=0.…
```
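For context, `create_csv_agent` loads the file into a dataframe and lets the model generate code that computes answers over the rows. A stdlib-only sketch of the kind of aggregate such generated code computes (the column names and sample data are illustrative):

```python
import csv
import io

def count_rows_where(csv_text: str, column: str, value: str) -> int:
    """The kind of aggregate a CSV agent's generated code would compute."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for row in reader if row[column] == value)

# Hypothetical sample data standing in for the loaded CSV file.
sample = "name,city\nAda,London\nLin,Paris\nBob,London\n"
print(count_rows_where(sample, "city", "London"))
```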
-
How can I pass a Langfuse callback handler to CrewAI so that the traces are available on their web UI? Here's what I have so far
```python
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
…
```
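In the LangChain stack, Langfuse tracing works by passing its callback handler in the `callbacks` list of the LLM (e.g. `ChatOpenAI(callbacks=[handler])`); whether CrewAI forwards those callbacks may depend on the version. The mechanism itself is an observer pattern, sketched here stdlib-only; the handler class and event names only mirror, and are not, LangChain's actual API:

```python
class RecordingHandler:
    """Minimal stand-in for a callback handler: records events as a trace."""
    def __init__(self):
        self.trace = []

    def on_llm_start(self, prompt):
        self.trace.append(("start", prompt))

    def on_llm_end(self, response):
        self.trace.append(("end", response))

def run_llm(prompt, callbacks):
    """Hypothetical LLM call that notifies every registered handler."""
    for cb in callbacks:
        cb.on_llm_start(prompt)
    response = prompt.upper()  # stand-in for the model's answer
    for cb in callbacks:
        cb.on_llm_end(response)
    return response

handler = RecordingHandler()
run_llm("hello", callbacks=[handler])
print(handler.trace)
```

A real handler would ship each recorded event to the Langfuse backend instead of keeping it in a list.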
-
### Describe the issue
First of all, thank you all for making this awesome tool.
I have recently started using AutoGen, and the use case I am trying to achieve is a sequence of related queries…
-
@mroch @li-boxuan @jeremi @penberg @JensRoland
Integrate a feature that allows users to use multiple LLM models in the project, each with its own special expertise.
For example:
when a user adds 3…
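The multi-model idea above can be sketched as a simple router that dispatches each query to the model registered for its domain. The model names and keyword matching here are illustrative assumptions, not an existing API:

```python
def route(query: str, experts: dict, default: str) -> str:
    """Pick the model whose expertise keyword appears in the query."""
    q = query.lower()
    for keyword, model in experts.items():
        if keyword in q:
            return model
    return default  # fall back to a general-purpose model

# Hypothetical registry: expertise keyword -> specialized model.
experts = {"code": "code-model", "math": "math-model", "legal": "legal-model"}
print(route("please review this code snippet", experts, "general-model"))
print(route("what is the capital of France?", experts, "general-model"))
```

In practice the routing step would itself be an LLM classification call rather than keyword matching, but the dispatch structure is the same.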