NalinDadhich closed this issue 1 year ago
🤖
Hello,
Thank you for providing a detailed description of your issue. From your description, it seems the API chain generated by the `create_openapi_agent` function is not varying with the user query. This is not the expected behavior: the API chain should be determined by the user query.
The `create_openapi_agent` function in LangChain constructs an OpenAPI agent from a language model and a set of tools. It does not directly generate an API chain. Instead, it creates an `LLMChain` using the provided language model and a prompt generated by the `ZeroShotAgent.create_prompt` method. The prompt is constructed from the tools provided by the `OpenAPIToolkit` and the specified prefix, suffix, and format instructions.
The `create_openapi_agent` function then creates a `ZeroShotAgent` from the `LLMChain` and the names of the provided tools. Finally, it returns an `AgentExecutor` built from the `ZeroShotAgent`, the tools, and other optional parameters.
Therefore, the API chain produced by the `create_openapi_agent` function depends on the language model, the tools provided by the `OpenAPIToolkit`, and the user query passed to the `ZeroShotAgent`. Different language models, toolkits, and user queries will result in different API chains.
If the API chain is not varying based on the user query, it could be due to one of the following:
- The language model cannot differentiate between user queries and generates the same plan for all of them. Test this by switching to a different language model or by manually inspecting the plans it generates for different queries.
- The tools provided by the `OpenAPIToolkit` cannot handle the complexity of the user queries. Test this by using a different toolkit or by manually inspecting the tools' output for different queries.
- The `ZeroShotAgent` is misinterpreting the plans generated by the language model. Test this by manually inspecting the agent's output for different queries.
- There may be a bug in the `create_openapi_agent` function or in one of the components it uses. Test this by debugging the function and its components.
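One way to run the first check above is to capture the raw plan text for several clearly different queries and compare them. A minimal self-contained sketch (the `fake_llm` stub, the `plan_for` helper, and the endpoint path are hypothetical; in practice you would call your real model here):

```python
# Diagnostic sketch: if the model emits the same plan for clearly
# different queries, the model itself is the likely culprit, not the
# agent machinery around it.

def fake_llm(prompt: str) -> str:
    # A degenerate stub model that ignores its input entirely,
    # reproducing the symptom described in this issue.
    return "1. GET /count_cars"

def plan_for(query: str) -> str:
    prompt = f"Plan API calls for: {query}"
    return fake_llm(prompt)

queries = [
    "How many cars are in the image?",
    "When was license plate ABC-123 last seen?",
]
plans = {q: plan_for(q) for q in queries}

# Distinct queries collapsing to a single plan flags the model.
if len(set(plans.values())) == 1:
    print("Model is not differentiating between queries")
```

Swapping `fake_llm` for a call to your actual model turns this into a quick A/B check before digging into the toolkit or agent layers.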
I hope this helps. If you have any further questions or if you need more help debugging this issue, please let me know.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
As pointed out by @dosu-beta, changing the language model worked.
Instead of `llm = OpenAI(temperature=0.0)`, I used `gpt-4` with `temperature=0.25` (`llm = OpenAI(model_name="gpt-4", temperature=0.25)`) and it worked.
System Info
LangChain version: 0.0.266, Python 3.11.4
Sample Python snippet:
The sample API spec contains three APIs: `count_cars`, `locate_license_plate`, and `timestamps_of_matching_car_configs`. However, I notice that no matter what question I ask, I see the following API planner response:
The OpenAPI spec definition is as follows: