Closed: rainj-me closed this issue 1 year ago
Have you found the part of the codebase that makes the POST request to the OpenAI API? It should be in the .js portion, since the project seems to use Node.
I'd be interested to find out if it is viable with anything other than GPT-4. From my experience, even GPT-3.5-turbo has trouble performing decent agentic behaviour.
Answering my own question; from the paper: "GPT-4 significantly outperforms GPT-3.5 in code generation and obtains 5.7× more unique items, as GPT-4 exhibits a quantum leap in coding abilities. This finding corroborates recent studies in the literature." And: "The GPT-4 API incurs significant costs. It is 15× more expensive than GPT-3.5. Nevertheless, VOYAGER requires the quantum leap in code generation quality from GPT-4 (Fig. 9), which GPT-3.5 and open-source LLMs cannot provide."
I replaced GPT-4 with GPT-3.5 for all agent_models in this project. I noticed that the generated code was worse.
Perhaps the PaLM 2 API could be substituted, if the embedding model were also swapped out.
@Ellen7ions I plan to use GPT-4, but I'm not sure about the cost.
I don't have access to the GPT-4 API, so I have only been able to play with GPT-3.5, and it definitely has more issues: thinking it hasn't completed tasks that it has, proposing tasks that cannot be reached from the current state, and so on. You could hack your way into setting up a local LLM and using that instead of the APIs.
I recently published a package, llm-client, that can help enable support for running other LLMs, including OpenAI, Google, AI21, Hugging Face Hub, Aleph Alpha, Anthropic, and local models via transformers.
Thank you all for the helpful discussion! We found that GPT-4 is significantly better than GPT-3.5 in terms of reasoning and code generation. Most open-source LLMs are not even on par with GPT-3.5.
Our implementation relies on langchain's ChatOpenAI to interface with OpenAI GPT-4/GPT-3.5 (e.g., the code snippet in ActionAgent). In practice, you can replace that part with other LLMs, but you will probably need a few more changes, such as merging the system message and the human message into a single prompt, since these are special concepts for chat models (see langchain's docs for more details). At this moment, we do not have plans to integrate other LLMs, but we will let you know if this changes in the future.
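For illustration, here is a rough, untested sketch of what such a replacement might look like. The model choices, variable names, and prompt strings below are placeholders, not the actual ActionAgent code:

```python
# Rough sketch only -- model names, variable names, and prompts are placeholders.
from langchain.chat_models import ChatOpenAI
from langchain.llms import HuggingFaceHub  # needs HUGGINGFACEHUB_API_TOKEN set
from langchain.schema import SystemMessage, HumanMessage

# How the agents talk to OpenAI today: a chat model with separate messages.
chat_llm = ChatOpenAI(model_name="gpt-4", temperature=0)
messages = [
    SystemMessage(content="You write Mineflayer JavaScript for Minecraft tasks."),
    HumanMessage(content="Task: craft a wooden pickaxe."),
]
chat_response = chat_llm(messages)  # returns an AIMessage

# A possible swap: a plain text-completion LLM that only takes a single string,
# so the system and human messages get merged into one prompt.
text_llm = HuggingFaceHub(repo_id="google/flan-t5-xl")  # arbitrary example repo
merged_prompt = f"{messages[0].content}\n\n{messages[1].content}"
text_response = text_llm(merged_prompt)  # returns a plain string
```

The prompt merge is the main change mentioned above; anything downstream that parses the model's reply would also need to tolerate the new model's output format.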
Thanks for your understanding!
At the very least, it could be made more friendly to other LLMs, and I think that is a worthy change to consider.
@uripeled2 Good job, I saw the project. Currently, when we want to change the LLM, we have to change the langchain code. Using llm-client would make it easy to swap LLMs for testing this project.
> I replaced GPT-4 with GPT-3.5 for all agent_models in this project. I noticed that the generated code was worse.
Me too. The code from GPT-3.5 often raises errors.
I think it would be an interesting research topic to identify the factors that make GPT-4 so essential to the "education" of Voyager.
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Is it possible to integrate with gpt4all instead of OpenAI?
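For what it's worth, here is a minimal, untested sketch of the kind of wiring I have in mind, using langchain's GPT4All wrapper. The model file path is a placeholder, and the prompt-merging changes described above would still be needed:

```python
# Untested sketch -- the local model file path is a placeholder.
from langchain.llms import GPT4All

llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")
print(llm("Write Mineflayer JavaScript code that chops down the nearest tree."))
```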