unconv / gpt-autopilot

A GPT-4 powered AI agent that can create full projects with iterative prompting
MIT License
304 stars 95 forks

support for Open-Assistant.io/gpt4all #15

Open · vasnt opened this issue 1 year ago

vasnt commented 1 year ago

Instead of using the OpenAI API, can it work with other open-source / locally hosted models like LLaMA, etc.?

unconv commented 1 year ago

Currently it can't, because it uses the new ChatGPT API's function calling feature. I will try to "backport" it so it can work with older models that don't have function calling, and then you should be able to switch to a different model. In theory.
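
Roughly, a backport would mean asking the model to answer in plain JSON and parsing that manually instead of relying on native function calling. A minimal sketch with the openai>=1.0 Python client (nothing like this exists in the project yet; the tool format is just an illustration):

```python
import json
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical fallback: instruct the model to reply with a JSON object and
# parse it ourselves. Works with any chat model, but is less reliable than
# native function calling.
SYSTEM = (
    "You are a coding agent. Reply ONLY with JSON of the form "
    '{"tool": "write_file", "path": "...", "content": "..."}'
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Create a file named hello.txt that says Hello"},
    ],
)

content = response.choices[0].message.content or ""
try:
    action = json.loads(content)
except json.JSONDecodeError:
    action = None  # the model didn't follow the format; a retry would be needed
print(action)
```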

sarfraznawaz2005 commented 8 months ago

It would be great if you added OPENAI_API_BASE to the config file so we can use this with local models. Thanks
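
Something along these lines is what I mean, assuming the openai>=1.0 Python client (OPENAI_API_BASE here is the proposed setting, not something the project reads today):

```python
import os
from openai import OpenAI  # assumes the openai>=1.0 Python client

# Hypothetical: if the override is unset, the client talks to api.openai.com as usual.
base_url = os.environ.get("OPENAI_API_BASE")  # e.g. http://localhost:1234/v1 for LM Studio

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY", "not-needed-for-local"),
    base_url=base_url,
)
```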

unconv commented 8 months ago

@sarfraznawaz2005 Do any local models support function calling?

sarfraznawaz2005 commented 8 months ago

Not sure what you mean, but there are local models that support the OpenAI API format, such as https://github.com/josStorer/RWKV-Runner or https://lmstudio.ai. They expose a local URL, and many similar projects now support them by offering an option for the base URL, in which case the API key is not taken into account.

AutoPilot strictly checks for the API key and otherwise gives an error. If you add a base URL option and it points to a local server, you can modify the code to skip the API key check.
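
For example, the key check could be relaxed roughly like this (hypothetical, I don't know the project's actual code):

```python
import os
import sys

# Hypothetical version of the startup check: only insist on a real key when
# talking to the official endpoint; local servers usually ignore the key.
def check_api_key() -> None:
    base_url = os.environ.get("OPENAI_BASE_URL") or os.environ.get("OPENAI_API_BASE")
    if base_url:
        return  # local / OpenAI-compatible endpoint: any placeholder key will do
    if not os.environ.get("OPENAI_API_KEY"):
        sys.exit("Please set the OPENAI_API_KEY environment variable")
```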

unconv commented 8 months ago

GPT-AutoPilot uses the OpenAI API's function calling feature, which, as far as I know, is not implemented in LLMs other than ChatGPT and Gemini.

You can set the OPENAI_API_KEY environment variable to anything and set OPENAI_BASE_URL to the URL of the local LLM to try it, but the endpoints probably do not support function calling.
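
A quick way to probe this, assuming the openai>=1.0 Python client (the tool here is a placeholder, not the actual GPT-AutoPilot one):

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

# Picks up OPENAI_API_KEY and OPENAI_BASE_URL from the environment, e.g.:
#   OPENAI_API_KEY=anything OPENAI_BASE_URL=http://localhost:1234/v1 python probe.py
client = OpenAI()

# Minimal tool definition; if the endpoint ignores or rejects "tools",
# function calling won't work and GPT-AutoPilot can't drive it.
response = client.chat.completions.create(
    model="gpt-4",  # most local servers ignore or remap the model name
    messages=[{"role": "user", "content": "Create a file named hello.txt"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "write_file",  # placeholder tool for illustration only
            "description": "Write content to a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string"},
                },
                "required": ["path", "content"],
            },
        },
    }],
)
print(response.choices[0].message.tool_calls)  # None means no function call came back
```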

I'm working on another project in TypeScript that does not use the function calling feature, and I might integrate it into this project as well, if I get it to work.

sarfraznawaz2005 commented 8 months ago

Yes, I tried it that way and it does not work. Looking forward to the other project :)