gpt-engineer-org / gpt-engineer

Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
MIT License
51.91k stars 6.76k forks

Use without API? #54

Closed gitihobo closed 1 year ago

gitihobo commented 1 year ago

Is there a way to use this without an API key?

psirdev commented 1 year ago

Maybe you need to implement your own GPT. It would be good to have support for external APIs or certain models, but it would be hard to adapt.

gitihobo commented 1 year ago

Magic's my middle name, my friend! No, I am mainly interested in a local LLM install so I don't have to pay more than I already do.

r7l commented 1 year ago

People are making fun of you for no reason. While I guess we won't see support for gpt4free in the near future, you could try using oobabooga with the OpenAI extension. But it's said to not work so well. Other than that, there is LocalAI, which provides a better-working API replacement.
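Because LocalAI (and oobabooga's OpenAI extension) expose an OpenAI-compatible HTTP API, redirecting a client is mostly a matter of changing the base URL. A minimal stdlib-only sketch, assuming a local server at `localhost:8080` and a hypothetical model name (both are placeholders for whatever your setup uses):

```python
import json
import urllib.request

# Assumption: LocalAI or a similar OpenAI-compatible server is listening here.
LOCAL_API_BASE = "http://localhost:8080/v1"

def build_chat_request(model, messages, base=LOCAL_API_BASE):
    """Build an OpenAI-style chat completion request aimed at a local server."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{base}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "ggml-gpt4all-j",  # hypothetical local model name; use whatever LocalAI serves
    [{"role": "user", "content": "Write a hello-world in Python."}],
)
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would then return the usual OpenAI-shaped JSON response, provided the local server is actually running.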

psirdev commented 1 year ago

How different are the other APIs compared to OpenAI's?

I think we can try to decouple the code from the OpenAI dependency in requirements.txt.
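Decoupling usually means putting a small interface between the agent logic and the model provider. A sketch of what that could look like, using a hypothetical `ChatBackend` abstraction and a dummy backend for testing (none of these names are from the gpt-engineer codebase):

```python
from abc import ABC, abstractmethod

class ChatBackend(ABC):
    """Minimal abstraction so the agent code isn't tied to one provider."""

    @abstractmethod
    def complete(self, messages: list[dict]) -> str:
        """Return the assistant reply for an OpenAI-style message list."""

class EchoBackend(ChatBackend):
    """Stand-in for tests; a real backend would call OpenAI or LocalAI here."""

    def complete(self, messages):
        return messages[-1]["content"]

def run_step(backend: ChatBackend, prompt: str) -> str:
    # The agent loop only sees the interface, never a specific provider.
    return backend.complete([{"role": "user", "content": prompt}])

print(run_step(EchoBackend(), "hello"))  # hello
```

Swapping providers then becomes a matter of writing one more `ChatBackend` subclass rather than touching the agent logic.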

gitihobo commented 1 year ago

Awesome thank you r7l

r7l commented 1 year ago

@psirdev There isn't much (if anything at all) that would beat OpenAI currently.

The main issue I had when playing around with gpt4free was that it basically relies on unofficial APIs, and those might break from one day to the next. The APIs on that list are also not built with the same goals in mind. The results might vary greatly from one API to another, and you can mostly only guess why, since it's not always documented.

I would prefer local LLMs for that matter anyway, but so far the limitation has been the context size. Most of the models were limited to 2k tokens while OpenAI offers 32k. But that is about to change right now, as someone developed landmark-attention-qlora, which seems to let you expand the context size to whatever your hardware can take.

But even so, OpenAI with the official API will always give you better and more stable results, at least for now. The benefit of using local LLMs would be privacy and a way to not end up with a huge API bill by accident, just because your agent decided to write a new operating system from scratch.

patillacode commented 1 year ago

Seems like OP got an accepted answer, closing in favour of keeping the issues to a minimum.

jjhw commented 1 year ago

Look at this coding model -

https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GPTQ

Here is a video showing how well it codes -

https://www.youtube.com/watch?v=XjsyHrmd3Xo

EDIT: Here is a post on Reddit too -

https://old.reddit.com/r/LocalLLaMA/comments/14b1tsw/wizardcoder15b10_vs_chatgpt_coding_showdown_4/

You will also need a model that can work out what type of classes/code to ask the coding model for in the first place.
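The two-stage setup being described (a planner model that decides what code units are needed, feeding a coder model such as WizardCoder) can be sketched as a simple pipeline. Both functions below are stand-ins; in practice each would call an actual model:

```python
def plan(task: str) -> list[str]:
    """Stand-in planner: a general-purpose model would break the task
    into specs for individual classes/functions here."""
    return [f"class handling: {task}"]

def code(spec: str) -> str:
    """Stand-in coder: a coding model (e.g. WizardCoder) would generate
    the implementation for one spec here."""
    return f"# implementation of {spec}"

def build(task: str) -> list[str]:
    """Orchestrate: plan the code units, then generate each one."""
    return [code(spec) for spec in plan(task)]

print(build("parse CSV"))
```

The point of the split is that planning and code generation are different skills, so each stage can use the model best suited to it.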

gitihobo commented 1 year ago

Super interesting, will look into it and how to implement it inside gpt-engineer

jjhw commented 1 year ago

For a system that can work out what type of classes/code to ask the coding model for in the first place, how about using this: https://github.com/TransformerOptimus/SuperAGI