gpt-engineer-org / gpt-engineer

Specify what you want it to build, the AI asks for clarification, and then builds it.

adding a custom api end point #331

Closed · BhagatHarsh closed this 1 year ago

BhagatHarsh commented 1 year ago

I use a custom (free) API endpoint, so I would like to add a feature similar to the existing:

export OPENAI_API_KEY=[your api key]

i.e., I would like to have:

export OPENAI_API_BASE=[your custom api base url]

If you don't export it, the default base URL is used.
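
A minimal sketch of what I have in mind, assuming the pre-1.0 openai Python client where api_base is a module-level setting (the exact wiring inside gpt-engineer may differ):

    import os
    import openai

    # Sketch only: fall back to the stock OpenAI base URL when
    # OPENAI_API_BASE is not exported.
    openai.api_key = os.environ["OPENAI_API_KEY"]
    openai.api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")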

Is it appropriate for me to work on this? If it is already implemented, please let me know.

mgrist commented 1 year ago

As far as I know, you need an OpenAI API key to use this tool; you can get one for free at the OpenAI website. I am not sure what you mean by a custom, free API endpoint.

BhagatHarsh commented 1 year ago

@mgrist thank you for replying. There are ways to get around the paywall: some people provide a free ChatGPT API through reverse-proxy servers, so the endpoint URL is different but behaves like OpenAI's. You can look further into it here: repo

shubham-attri commented 1 year ago

@BhagatHarsh, I do get the point you are trying to make here, but getting around paywalls with proxy servers is unauthorized access to paid services, which is unethical and may violate terms of service or legal agreements. It's important to respect the rights and policies set by the service provider, and to maintain the integrity of the project. Of course, there could be support for running it on a local machine or on Azure, so that we can run the model locally and get by without paying for tokens.

BhagatHarsh commented 1 year ago

@shubham-attri completely agreed; that is why I asked before making a PR.

But does the feature itself violate any policies here?

All I want is a way to change the api_base via export instead of editing the code every time.

How people use it is at their discretion.

bsu3338 commented 1 year ago

FastChat has an OpenAI-compatible API interface to open-source models: https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md

Helicone.ai also requires changing the api_base to use their product: https://docs.helicone.ai/quickstart/integrate-in-one-minute

I am not supporting unethical use, but I do see use cases for adding an api_base option. You would also have to allow the user to define their own model.
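
For instance, pointing the pre-1.0 openai client at a local FastChat server might look roughly like this (port and model name are illustrative, and FastChat doesn't check the key, so any placeholder works):

    import openai

    # Hypothetical local FastChat endpoint serving an open-source model.
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "EMPTY"

    resp = openai.ChatCompletion.create(
        model="vicuna-7b-v1.3",  # whatever model your server exposes
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)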

fat-tire commented 1 year ago

Also see https://localai.io

Along these lines, I'm also wondering about the TERMS_OF_USE.md that doesn't seem to exist, and how this differs from the provisions outlined in the LICENSE.

jet-georgi-velev commented 1 year ago

Same problem here: it doesn't seem to consider OPENAI_API_BASE when running. You have to edit ai.py in order to use a different GPT instance instead of the OpenAI one.

To the folks mentioning unethical practices and other "spooky" nonsense: Microsoft and others offer private instances of the OpenAI models where your data stays private and is not shared, unlike when using OpenAI's API.

mgrist commented 1 year ago

@jet-georgi-velev I didn't think about other providers using OpenAI models, so this feature request seems pretty valid to me. Thanks for the insight!

JinchuLi2002 commented 1 year ago

@jet-georgi-velev I see this can be a useful feature, especially for working with locally deployed LLMs.

In fact, setting the API base in the environment should work by itself, except that the current version verifies model availability against OpenAI by default, which is not what we want if we're just "borrowing" OpenAI's API surface for local inference and not actually contacting OpenAI's service.

This issue has been around for a few days now, so I put together a very short PR that should solve it.
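
Roughly the kind of change I mean, as a sketch (this is not the actual ai.py code; the function name and fallback are illustrative):

    import os
    import openai

    def resolve_model(requested: str) -> str:
        # Sketch: skip the OpenAI model-availability check when a custom
        # endpoint is configured, since a local server won't recognize
        # OpenAI's model list.
        if os.environ.get("OPENAI_API_BASE"):
            return requested
        try:
            openai.Model.retrieve(requested)
            return requested
        except openai.error.InvalidRequestError:
            # Account doesn't have access to the requested model.
            return "gpt-3.5-turbo"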

jet-georgi-velev commented 1 year ago

@JinchuLi2002 I've checked your PR and it won't work, at least for the Azure implementation, since Azure doesn't use model but deployment_id. I've got a more complete patch, but I'm away and can submit it on Tuesday.
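
For reference, the Azure flavour of the pre-1.0 openai client needs roughly the following (all values are placeholders; the api_version in particular depends on your Azure resource):

    import openai

    # Illustrative Azure OpenAI configuration.
    openai.api_type = "azure"
    openai.api_base = "https://<your-resource>.openai.azure.com"
    openai.api_version = "2023-05-15"
    openai.api_key = "<your azure key>"

    resp = openai.ChatCompletion.create(
        deployment_id="<your-deployment>",  # Azure routes by deployment_id, not model
        messages=[{"role": "user", "content": "Hello"}],
    )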

JinchuLi2002 commented 1 year ago

@jet-georgi-velev Hi, thanks for the reply! Yeah, I should've made clear in my PR that it only works for (non-Azure) OpenAI-compatible LLMs, as I am using local inference and had encountered the same issue as the OP.

SumitKumarDev10 commented 1 year ago

I think having a base API option is not that good an idea. It violates their Terms of Use:

"The sharing of API keys is against the Terms of Use. As you begin experimenting, you may want to expand API access to your team. OpenAI does not support the sharing of API keys."

For more information, see: Best Practices for API Key Safety | OpenAI Help Centre

JinchuLi2002 commented 1 year ago

@SumitKumarDev10 Hi Sumit, I think there's some misunderstanding here.

  1. It's not about sharing API secret keys, but merely adding an option to send the query to a custom URL (i.e., say you set up a local LLM on your own GPU with something like FastChat; you can then make openai.ChatCompletion.create() send your query to http://localhost:8000 or wherever it's deployed), rather than the default api.openai.com/v1.
  2. The OpenAI API itself supports switching API endpoints via export OPENAI_API_BASE=; it's just that gpt-engineer had some bugs that block the proper use of it (see the sketch below).
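
Concretely, once that bug is fixed, switching endpoints should take nothing more than (the URL is illustrative):

export OPENAI_API_KEY=dummy
export OPENAI_API_BASE=http://localhost:8000/v1
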
SumitKumarDev10 commented 1 year ago

Thank you, Jinchu, for correcting me and clearing up my confusion. I never knew what an LLM was, so your knowledge and experience on these topics is quite fascinating, at least for a beginner like me.

noxiouscardiumdimidium commented 1 year ago

Yeah, when you're using the exports locally, you're using a fictitious key you just MADE UP. If the feature is implemented properly, all it does is verify that you're allowing your own device to interface with another local port on your own machine. It CANNOT be used to access OpenAI or any other paid service, so such keys never need to be shared with any person, port, or outside machine not explicitly defined and allowed by the end user and owner. The default "api key" for textgen is "dummy", as in a valueless placeholder: as long as both instances have "dummy" set as the key, they can validate their connection. These keys have no monetary value, so there's no point in trading them and zero harm if you do; having no value, they can't be used in any form of theft, misappropriation, or trading of digital credits as part of digital money laundering. Every stipulation about sharing REAL keys in no way applies to an infinite supply of random characters. If I tell you I sometimes use I-AM-NUMBER-1, neither of us is capable of causing or suffering a legally actionable "quantifiable damage-in-fact" xD

SumitKumarDev10 commented 1 year ago

@noxiouscardiumdimidium I am sure you have written something valuable, interesting, and fascinating, but I am sorry: I am still a beginner and don't really know what you are talking about. Please don't take this reply offensively; I am just being honest.

noxiouscardiumdimidium commented 1 year ago

I know; I made a clearer write-up. It's in Discussions, under "Gpt-Engineer+Textgen". The point of the legal breakdown is that OpenAI allows this, and that the server and GitHub rules ONLY apply to REAL keys with monetary value, not to security passwords, which is what you're actually exporting. The ONLY thing OpenAI asks, for using their openai API for local LLM support, is confirmation that the end user has given permission to access the local port, by both endpoints exporting the same key.

SumitKumarDev10 commented 1 year ago

Ok, Thank You

AntonOsika commented 1 year ago

PR open for this, closing already to keep things tidy 🏃