Closed: takiholadi closed this issue 1 year ago
I think OpenAI is trying to shut this down again; I'm switching over to GPT-3 temporarily.
Same here, happened just now
Same here as well. Possible change of the model name by OpenAI?
Yup dead again :(
same here
This isn't good. I just tried using @canfam's code to brute-force test thousands of models and none of them worked.
I've tested all dates (if that's what it was) from 2020 through 2024 and found no valid models on text-davinci-002, so I'm not sure what to make of that :(
same with davinci-001
try:
export GPT_ENGINE=text-davinci-003
and
pip3 install tiktoken
try:
export GPT_ENGINE=text-davinci-003
and
pip3 install tiktoken
Obviously it should work, but then you have to pay for it. The cost would probably be substantially higher than ChatGPT Plus if you use it daily.
I've been running with the theory that one of these could be the model running on the website. I haven't attempted a brute-force run yet, but if anyone wants to try them, feel free.
Note: the 'date' is the most recent update date displayed at the bottom of the website.
Doesn't work for me ((
I've been running with the theory that one of these could be the model running on the website. I haven't attempted a brute-force run yet, but if anyone wants to try them, feel free; a quick probing sketch follows the list below.
Note: the 'date' is the most recent update date displayed at the bottom of the website.
- "text-chat-openai-gpt-20230130"
- "openai-text-chat-gpt-20230130"
- "gpt-text-chat-openai-20230130"
- "text-openai-chatgpt-20230130"
- "openai-chatgpt-text-20230130"
- "chatgpt-openai-text-20230130"
- "openai-gpt-text-chat-20230130"
- "gpt-openai-text-chat-20230130"
- "chat-openai-gpt-text-20230130"
- "text-chat-openai-gpt-model-20230130"
- "openai-text-chat-gpt-version-20230130"
- "chatgpt-openai-text-model-20230130"
- "bing-text-chat-microsoft"
- "microsoft-bing-text-chat"
- "text-microsoft-bing-chat"
- "text-chat-microsoft-bing"
azure
Yes. Currently running extensive brute force
Yes. Currently running extensive brute force
Thank you for all you do!
import openai
import concurrent.futures
import os
from revChatGPT.Official import Chatbot

# Model name template; the XXXXXXXX date suffix is brute-forced below.
defaultModel = "chat-davinci-002-XXXXXXXX"
successfulModels = []

def test_model(model):
    try:
        bot = Chatbot(api_key="...", engine=model)
        response = bot.ask("say this is a test")
        if response:
            successfulModels.append(model)
    except openai.InvalidRequestError:
        print("Model [" + model + "] failed to load. (Invalid Request Error)")

# Try every YYYYMMDD suffix for 2022-2023 across a thread pool.
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as executor:
    for year in range(2022, 2024):
        for month in range(1, 13):
            for day in range(1, 32):
                formatted_year = "{:04d}".format(year)
                formatted_month = "{:02d}".format(month)
                formatted_day = "{:02d}".format(day)
                model = defaultModel.replace("XXXXXXXX", formatted_year + formatted_month + formatted_day)
                executor.submit(test_model, model)

print("Successful Models: " + str(successfulModels))
Can tweak this a bit to brute force things other than date
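One small tweak if anyone reruns the script above: the nested year/month/day loops also submit impossible dates such as 20230230. A sketch that walks real calendar dates instead (reusing the defaultModel, test_model and executor names from the script above):

import datetime

def date_strings(start, end):
    # Yield only real calendar dates as YYYYMMDD strings, skipping things like Feb 30.
    day = start
    while day <= end:
        yield day.strftime("%Y%m%d")
        day += datetime.timedelta(days=1)

# Drop-in replacement for the three nested for-loops above:
# for stamp in date_strings(datetime.date(2022, 1, 1), datetime.date(2023, 12, 31)):
#     executor.submit(test_model, defaultModel.replace("XXXXXXXX", stamp))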
No dated model name is working currently. I'm assuming they changed the naming scheme.
azure
Where on Azure is this?
Good god. Good luck on your effort.
Where on Azure is this?
For people that bought the OpenAI service
Not having any luck brute forcing. I'll overhaul the browser version first
Where on Azure is this?
For people that bought the OpenAI service
What does the screenshot mean exactly? Is that the prefix?
Seems to be the model name used internally
We don't have access to it
Any ideas? Does anyone have a ChatGPT Plus account?
Any ideas? Does anyone have a ChatGPT Plus account?
@acheong08: "I'm overhauling the browser-based version and working towards a proxy service to remove the need for a browser."
@ShengdingHu
Make this change in Official.py (note that it will incur API charges):
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"
Works for me
Successful Models: [] 🤣
try:
export GPT_ENGINE=text-davinci-003
and
pip3 install tiktoken
If we use text-davinci-003, this code is wrong:
if response["choices"][0]["text"] == "<|im_end|>":
    break
OpenAI does not return the marker as one whole token; it comes back split into separate pieces (see the sketch after the list):
<|im
_
end
|
>
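Because the marker arrives in fragments like that, one workaround is to accumulate the streamed text and look for the full marker in the buffer instead of comparing a single chunk. A minimal sketch of that idea (stream_until_end and the chunk list are made up for illustration, not part of revChatGPT):

END_MARKER = "<|im_end|>"

def stream_until_end(chunks):
    # chunks: text fragments in the order they arrive from the streaming API.
    buffer = ""
    for text in chunks:
        buffer += text
        # Stop once the marker has been assembled across fragments.
        if END_MARKER in buffer:
            return buffer.split(END_MARKER, 1)[0]
    return buffer

# The fragments reported above: "<|im", "_", "end", "|", ">"
print(stream_until_end(["Hello", " world", "<|im", "_", "end", "|", ">"]))  # -> Hello world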
It would be easy for OpenAI to keep the ChatGPT model off the API, or to add some access-control mechanism to it, so it may be difficult to use it again. But thanks anyway, @acheong08; we got to experience it for a while.
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"
Works for me
Works for me too
Can you elaborate on that more? What is the solution? Thank you.
try:
export GPT_ENGINE=text-davinci-003
and
pip3 install tiktoken
If we use text-davinci-003, this code is wrong:
if response["choices"][0]["text"] == "<|im_end|>": break
OpenAI does not return the marker as one whole token; it comes back split into separate pieces:
<|im _ end | >
Try:
export GPT_ENGINE=text-davinci-003
and
pip3 install tiktoken
It works for me
Just be careful when using text-davinci-003,
since it's not a free model like ChatGPT. You might face unexpected billing if there's a payment method defined, especially without a hard limit.
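If the billing is a concern, you can at least estimate a request's cost locally before switching. A rough sketch with tiktoken; the $0.02 per 1K tokens rate for text-davinci-003 is an assumption taken from OpenAI's published pricing at the time, so verify it against the current pricing page:

import tiktoken

# p50k_base is the encoding used by the davinci-class completion models.
encoder = tiktoken.get_encoding("p50k_base")

prompt = "Conversation history plus the new question..."
prompt_tokens = len(encoder.encode(prompt))
max_response_tokens = 500  # whatever completion budget you allow per request

# Assumed rate: $0.02 per 1,000 tokens (prompt + completion) for text-davinci-003.
estimated_cost = (prompt_tokens + max_response_tokens) / 1000 * 0.02
print(f"~{prompt_tokens} prompt tokens, up to ${estimated_cost:.4f} per request")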
I tried all the models and they didn't work
Maybe someone who has ChatGPT Plus could brute-force its model name and provide API access to everyone? Just a thought; what do you think, guys? Hmmm
Successful Models: []
@txtspam My temporary solution is
if (response["choices"][0]["text"] == "<|im_end|>"
        or response["choices"][0]["text"].strip() == "<|im"):
    break
Maybe someone who has ChatGPT Plus could brute-force its model name and provide API access to everyone? Just a thought; what do you think, guys? Hmmm
There is a ChatGPT Plus model available, or a paid trial.
Yeah, but ChatGPT Plus is different from the OpenAI trial: it's unlimited if you get the base model, just like the leak that was shut down recently. Maybe if someone who has ChatGPT Plus brute-forces its model name they could share API access, but we don't know whether it works like that, lol.
Here's the temporarily working Official.py solution, but it costs you:
from
ENGINE = os.environ.get("GPT_ENGINE") or "text-chat-davinci-002-20221122"
to
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"
from
ENCODER = tiktoken.get_encoding("gpt2")
to
ENCODER = tiktoken.get_encoding("p50k_base")
from
if response["choices"][0]["text"] == "<|im_end|>": break
to
if (response["choices"][0]["text"].strip() == "<|im_end|>" or response["choices"][0]["text"].strip() == "<|im"): break
Hope it helps for a while.
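Pulling those three edits together, the affected lines of Official.py would end up looking roughly like this; it's only a sketch, with the surrounding structure assumed from the snippets quoted in this thread rather than from the actual file:

import os
import tiktoken

# 1) Fall back to the paid completion model instead of the dead leaked one.
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"

# 2) davinci-class models use the p50k_base encoding rather than gpt2.
ENCODER = tiktoken.get_encoding("p50k_base")

# 3) Inside the streaming loop, the end marker can arrive split across chunks,
#    so also match the stripped fragment:
#    text = response["choices"][0]["text"]
#    if text.strip() in ("<|im_end|>", "<|im"):
#        break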
This worked:
from revChatGPT.Official import Chatbot
client = Chatbot('your-openai-api', engine='text-davinci-003')
resp = client.ask("What is chatGPT version?")
print(resp)
But it costs credits with engine='text-davinci-003'. Be careful.
Model works again
Maintenance from OpenAI
Maintenance from OpenAI
thank you for the effort much love <3
Naisu.
Hi everyone,
I saw that the model works again, but I am experiencing the same error over and over again:
from revChatGPT.Official import Chatbot
chatbot = Chatbot(api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxx")
response = chatbot.ask("Say hi", temperature=0)
response = response['choices'][0]['text']+'\n'
print(response)
InvalidRequestError: That model does not exist
This error again.