Closed: mishl-dev closed this issue 1 year ago
I'm not sure why, but this command doesn't work when running in the terminal.
Idk man just use this
import aiohttp

async def generate_response(prompt):
    base_url = 'https://gpt4.gravityengine.cc/api/openai/'
    error_base_url = 'https://askgpt.cn/api/openai/'
    arguments = '/v1/engines/text-davinci-003/completions'
    endpoint = base_url + arguments
    headers = {
        'Content-Type': 'application/json',
    }
    data = {
        'prompt': prompt,
        'max_tokens': 800,
        'temperature': 0.8
    }
    try:
        # Try the primary endpoint first.
        async with aiohttp.ClientSession() as session:
            async with session.post(endpoint, headers=headers, json=data) as response:
                response_data = await response.json()
                return response_data['choices'][0]['text']
    except aiohttp.ClientError:
        # If the primary endpoint fails, retry once against the fallback endpoint.
        print('Error making the request, retrying with the fallback endpoint')
        endpoint = error_base_url + arguments
        async with aiohttp.ClientSession() as session:
            async with session.post(endpoint, headers=headers, json=data) as response:
                response_data = await response.json()
                return response_data['choices'][0]['text']
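If you want to try it quickly from a script or the REPL instead of the bot, something like this should do (the prompt here is just a placeholder):

import asyncio

print(asyncio.run(generate_response('Hello!')))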
I've edited your comment with the working code.
I got the terminal command working, and this will be useful for actually using it. Thanks!
Should I use the davinci model in the chatbot?
The base davinci model is kinda shit, use text-davinci-003 instead.
By the way, how do you keep finding all this stuff?
Thanks!
Added text-davinci-003
https://gpt4.gravityengine.cc/api/openai/v1/chat/completions
Request example for chat-style models:
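Something along these lines should work with aiohttp, assuming the endpoint accepts the standard OpenAI chat completions schema (the gpt-3.5-turbo model name and the message format are assumptions, not confirmed by the proxy):

import aiohttp

async def chat_request(prompt):
    # Assumed chat-style payload following the standard OpenAI chat schema.
    endpoint = 'https://gpt4.gravityengine.cc/api/openai/v1/chat/completions'
    data = {
        'model': 'gpt-3.5-turbo',  # assumed model name
        'messages': [{'role': 'user', 'content': prompt}],
        'temperature': 0.8
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(endpoint, json=data) as response:
            response_data = await response.json()
            # Chat responses put the text under 'message' -> 'content'.
            return response_data['choices'][0]['message']['content']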
And for normal models:
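A matching sketch for normal (completion-style) models, assuming the request mirrors the text-davinci-003 call in generate_response above (the path and response shape are carried over from that snippet):

import aiohttp

async def completion_request(prompt):
    # Assumed completion-style payload, mirroring generate_response above.
    endpoint = 'https://gpt4.gravityengine.cc/api/openai/v1/engines/text-davinci-003/completions'
    data = {
        'prompt': prompt,
        'max_tokens': 800,
        'temperature': 0.8
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(endpoint, json=data) as response:
            response_data = await response.json()
            # Completion responses put the text under 'text'.
            return response_data['choices'][0]['text']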