mshumer / gpt-author

MIT License

The model: `gpt-4-0613` does not exist #1

Open botolo opened 1 year ago

botolo commented 1 year ago

I am getting this error when launching the script on Colab.

iwanikhalid commented 1 year ago

Facing the same issue.

@mshumer Is the API key obtained from openai.com enough for GPT-4, or do you need to be on the waiting list?

mshumer commented 1 year ago

You need access to GPT-4 to use this model. If you don't yet have access, you can replace all instances of that model with `gpt-3.5-turbo-16k`.
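For anyone unsure where the swap goes, here's a minimal sketch; the helper below is illustrative (the notebook's calls are spread across several cells), but each call goes through `openai.ChatCompletion.create`, so the change is just the `model` argument:

```python
import openai

# Hypothetical helper showing the model swap; only the `model` argument changes.
def generate(prompt, model="gpt-3.5-turbo-16k"):  # was "gpt-4-0613"
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```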

botolo commented 1 year ago

I have GPT-4 but I am still getting this error.

mshumer commented 1 year ago

Hmm, that's really odd. In that case, try switching it out for gpt-4 and let me know if you experience any issues!

iwanikhalid commented 1 year ago

Thanks @mshumer, mine works after changing the code to GPT-3.5. The next error that pops up is:

RateLimitError: You exceeded your current quota, please check your plan and billing details.

Do I need to upgrade my OpenAI API account?

mshumer commented 1 year ago

You may need higher rate limits. One thing you can do in the meantime is add minute-long timers that pause between each OpenAI call so they don't trigger the rate limit.

If you do this, please update the repo!
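A minimal sketch of that workaround, assuming the pre-1.0 `openai` Python client the notebook uses; the wrapper name and the 60-second pause are illustrative, not project code:

```python
import time
import openai

def throttled_chat(messages, model="gpt-3.5-turbo-16k", pause_seconds=60):
    """Call the chat endpoint, then sleep so back-to-back calls stay under the rate limit."""
    response = openai.ChatCompletion.create(model=model, messages=messages)
    time.sleep(pause_seconds)  # minute-long pause between consecutive calls
    return response["choices"][0]["message"]["content"]
```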

pogic commented 1 year ago

You need to apply for the OpenAI GPT-4 beta waitlist

AgimaFR commented 1 year ago

I modified the Colab file to define a global variable `my_model = "gpt-3.5-turbo-16k"` and replaced every `model="gpt-4-0613"` (in the prints as well) with a call to my new variable, and it works fine. I'm currently modifying the program so the novel style can be set (fantasy, futuristic, anticipation...). I'm also going to add automatic translation afterwards, so I can choose the language (English, French...) and see what it does.
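For reference, a minimal sketch of that change, assuming the notebook's calls go through `openai.ChatCompletion.create`; the surrounding code is illustrative:

```python
import openai

# Global model selector; swap to "gpt-4-0613" if your key has GPT-4 access.
my_model = "gpt-3.5-turbo-16k"

response = openai.ChatCompletion.create(
    model=my_model,  # previously model="gpt-4-0613"
    messages=[{"role": "user", "content": "Write the first chapter outline."}],
)
print(f"Generated with {my_model}")  # the prints reference the same variable
```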

mshumer commented 1 year ago

That's awesome! Can you update the repo with your code?

snip3r009 commented 1 year ago

> I modified the Colab file to define a global variable `my_model = "gpt-3.5-turbo-16k"` and replaced every `model="gpt-4-0613"` (in the prints as well) with a call to my new variable, and it works fine. I'm currently modifying the program so the novel style can be set (fantasy, futuristic, anticipation...). I'm also going to add automatic translation afterwards, so I can choose the language (English, French...) and see what it does.

Nice job! I can't wait to see the result.

jaydeflix commented 1 year ago

Even after switching to GPT-3.5, I'm still getting the error, but only AFTER "Generating storyline with chapters and high-level details..." and three more step notifications. I'm guessing it's a timeout/server-side issue and I need to introduce some delays?

snip3r009 commented 1 year ago

> Even after switching to GPT-3.5, I'm still getting the error, but only AFTER "Generating storyline with chapters and high-level details..." and three more step notifications. I'm guessing it's a timeout/server-side issue and I need to introduce some delays?

Post a screenshot of the error.

AgimaFR commented 1 year ago

Here's a link to my Colab notebook, which I've quickly modified to incorporate the following changes:

  1. Added the following parameters to the program:
    • Choice of `gpt-3.5-turbo-16k` or `gpt-4-0613` for those without access to GPT-4 (GPT-4 is currently in a limited beta)
    • Novel style, so that you can request novels other than fantasy (anticipation, romance...)
    • `author = "GPT-Author"` to easily change the name of the book's author
    • `destLanguage` to indicate the desired language code for the book
  2. The main parameters are centralized in a form at the start of the program (see the sketch below).
  3. Google Translate is used for translations.
  4. Added a `--quiet` parameter to the `!pip install...` command.

Link to my modified Colab notebook. @mshumer It's up to you to decide what you want to include in your GPT-Author project.

⚠️ The translation into French causes an error that I haven't corrected yet (perhaps because of the accents and encoding 🧐; still to be investigated).
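A minimal sketch of what such a Colab parameter form and translation step could look like; the variable names follow the list above (`my_model`, `author`, `destLanguage`), but the form fields and the `googletrans` call are illustrative, not the actual notebook code:

```python
# Colab form fields: the "#@param" annotations render as an input form at the top of the notebook.
my_model = "gpt-3.5-turbo-16k"  #@param ["gpt-3.5-turbo-16k", "gpt-4-0613"]
novel_style = "fantasy"         #@param ["fantasy", "anticipation", "romance", "espionage"]
author = "GPT-Author"           #@param {type:"string"}
destLanguage = "fr"             #@param {type:"string"}

# Translation via googletrans (requires `pip install googletrans`); long chapters
# may need to be split into chunks, and accents/encoding issues can surface here.
from googletrans import Translator

def translate_text(text, dest=destLanguage):
    return Translator().translate(text, dest=dest).text
```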

snip3r009 commented 1 year ago

> Here's a link to my Colab notebook, which I've quickly modified to incorporate the following changes:
>
>   1. Added the following parameters to the program:
>     • Choice of `gpt-3.5-turbo-16k` or `gpt-4-0613` for those without access to GPT-4 (GPT-4 is currently in a limited beta)
>     • Novel style, so that you can request novels other than fantasy (anticipation, romance...)
>     • `author = "GPT-Author"` to easily change the name of the book's author
>     • `destLanguage` to indicate the desired language code for the book
>   2. The main parameters are centralized in a form at the start of the program.
>   3. Google Translate is used for translations.
>   4. Added a `--quiet` parameter to the `!pip install...` command.
>
> Link to my modified Colab notebook. @mshumer It's up to you to decide what you want to include in your GPT-Author project.
>
> ⚠️ The translation into French causes an error that I haven't corrected yet (perhaps because of the accents and encoding 🧐; still to be investigated).

Ahh, thanks my friend!! Thanks for posting. After I changed the language to Dutch (nl) it also gives me errors, so I changed it back to English. But if I set `Novelstyle = Espionage` and the last cell still shows `novel, title, chapters, chapter_titles = write_fantasy_novel(prompt, num_chapters, writing_style)` (the function is `write_fantasy_novel`), will it still make a fantasy novel?

AgimaFR commented 1 year ago

@snip3r009

andydivers commented 1 year ago

How much will OpenAI charge for 15 chapters if the model is 3.5?

snip3r009 commented 1 year ago

> How much will OpenAI charge for 15 chapters if the model is 3.5?

Around $2 I think, not much.

andydivers commented 1 year ago

> How much will OpenAI charge for 15 chapters if the model is 3.5?
>
> Around $2 I think, not much.

OK, just wondering how to set up the minute-long timers that pause between each OpenAI call the correct way, so they won't overcharge me :)

andydivers commented 1 year ago

Oh, ok. Thanks!

snip3r009 commented 1 year ago

> add minute-long timers that pause between each OpenAI call

```python
import time
import openai

# Set the OpenAI API key
openai.api_key = 'YOUR_API_KEY'

# Your code logic
prompt = "Your prompt goes here."

# Make the first API call
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "system message"},
        {"role": "user", "content": prompt}
    ]
)

# Process the response or perform any required operations
processed_response = process_response(response)

# Pause for a minute before making the next API call
time.sleep(60)

# Make the second API call
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "system message"},
        {"role": "user", "content": prompt}
    ]
)

# Continue with the rest of your code
```

jaydeflix commented 1 year ago

> Even after switching to GPT-3.5, I'm still getting the error, but only AFTER "Generating storyline with chapters and high-level details..." and three more step notifications. I'm guessing it's a timeout/server-side issue and I need to introduce some delays?
>
> Post a screenshot of the error.

```
step cost: 0.003048
step cost: 0.004698
step cost: 0.004253
step cost: 0.0020550000000000004
Generating storyline with chapters and high-level details...
step cost: 0.003908
step cost: 0.004938
step cost: 0.007009

InvalidRequestError                       Traceback (most recent call last)
in ()
      3 num_chapters = 10
      4 writing_style = "Clear and easily understandable, similar to a young adult novel. Highly descriptive and sometimes long-winded."
----> 5 novel, title, chapters, chapter_titles = write_fantasy_novel(prompt, num_chapters, writing_style)
      6
      7 # Replace chapter descriptions with body text in chapter_titles

6 frames
/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py in _interpret_response_line(self, rbody, rcode, rheaders, stream)
    761         stream_error = stream and "error" in resp.data
    762         if stream_error or not 200 <= rcode < 300:
--> 763             raise self.handle_error_response(
    764                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    765             )

InvalidRequestError: The model `gpt-3.5-turbo-16k-32k-0613` does not exist
```

snip3r009 commented 1 year ago

> InvalidRequestError: The model `gpt-3.5-turbo-16k-32k-0613` does not exist

Try this one: `gpt-3.5-turbo`.
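If model names keep getting rejected, a quick way to check which model IDs your key can actually use (assuming the same pre-1.0 `openai` client used above) is to list them:

```python
import openai

openai.api_key = 'YOUR_API_KEY'

# Print every model id the account has access to; pick one of these
# (e.g. gpt-3.5-turbo or gpt-3.5-turbo-16k) for the notebook's calls.
for model in openai.Model.list()["data"]:
    print(model["id"])
```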