Open dsandua opened 1 year ago
Reduce the Chapters, or change to gpt-3.5-turbo-16k
Do you have a payment plan on platform.openai.com?
I changed my chapters to 15 max. If you try to set the chapters to 20, you will get errors; it has to do with the rate limits:
| Model | RPM (requests/min) | TPM (tokens/min) |
| --- | --- | --- |
| gpt-3.5-turbo | 3,500 | 90,000 |
| gpt-3.5-turbo-0301 | 3,500 | 90,000 |
| gpt-3.5-turbo-0613 | 3,000 | 250,000 |
| gpt-3.5-turbo-16k | 3,500 | 180,000 |
| gpt-3.5-turbo-16k-0613 | 3,500 | 180,000 |
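Hitting those per-minute limits raises an error rather than queuing the request, so one workaround is to wrap each API call in an exponential-backoff retry. A minimal sketch, not part of the project's code: `fn` stands for any zero-argument call, e.g. a `lambda` around `openai.ChatCompletion.create(...)`.

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(); on failure, sleep base_delay * 2**attempt (plus jitter) and retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # with the openai package, catch openai.error.RateLimitError
            if attempt == max_retries - 1:
                raise  # out of retries, let the error surface
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)

# Usage (hypothetical): wrap the chapter-writing call
# response = retry_with_backoff(
#     lambda: openai.ChatCompletion.create(model="gpt-3.5-turbo-16k", messages=messages)
# )
```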
> Reduce the Chapters, or change to gpt-3.5-turbo-16k

Ok, I'll try it out.
> Do you have a payment plan on platform.openai.com?

Yes, I do.
OK, thank you so much
Switched to 10 chapters and used gpt-3.5-turbo-16k, and it finally completed... but it did not output an .epub. I was running from command-line Python instead of Jupyter; is that the issue?
I ran into this same issue (and reducing the chapters to 15 solved it). Is there a way to remove this limitation? I have a paid OpenAI account, but apparently don't have gpt-4-0613; I had to use gpt-3.5-turbo-16k-0613 to get it to work. Is there an easy way to get GPT-4 access that I'm just not seeing on my paid account? Thanks!
On the left side, you will see a folder; click on that folder, and there you will find the epub. You must right-click and download the epub to your machine.
> gpt-4-0613

I just found this article about it: https://tech.co/news/chatgpt-openai-update-price-reduction
We are on the waiting list, I think.
This thread saved me thanks everyone for posting.
Using 3.5, I ran into this:

```
InvalidRequestError: This model's maximum context length is 16385 tokens. However, your messages resulted in 16394 tokens. Please reduce the length of the messages.
```

Is there any way to throttle the code to avoid this?
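One way to avoid blowing past the context window is to trim the running conversation before each request. A rough sketch, assuming the messages-list format used by the Chat Completions API; `rough_token_count` is a ~4-characters-per-token estimate, not exact (for exact counts, use the tiktoken package with the model's encoding):

```python
def rough_token_count(text):
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_messages(messages, max_tokens, count=rough_token_count):
    """Drop the oldest non-system messages until the estimated total fits.

    Returns a new list; the system prompt (index 0) and the newest
    message are always kept.
    """
    kept = list(messages)

    def total(msgs):
        return sum(count(m["content"]) for m in msgs)

    while len(kept) > 2 and total(kept) > max_tokens:
        kept.pop(1)  # drop the oldest message after the system prompt
    return kept
```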
> Using 3.5, I ran into this:
> InvalidRequestError: This model's maximum context length is 16385 tokens. However, your messages resulted in 16394 tokens. Please reduce the length of the messages.
> Is there any way to throttle the code to avoid this?

How many chapters did you put in the code?
Yes, then the code will work. Until the GPT-4 issue is fixed, you need to make a novel with 15 chapters max. Maybe you could try 16 to see if it works for you; if not, change back to 15 chapters.
Thank you!
I got this problem when generating chapter 9 :( so I guess I should reduce the number of chapters to 8?
> I got this problem when generating chapter 9 :( so I guess I should reduce the number of chapters to 8?

How many chapters did you put in the code?
Same issues as above. gpt-4 can complete about 8 chapters. About to try 15 chapters with gpt-3.5-turbo-16k-0613. I would much rather use gpt-4, of course!
> Same issues as above. gpt-4 can complete about 8 chapters. About to try 15 chapters with gpt-3.5-turbo-16k-0613. I would much rather use gpt-4, of course!

Style appears to be a big factor affecting chapters. If you change the style to something more narrative, it will drop the number of chapters, and I assume there are many more combinations that can drop it further. Plenty of testing and playing to do. It's early days.
> Same issues as above. gpt-4 can complete about 8 chapters. About to try 15 chapters with gpt-3.5-turbo-16k-0613. I would much rather use gpt-4, of course!

Yes, me too. I am on the waiting list to get GPT-4 API keys; for now I am using gpt-3.5-turbo-16k, and the max output is 16 chapters.
I replaced all the model lines with a global `my_model = "gpt-3.5-turbo-16k"`, and now it can produce 17 chapters if lucky :)
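The single-global pattern that comment describes can look like the sketch below. `build_request` is a hypothetical helper name, and the commented-out line shows where the old-style `openai.ChatCompletion.create` call would consume it; switching models then becomes a one-line change.

```python
# One module-level constant instead of a model string on every call site.
MY_MODEL = "gpt-3.5-turbo-16k"

def build_request(messages, model=MY_MODEL, temperature=0.7):
    """Collect the keyword arguments for a chat-completion call."""
    return {"model": model, "messages": messages, "temperature": temperature}

# Every chapter-writing call then reads the same constant:
# response = openai.ChatCompletion.create(**build_request(messages))
```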
```
step cost: 0.0037365
step cost: 0.003919
step cost: 0.0022845
step cost: 0.0011350000000000002
Generating storyline with chapters and high-level details...
step cost: 0.0023225
step cost: 0.00244
step cost: 0.0026709999999999998
step cost: 0.0026709999999999998
Writing chapter 2...
Output for prompt "write_chapter_1675" has been written to prompts/write_chapter_1675.txt
step cost: 0.005482
Writing chapter 3...
Output for prompt "write_chapter_990" has been written to prompts/write_chapter_990.txt
step cost: 0.006755499999999999
Writing chapter 4...
Output for prompt "write_chapter_1076" has been written to prompts/write_chapter_1076.txt
step cost: 0.0081215
Writing chapter 5...
Output for prompt "write_chapter_114" has been written to prompts/write_chapter_114.txt
step cost: 0.0098535
...
Writing chapter 17...
Output for prompt "write_chapter_639" has been written to prompts/write_chapter_639.txt
step cost: 0.024923
```
https://drive.google.com/file/d/1vpPESCCyhZDu_N0D9NiTxCQY4PwShtR2/view?usp=sharing
kinda maybe not even bad
> I got this problem when generating chapter 9 :( so I guess I should reduce the number of chapters to 8?
> How many chapters did you put in the code?

I put 20 chapters.
> I got this problem when generating chapter 9 :( so I guess I should reduce the number of chapters to 8?
> How many chapters did you put in the code?
> I put 20 chapters.

Try 16.
> I got this problem when generating chapter 9 :( so I guess I should reduce the number of chapters to 8?
> How many chapters did you put in the code?
> I put 20 chapters.
> Try 16.

Cheers mate, I will try this amount.
```
InvalidRequestError: The model: gpt-4-0613 does not exist
```

Please advise: do I have to replace gpt-4-0613 with "gpt-3.5-turbo-16k" in all lines of the code that contain it?
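For reference, all occurrences can be swapped in one pass with a short script instead of editing each line by hand. This is a sketch, not part of the project; the filename `write_story.py` in the usage comment is hypothetical, so substitute the actual script name.

```python
from pathlib import Path

def swap_model(path, old="gpt-4-0613", new="gpt-3.5-turbo-16k"):
    """Replace every occurrence of one model name with another in a source file.

    Returns how many occurrences were replaced.
    """
    p = Path(path)
    src = p.read_text()
    p.write_text(src.replace(old, new))
    return src.count(old)

# Usage (hypothetical filename):
# swap_model("write_story.py")
```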
> InvalidRequestError: The model: gpt-4-0613 does not exist
> Please advise: do I have to replace gpt-4-0613 with "gpt-3.5-turbo-16k" in all lines of the code that contain it?

Why don't you use the edited version from AgimaFR? Then you don't have to change the code. Here it is: https://colab.research.google.com/drive/14kGquPkyfQXjPJnfz7hpixGA2j2FKMYW?usp=sharing
Even with that, I keep getting the error starting from chapter 10 (and I have the GPT-4 beta, using gpt-4-0613):

```
InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8198 tokens. Please reduce the length of the messages.
```
Maybe it would be wise to add length checking before sending a request...
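A preflight check along those lines could look like this sketch. The context limits come from the errors quoted in this thread (16385 for gpt-3.5-turbo-16k, 8192 for gpt-4; 4096 for base gpt-3.5-turbo at the time), and the token estimate is a rough 4-characters-per-token heuristic; tiktoken gives exact counts.

```python
# Context windows as reported by the errors in this thread.
CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16385,
    "gpt-4": 8192,
}

def estimate_tokens(messages):
    """Rough estimate: ~4 chars per token, plus a few tokens of per-message overhead."""
    return sum(len(m["content"]) // 4 + 4 for m in messages)

def check_fits(messages, model, reserve_for_reply=1000):
    """Raise before the API call if the prompt (plus reply head-room) won't fit."""
    limit = CONTEXT_LIMITS.get(model, 4096)
    needed = estimate_tokens(messages) + reserve_for_reply
    if needed > limit:
        raise ValueError(
            f"~{needed} tokens needed but {model} allows {limit}; "
            "trim earlier chapter summaries before sending"
        )
    return needed
```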
So how is it going, guys? Did anyone get what you wanted?
> gpt-4-0613
> I just found this article about it: https://tech.co/news/chatgpt-openai-update-price-reduction
> We are on the waiting list, I think.
I read the same. However, according to OpenAI's help article, "For API accounts created after August 18, 2023, you can get instant access to GPT-4 after purchasing $0.50 worth or more of pre-paid credits" (John, 2023). I tested this by creating a new account using my main phone number (no trial credits), purchasing $5 of pre-paid credits, creating an API key, and adding it. I removed any references to gpt-4-32k and replaced them with gpt-4. That works to get past the "I don't have access to GPT-4" problem.
Reference: John, J. (2023). How can I access GPT-4? OpenAI Help Center. https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4
This error occurs at the beginning of chapter 10:

```
InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8214 tokens. Please reduce the length of the messages.
```