Closed — Sweetdevil144 closed this issue 1 year ago
having the same issue bro, did you find out how to fix?
Not right now, but I hope to find one soon. I thought using gpt-4-8k would solve the problem, but it didn't budge and gave this error:

The model gpt-4-8k does not exist.
Then I tried to increase the token limit by using gpt-3.5-turbo-16k, which raises the context window to 16k tokens.
I think one possible approach may be to split the video into multiple segments first and then process each segment separately. What do you think about this @GuilhermeSBlanco
The issue is with context length.
The longer the video the bigger the transcripts.
Ideally the transcript should be split in chunks and then processed one by one.
I will be rewriting the code with langchain soon.
If you are getting token limit exceeded try with smaller videos of 5 mins.
I have access to gpt3.5 16k and gpt4 8k.
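The chunk-and-process idea described above can be sketched in a few lines. This is a minimal illustration, not the repo's actual code: `chunk_transcript` and the rough words-per-token estimate are assumptions (a real implementation would count tokens with a tokenizer such as tiktoken and feed each chunk to the model in turn).

```python
def chunk_transcript(text: str, max_tokens: int = 3000) -> list[str]:
    """Split a long transcript into pieces that each fit under a
    model's context window, so they can be processed one by one.

    Token count is approximated as ~0.75 words per token, a common
    rough heuristic; swap in a real tokenizer for accurate counts.
    """
    words = text.split()
    max_words = max(1, (max_tokens * 3) // 4)  # ~0.75 words per token
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each returned chunk can then be summarized individually and the partial summaries combined, which is essentially what LangChain's map-reduce summarization chain automates.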
Do you guys know a way to access gpt-4 without paying for it? I'm Brazilian and it's kind of expensive for me.
No buddy, you can't. You can try gpt-3.5-turbo-16k to increase the token limit.
raise self.handle_error_response( openai.error.InvalidRequestError: The model gpt-4 does not exist