vio1ator closed this issue 1 year ago
So far so good. Using GPT-3.5 greatly improved the output.
Oh hell yea. I'll incorporate this into the codebase today
Note that the check for using too many tokens isn't correct for 3.5. I'm working on a patch.
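Until the patch lands, here is a minimal sketch of what a per-model token check could look like. The `maxContextTokens` map and `checkTokenBudget` helper are hypothetical names (not from the codebase); the 4097-token figure for gpt-3.5-turbo comes from the error message below, and the gpt-4 entry is illustrative.

```go
package main

import "fmt"

// maxContextTokens maps model names to their maximum context window.
// 4097 for gpt-3.5-turbo matches the API error below; the gpt-4
// value is illustrative only.
var maxContextTokens = map[string]int{
	"gpt-3.5-turbo": 4097,
	"gpt-4":         8192,
}

// checkTokenBudget (hypothetical helper) returns an error when the
// prompt would not leave room for maxReplyTokens in the model's
// context window.
func checkTokenBudget(model string, promptTokens, maxReplyTokens int) error {
	limit, ok := maxContextTokens[model]
	if !ok {
		return fmt.Errorf("unknown model %q", model)
	}
	if promptTokens+maxReplyTokens > limit {
		return fmt.Errorf("prompt uses %d tokens; only %d available for %s",
			promptTokens, limit-maxReplyTokens, model)
	}
	return nil
}

func main() {
	// 5382 prompt tokens exceed gpt-3.5-turbo's 4097-token window,
	// reproducing the failure reported below.
	if err := checkTokenBudget("gpt-3.5-turbo", 5382, 500); err != nil {
		fmt.Println("error:", err)
	}
}
```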
```
Actor c4cc41ce fatal error: [400:invalid_request_error] This model's maximum context length is 4097 tokens. However, your messages resulted in 5382 tokens. Please reduce the length of the messages.
Setting up libpython3.10-dev:arm64 (3.10.6-1~22.04.2ubuntu1) ...
Setting up python3-pip (22.0.2+dfsg-1ubuntu0.2) ...
Setting up python3.10-dev (3.10.6-1~22.04.2ubuntu1) ...
c4cc41ce: cleaning up container
```
Completed this, @ttulttul. Sorry if I preempted you.
No worries!
I've asked GPT-4 to modify the genDialogue() function to use gpt-3.5-turbo instead. It seems to work fine for me, but I'm not familiar with Go. Can someone take a look and see if it looks good?