Closed spencerwongfeilong closed 1 year ago
Hello, I did not see any error message in the log output above. As I don't currently have a token with GPT-4 access, I was unable to reproduce this issue.
I get that error too, but only when using 3096 as the token limit. I'll try to grab a log of it the next time it happens.
Ok, so I've done a little testing and there's not much showing in the logs. It happens with both the 3.5 and 4 models.
However, in case it's helpful, here's a comparison between using 3096 tokens (which now always throws the error ''3096' is not of type 'integer' - 'max_tokens'') and using 3028 tokens, which doesn't throw an error:
Client - 3096 tokens - ERROR = ''3096' is not of type 'integer' - 'max_tokens''
2023-04-01T15:10:24.306598003Z 192.168.1.4 - - [01/Apr/2023:15:10:24 +0000] "POST /api/conversation/ HTTP/1.1" 200 142 "MYHOST" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" "192.168.1.1"
Client - 3028 tokens - NO ERROR
2023-04-01T15:13:53.639992319Z 192.168.1.4 - - [01/Apr/2023:15:13:53 +0000] "POST /api/conversation/ HTTP/1.1" 200 500 "MYHOST" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" "192.168.1.1"
Web Server - 3096 tokens - ERROR = ''3096' is not of type 'integer' - 'max_tokens''
2023-04-01T15:10:24.305607351Z 192.168.208.4 - - [01/Apr/2023:15:10:24 +0000] "POST /api/conversation/ HTTP/1.0" 200 131 "MYHOST" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" "192.168.1.1, 192.168.1.4"
Web Server - 3028 tokens - NO ERROR
2023-04-01T15:13:53.639234277Z 192.168.208.4 - - [01/Apr/2023:15:13:53 +0000] "POST /api/conversation/ HTTP/1.0" 200 488 "MYHOST" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" "192.168.1.1, 192.168.1.4"
WSGI Server - 3096 tokens - ERROR = ''3096' is not of type 'integer' - 'max_tokens''
2023-04-01T15:10:24.302860969Z Warning: gpt-3.5-turbo may change over time. Returning num tokens assuming gpt-3.5-turbo-0301.
2023-04-01T15:10:24.303455586Z 192.168.208.3 - - [01/Apr/2023:15:10:24 +0000] "POST /api/conversation/ HTTP/1.0" 200 131 "MYHOST" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
WSGI Server - 3028 tokens - NO ERROR
2023-04-01T15:13:53.636778031Z Warning: gpt-3.5-turbo may change over time. Returning num tokens assuming gpt-3.5-turbo-0301.
2023-04-01T15:13:53.637423849Z 192.168.208.3 - - [01/Apr/2023:15:13:53 +0000] "POST /api/conversation/ HTTP/1.0" 200 488 "MYHOST" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
To replicate, I changed Max tokens from 1000 to 4000 and sent "Hi" to OpenAI.
The error message "'4000' is not of type 'integer' - 'max_tokens'" then appears.
Even if I revert back to 1000 max tokens, the error keeps happening. The only way to fix it is to clear the cache. This bug renders the client unusable.
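For what it's worth, the fact that clearing the cache fixes it suggests the client is caching the Max tokens value as a string: the server validates the request against a JSON schema, and the string "4000" fails the integer check even though it looks numeric. A minimal sketch of that failure mode and of coercing the value before sending (the helper name and payload here are illustrative, not the project's actual code):

```python
# Sketch: why a cached "4000" fails schema validation while 4000 passes.
# validate_max_tokens is a stand-in for the server's JSON-schema check;
# it is not the project's actual code.

def validate_max_tokens(payload: dict) -> None:
    value = payload.get("max_tokens")
    # JSON Schema's "integer" type rejects strings, floats, and booleans.
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError(f"'{value}' is not of type 'integer' - 'max_tokens'")

cached = {"max_tokens": "4000"}   # value restored from the cache as a string
try:
    validate_max_tokens(cached)
except TypeError as e:
    print(e)                      # '4000' is not of type 'integer' - 'max_tokens'

# Coercing before sending sidesteps the stale cached string entirely:
cached["max_tokens"] = int(cached["max_tokens"])
validate_max_tokens(cached)       # now passes
```

This would also explain why the error persists after reverting the setting: the bad string stays in the cache until it is overwritten or cleared.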
I'd like to point out that sometimes, instead of "'4000' is not of type 'integer' - 'max_tokens'", a different error message appears, saying that the token usage exceeds the max tokens.
Do you need logs for this?
Fixed. It was caused by:
Thanks for fixing the string/integer issue!
The error for chat completions still remains.
For example, if I set it to 8000 tokens and ask ChatGPT "Hi", it responds with "Max tokens exceeded".
gpt-4?
Hello
I will continue to monitor. Seems okay for now. Thank you for the good work; my friends will support you by buying more coffee =D
@WongSaang I'm using the latest pull, but I'm still getting the same error: ''3096' is not of type 'integer' - 'max_tokens''.
I have the default value for max tokens (in the enums file) set to 3096, and this issue still occurs on both GPT-3.5 and GPT-4. However, I've noticed that if I just drop it on the front end to 3095 and then back to 3096, it works fine... so I'm not sure what's going on there.
I get the above error message when I use GPT-4.
The logs below are taken from the WSGI server.