Open IbragimovDP opened 11 months ago
When I have a long conversation, I start receiving error 400, most likely because the model's context window is exceeded. I think so because when I switch from the 3.5 model with a 4k window to the 4 model with a 128k window, I can continue the conversation.

Thinking out loud, some options:

One idea: add a tokens_value property to the message object and send the model only the most recent messages that fit into the context window (see the sketch below). The downside is that the chatbot would then "forget" the beginning of the chat.
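A minimal sketch of that "keep only the last messages that fit" idea, assuming messages are plain dicts in the OpenAI chat format and using `tiktoken` for counting; the 4096 limit, the function names, and the example history are illustrative, not the repo's actual API:

```python
import tiktoken


def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Rough token count for a single message's text."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))


def trim_history(messages: list[dict], max_tokens: int = 4096) -> list[dict]:
    """Keep the most recent messages whose combined token count fits the window.

    Walks the history from newest to oldest and stops once the budget is
    exhausted, so the beginning of a long chat is dropped first.
    """
    kept: list[dict] = []
    used = 0
    for message in reversed(messages):
        cost = count_tokens(message["content"])
        if used + cost > max_tokens:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order


# Hypothetical usage: trim a history before sending it to the API.
history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
    {"role": "user", "content": "Tell me about context windows."},
]
print(trim_history(history, max_tokens=200))
```

Note this sketch only counts message text; in practice you would also reserve room for the per-message overhead, the system prompt, and the model's reply, and the real limit depends on which model is selected.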