Open 99Gens opened 4 months ago
Yes @99Gens, I think we need some kind of validation for when the number of tokens exceeds the limit. Since this can only be detected at run-time, it would be better to show a toast message, because the limit depends entirely on the model being used. Catching it statically could be done, but it would take a lot more effort, and with context windows growing it isn't worth it.
litellm.BadRequestError: litellm.ContextWindowExceededError: AnthropicError - {"type":"error","error":{"type":"invalid_request_error","message":"prompt is too long: 209353 tokens > 199999 maximum"}}
Devon should know not to submit a prompt that exceeds x tokens.
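One way to sketch the run-time check suggested above: count (or estimate) the prompt's tokens before submission and fail with a clear error the caller can surface, instead of letting the provider return a `ContextWindowExceededError`. The names below are hypothetical, and the 4-characters-per-token heuristic is only a placeholder; a real implementation would use the model's own tokenizer (e.g. litellm's token counting) and the model's actual maximum.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token). This is an assumption;
    # a real implementation would query the model's tokenizer for an
    # exact count.
    return max(1, len(text) // 4)


class PromptTooLongError(ValueError):
    """Raised before submission when a prompt exceeds the model's window."""


def validate_prompt(prompt: str, max_tokens: int,
                    counter=estimate_tokens) -> int:
    # Check the prompt against the model's limit before calling the API,
    # so the caller can show a message instead of hitting a 400 error.
    n = counter(prompt)
    if n > max_tokens:
        raise PromptTooLongError(
            f"prompt is too long: {n} tokens > {max_tokens} maximum")
    return n
```

The caller would catch `PromptTooLongError` and surface it to the user (e.g. as the toast message discussed above), or truncate/summarize the prompt before retrying.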