The plugin fails during indexing, rendering it unusable. The chunks it sends appear to be too large for the embedding model's context window:
{
"error": {
"message": "This model's maximum context length is 8192 tokens, however you requested 9663 tokens (9663 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
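The error indicates a single chunk was tokenized to 9663 tokens, exceeding the model's 8192-token limit. As a hedged illustration of a possible workaround (not the plugin's actual code), the sketch below pre-splits oversized text before embedding. The token count is approximated as ~4 characters per token, which is an assumption; an exact fix would count tokens with the model's real tokenizer (e.g. tiktoken).

```python
# Hypothetical sketch: keep each chunk under the embedding model's
# 8192-token context window. Tokens are approximated here as ~4
# characters each (assumption); use the model's tokenizer for exact counts.

MAX_TOKENS = 8192
CHARS_PER_TOKEN = 4  # rough heuristic, not the model's real tokenizer
MAX_CHARS = MAX_TOKENS * CHARS_PER_TOKEN


def split_chunk(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split oversized text on paragraph boundaries, falling back to
    hard cuts, so that no piece exceeds max_chars."""
    if len(text) <= max_chars:
        return [text]
    pieces: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        # flush the running chunk if adding this paragraph would overflow
        if current and len(current) + len(para) + 2 > max_chars:
            pieces.append(current)
            current = ""
        # paragraphs longer than the budget get hard-cut
        while len(para) > max_chars:
            pieces.append(para[:max_chars])
            para = para[max_chars:]
        current = current + "\n\n" + para if current else para
    if current:
        pieces.append(current)
    return pieces
```

With a guard like this in the indexing path, an over-long chunk would be embedded as several smaller requests instead of triggering the `invalid_request_error` above.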