adriatic opened this issue 10 months ago
Which model are you using by default?
I followed the Quickstart verbatim, meaning that the model is the GitHub one. The only "personal" choice I made was selecting several fields from GitHub. I do not remember what setting I used at GitHub to allow this sample to access it.
I tried to run that same instance with the prompt *What questions can I ask*. I got back bullet points, and next asked *how to review and approve a pull request*, resulting in:

Got response from `lookUpGitHubKnowledgeBase`:
```
AI.JSX(2004): Fixie API call to https://api.fixie.ai/api/v1/corpora/286b5a7d-2bcd-483f-aef5-acf157c5aea5:query returned status 500: .
This is a runtime error that's expected to occur with some frequency. It may go away on retry. It may be made more likely by errors in your code, or in AI.JSX.
Need help?
* Discord: https://discord.com/channels/1065011484125569147/1121125525142904862
* Docs: https://docs.ai-jsx.com/
* GH: https://github.com/fixie-ai/ai-jsx/issues
```
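Since the error message itself says the 500 "may go away on retry", one workaround on the caller's side is to wrap the flaky call in a retry loop. This is not AI.JSX's or Fixie's built-in behavior, just a minimal sketch of exponential backoff; the helper name `withRetry` and the parameters are hypothetical:

```typescript
// Hypothetical helper: retry a transient-failure-prone async call
// (e.g. the corpus query above) with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off 500 ms, 1 s, 2 s, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

If the failure really is transient, a second or third attempt usually succeeds; if it is caused by a bug, the loop exhausts its attempts and rethrows the last error unchanged.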
Note that I am harping on this issue because it is possible that I found a bug 😄
I think we have partially addressed some of the confusion that we were creating in the Quickstart with this PR that spells out the various types of docs collections and explains that there is a public collection for Git/GitHub.
Have you still been seeing the error WRT max tokens?
No, I did not try anything else, and will try more tomorrow.
When will this fix be "live"? Is it already? (I always ask such questions because there is no information about which fixes are in the current code and docs.)
I deployed the sample https://docs.ai-jsx.com/sidekicks/sidekicks-quickstart to the cloud and asked the question *show me what can you help with*. This resulted in the following error from `lookUpGitHubKnowledgeBase`:

```
This model response had an error: "Error during generation: AI.JSX(1032): OpenAI API Error: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 6069 tokens (5970 in the messages, 99 in the functions). Please reduce the length of the messages or functions. It's unclear whether this was caused by a bug in AI.JSX, in your code, or is an expected runtime error.
```
I have no doubt that I exceeded the token limit, but I am reporting it just to be safe (so that this possible bug is on record).
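For context on the 400 above: the model's window was 4097 tokens, but the accumulated messages came to 6069. A common caller-side mitigation is to drop the oldest messages until the estimate fits. The sketch below is hypothetical (not AI.JSX's actual behavior) and uses the rough "~4 characters per token" heuristic rather than a real tokenizer:

```typescript
// Hypothetical message shape; real chat APIs use similar role/content pairs.
interface Message {
  role: string;
  content: string;
}

// Crude estimate: roughly 4 characters per token (assumption, not exact).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the newest messages that fit within maxTokens; drop the oldest.
function trimToContext(messages: Message[], maxTokens: number): Message[] {
  const kept: Message[] = [];
  let total = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (total + cost > maxTokens) break;
    total += cost;
    kept.unshift(messages[i]); // preserve chronological order
  }
  return kept;
}
```

A real fix would use the model's tokenizer for exact counts and also budget for the function definitions (the 99 tokens mentioned in the error), but the shape of the solution is the same.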
Added later: I reran with a different (but similar) question, *show me what can you help with*, and this time everything went fine. Perhaps this LLM is too smart for me: rerunning the first question, which had resulted in *Error - This model's maximum context length is 4097 tokens*, now responded fine.

Note: I am fascinated by the difference between the answers to the first and second questions. Debugging this seems like a nightmare 😄