fixie-ai / ai-jsx

The AI Application Framework for JavaScript
https://docs.ai-jsx.com
MIT License
1.04k stars · 78 forks

Potential bug in ai.jsx #420

Open adriatic opened 10 months ago

adriatic commented 10 months ago

I deployed the sample https://docs.ai-jsx.com/sidekicks/sidekicks-quickstart to the cloud and asked the question "show me what can you help with". This resulted in the following error from lookUpGitHubKnowledgeBase:

This model response had an error: "Error during generation: AI.JSX(1032): OpenAI API Error: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 6069 tokens (5970 in the messages, 99 in the functions). Please reduce the length of the messages or functions. It's unclear whether this was caused by a bug in AI.JSX, in your code, or is an expected runtime error.

I have no doubt that I exceeded my token limit; I am reporting it just to be safe, in case this is a bug.
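One common way to avoid this class of error is to trim older chat history until the conversation fits the model's context window. The sketch below is hypothetical and not part of AI.JSX: `trimToBudget` and the ~4-characters-per-token estimate are illustrative assumptions (a real implementation would count tokens with the model's actual tokenizer, e.g. tiktoken, and reserve room for the function definitions too).

```typescript
// Hypothetical sketch: keep only as many recent messages as fit a token budget.
// The 4-chars-per-token estimate is a rough heuristic, not the real tokenizer.
type ChatMessage = { role: string; content: string };

const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function trimToBudget(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  // Walk from the newest message backwards, keeping as many as fit the budget.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > maxTokens) break;
    used += cost;
    kept.unshift(messages[i]);
  }
  return kept;
}
```

With a 4097-token model, you would pass a budget somewhat below 4097 so the response itself has room to be generated.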

Added later: I reran with a different (but similar) question, "show me what can you help with", and this time everything went fine:

I can help you with various tasks related to Git and GitHub, including:

- Providing guidance on Git and GitHub workflows.
- Assisting with creating, cloning, and initializing repositories.
- Explaining how to commit changes and manage branches in Git.
- Guiding you through the process of creating and merging pull requests on GitHub.
- Helping you resolve merge conflicts in Git.
- Assisting with configuring and using Git remotes.
- Explaining how to collaborate with others using Git and GitHub.
- Providing information on Git and GitHub APIs and how to use them.
If you have any specific questions or need assistance with a particular task, please let me know, and I'll be glad to help you!

Perhaps this LLM is too smart for me: rerunning it with the first question, which had previously failed with "This model's maximum context length is 4097 tokens", now responded fine.

Note: I am fascinated by the difference between the answers to the first and second questions. Debugging this seems like a nightmare 😄

zkoch commented 10 months ago

Which model are you using by default?

adriatic commented 10 months ago

I followed the Quickstart verbatim, meaning that the model is GitHub. The only "personal" choice I made was selecting several fields from GitHub. I do not remember the settings I configured on GitHub to allow this sample to access it.

adriatic commented 10 months ago

I tried to run that same instance with the prompt "What questions can I ask".

I got back bullet points, and then asked "how to review and approve a pull request",

which resulted in:

Got response from lookUpGitHubKnowledgeBase:

AI.JSX(2004): Fixie API call to https://api.fixie.ai/api/v1/corpora/286b5a7d-2bcd-483f-aef5-acf157c5aea5:query returned status 500: .

This is a runtime error that's expected to occur with some frequency. It may go away on retry. It may be made more likely by errors in your code, or in AI.JSX.

Need help? 
* Discord: https://discord.com/channels/1065011484125569147/1121125525142904862
* Docs: https://docs.ai-jsx.com/
* GH: https://github.com/fixie-ai/ai-jsx/issues
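Since the error message itself says the failure "may go away on retry", a caller can wrap the query in a retry loop with exponential backoff. This is a hypothetical sketch, not AI.JSX or Fixie API code: `withRetry` and `fn` are illustrative names, with `fn` standing in for the corpus query call that returned the 500.

```typescript
// Hypothetical sketch: retry a transient failure (like the 500 above)
// with exponential backoff before giving up.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Wait 250ms, 500ms, 1000ms, ... before the next attempt.
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}
```

A production version would typically retry only on 5xx/transient errors and add jitter to the delay, but the shape above is enough to absorb an intermittent 500.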

Note that I am harping on this issue because it is possible that I found a bug 😄

benlower commented 10 months ago

I think we have partially addressed some of the confusion we were creating in the Quickstart with this PR, which spells out the various types of docs collections and explains that there is a public collection for Git/GitHub.

Are you still seeing the error w.r.t. max tokens?

adriatic commented 10 months ago

No, I have not tried anything else yet; I will try more tomorrow.

When will this fix be "live"? Is it already? (I always have such questions because there is no information about which fixes are in the current code and docs.)