Open alextrzyna opened 1 year ago
Agree there is room for improvement in retrying/feeding error messages back into the model. Inviting the community to contribute PRs; it's out of scope for what I wanted to build personally.
Maybe GPT-5 is good enough not to hallucinate variable names?
Although OpenAI has now made Code Interpreter available to all Plus users, the project is still very cool. One question: is it as powerful as the official plugin?
What kind of work would need to be done to run this against, say, a local LLM with ooba (which has an OpenAI-compatible API)?
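For reference, here is a minimal, hypothetical sketch of pointing an OpenAI client at a local OpenAI-compatible endpoint such as ooba's. The URL, port, and model name are assumptions and may not match how gpt-code-ui is actually configured:

```python
# Hypothetical sketch: talk to a local OpenAI-compatible server (e.g. ooba's
# openai extension) instead of api.openai.com. URL, port, and model name are
# assumptions, not gpt-code-ui settings.
import openai

openai.api_key = "not-needed"                 # most local servers ignore the key
openai.api_base = "http://localhost:5001/v1"  # wherever the local API is served

response = openai.ChatCompletion.create(
    model="local-model",                      # whatever model the server exposes
    messages=[{"role": "user", "content": "Write Python that prints 'hello world'."}],
)
print(response["choices"][0]["message"]["content"])
```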
Working on this one. @ricklamers: I am preparing a pull request with a rough idea in https://github.com/dasmy/gpt-code-ui/tree/dev/conversation_history. Then we can discuss if and how my approach fits into the overall picture.
@dasmy I have two ideas in mind:
Please check that this feature request hasn't been suggested before.
Feature description
First of all, really cool project! I found gpt-code-ui when looking for an alternative to Code Interpreter/Notable that I could run locally.
I have noticed that gpt-code-ui is not as resilient to mistakes in the code it generates as something like ChatGPT with the Noteable plugin. For example, if gpt-code-ui makes a mistaken assumption about the name of a dataframe row in code that it generates, execution fails and it gives up, whereas ChatGPT with Noteable is more likely to proactively inspect the results and attempt to fix them.
Solution
Instead of just outputting the errors associated with a failed execution, proactively inspect the error and attempt a fix/re-run.
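A rough sketch of what such a retry loop could look like; `generate_code` and `execute` are hypothetical stand-ins for the model call and kernel execution, not functions from the gpt-code-ui codebase:

```python
# Hypothetical sketch of an error-feedback retry loop; generate_code() and
# execute() stand in for gpt-code-ui's model call and kernel execution.
MAX_ATTEMPTS = 3

def run_with_retries(prompt, generate_code, execute):
    """Generate code, run it, and feed any traceback back to the model."""
    code = generate_code(prompt)
    for _ in range(MAX_ATTEMPTS):
        result, error = execute(code)  # error is None on success
        if error is None:
            return result
        # Ask the model to repair its own code using the captured traceback.
        code = generate_code(
            f"{prompt}\n\nThe previous attempt failed with:\n{error}\n"
            "Please return a corrected version of the code."
        )
    raise RuntimeError(f"Still failing after {MAX_ATTEMPTS} attempts:\n{error}")
```

Capping the number of attempts keeps a persistently broken prompt from looping forever, and surfacing the last traceback preserves the current behaviour as the fallback.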
Alternatives
No response
Additional Context
No response
Acknowledgements