Closed: emsi closed this issue 1 year ago
How frequently is this happening?
I've had this issue before early on in development. I think this can be addressed a few ways:

1. ensuring there is a correct example for that endpoint in the OpenAPI spec
2. updating the error handling such that the response is more informative for the model (it will usually self-correct and retry the request)
3. updating the `description_for_model`

It happens to me pretty much every time. I think it started after https://github.com/iamgreggarcia/codesherpa/pull/19, but I might be wrong.

The worst part is that it does exactly the same thing many times until it finally says that there was a technical error and it cannot continue. Each of those requests ends with:

ApiSyntaxError: Could not parse API call kwargs as JSON: exception=Invalid control character at: line 3 column 10 (char 33) url=http://localhost:3333/repl

I think the error message is not informative enough for the model to figure out what it (or he... I'm still confused by that ;) is doing wrong, so it just tries again (pure madness).

I'll look into each of these, though I think #2 is addressed in an upcoming update.
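For fix #2, one way to make the parse error more actionable is to catch the `JSONDecodeError` and tell the model exactly what to change. This is only a sketch of the idea (the handler name and wording are mine, not codesherpa's actual code):

```python
import json

# Sketch (not the project's actual handler): parse the request body
# ourselves so the model gets a pointed hint instead of a bare
# "Invalid control character" message it cannot act on.
def parse_code_payload(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        if "Invalid control character" in e.msg:
            raise ValueError(
                f"{e.msg} at line {e.lineno}, column {e.colno}. "
                "Raw newlines are not valid inside JSON strings; "
                "escape them as \\n inside the \"code\" value and retry."
            ) from e
        raise
```

A hint like this gives the model something concrete to self-correct against on the retry.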
Thank you. What do you mean by "upcoming update"? Is it somewhere on GitHub? Could I give it a try? I'm pretty comfortable with git.
Of course! You can pull the `test` version of the docker image here: ghcr.io/iamgreggarcia/codesherpa:test, which corresponds to the `test` branch you can find here: codesherpa/tree/test
I tested it out and it's working well again. In short, I added some response models to API endpoints that are now automatically incorporated into the OpenAPI spec. The API spec is looking much better.
Let me know if you run into any issues or if the same problem persists.
> The worst part is that it does exactly the same thing many times until it finally says that there was a technical error and it cannot continue.
I've encountered this in the past. The model should dynamically back off when it gets 429s/500s in a short period of time. Adding rate limiting on our end could also help.
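A client-side backoff loop like the one described could look something like this. It's a minimal sketch under my own assumptions (a `request_fn` returning a status/body pair); the actual plugin/model retry behavior may differ:

```python
import random
import time

# Sketch of exponential backoff with jitter: retry while the server
# answers 429 (rate limited) or 500, waiting longer after each failure.
def call_with_backoff(request_fn, max_attempts=5, base_delay=0.5):
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status not in (429, 500):
            return status, body
        if attempt < max_attempts - 1:
            # wait 0.5s, 1s, 2s, ... plus a little jitter before retrying
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))
    return status, body
```

Compared with the current behavior (immediate identical retries until giving up), this spaces out the attempts and stops after a bounded number of tries.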
Unfortunately I keep getting the same problem with the `test` branch. Oddly enough, adding `Make sure to use \n instead of new line character inside "code": request.` at the end of the prompt seems to suffice :)
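That workaround tracks with the JSON spec: raw control characters such as newlines are illegal inside string literals, which is exactly what the `ApiSyntaxError` complains about. A quick demonstration in Python:

```python
import json

raw = '{"code": "import math\nprint(math.pi)"}'       # literal newline: invalid JSON
escaped = '{"code": "import math\\nprint(math.pi)"}'  # \n escape sequence: valid JSON

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    print("rejected:", e.msg)  # "Invalid control character ..."

payload = json.loads(escaped)
print(payload["code"])  # the \n escape becomes a real newline after parsing
```

So the prompt addition is steering the model to emit the second form instead of the first.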
Unfortunately, for some reason the plot is not working: the result is empty :(
Let's try again. I've added even more explicit instructions for the model and have been able to get it to create and embed visualizations. The problem in your latest screenshot is that the model did not save the visualization, but rather called the `.show()` method. To see images with the plugin, however, the model must embed an image file in its response markdown. The updated `description_for_model` should fix this.
Download the latest image (sha256:24b314274a3e4e66175afff8d71782ea82936cd0397a0f67679dcbb93c0ba346) with the `test` tag here: ghcr.io/iamgreggarcia/codesherpa:test. When running the API, you can see the updated `description_for_model` in the Swagger UI at localhost:3333/docs.
Here's an example of a visualization embedded in the response (from a little bit ago):
Note the last bit of code:
# Save the plot
generated_plot_path = 'static/images/correlation_heatmap.png'
plt.savefig(generated_plot_path)
The `.savefig` call is what we want 😃
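Once the file is saved, the model just needs to emit a markdown image link pointing at it. A hypothetical helper (the function name is mine, and I'm assuming the API serves the `static/` directory on the same host/port) shows the shape of the markdown the plugin needs:

```python
# Hypothetical helper: turn the path passed to plt.savefig into the
# markdown image link the model should embed in its response, assuming
# static/ is served at the API's base URL.
def embed_markdown(saved_path: str, base_url: str = "http://localhost:3333") -> str:
    return f"![plot]({base_url}/{saved_path})"

print(embed_markdown("static/images/correlation_heatmap.png"))
# ![plot](http://localhost:3333/static/images/correlation_heatmap.png)
```

`.show()` opens a window on the server side, which the chat UI can never display; only an embedded link to a saved file renders in the response.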
Hope that helps!
Thank you for your effort! Now it worked right off the bat! GPT even figured out it needs to use `/command` to install a missing package.
Oddly enough it still insisted on calling `plt.show()` as well, but that's hardly an issue :)
# Save the plot
plt.savefig('static/images/venn_diagram.png')
# Display the plot
plt.show()
Awesome! Glad it seems to be working again. I've merged the changes into main. The `latest` tagged image now has the changes.
Recently I quite often get this issue where ChatGPT tries to send a query with code spanning multiple lines. That's weird, because not long ago it used to work flawlessly: even if the code was first displayed with formatting, it was passed to the API in the correct format. Not sure if this can be addressed with the prompt, or perhaps we could make the API resilient to it?
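On the resilience idea: one possible server-side accommodation (an assumption on my part, not something codesherpa currently does) is that Python's `json.loads` accepts literal control characters inside strings when called with `strict=False`, so a multi-line `"code"` value would parse instead of raising the `ApiSyntaxError`:

```python
import json

# Raw newlines inside the "code" string, as the model sometimes sends them.
body = '{"code": "for i in range(3):\n    print(i)"}'

# strict=False tells the parser to tolerate literal control characters
# inside strings, so this parses instead of erroring out.
payload = json.loads(body, strict=False)
print(payload["code"])
```

The trade-off is that this silently accepts technically invalid JSON, so it may be better as a fallback after a strict parse fails.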