Currently the inference experience is straightforward and manual.
You enter a prompt and you either get working code that runs or you get an error.
If you want to debug the error, you have to manually extract the generated script and the stderr from the execution, then re-submit both with a new prompt asking the model to troubleshoot and provide a fully working, updated version of the script.
It would be ideal to have a built-in mechanism that handles this for you through a button click.
This would allow for quick debugging on the fly: the model either infers the error from the stack trace (if there is one) or is first prompted for a brief explanation of the issue before attempting a fix.
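A minimal sketch of what that button click might do, assuming a Python backend: re-run the failing script, capture its stderr, and fold both into a follow-up prompt. The function names (`build_debug_prompt`, `run_script`) are hypothetical, not part of the current application.

```python
import subprocess
import sys
import tempfile

def build_debug_prompt(script: str, stderr: str) -> str:
    """Combine the failing script and its stderr into a follow-up prompt
    asking the model to troubleshoot and return a working version."""
    return (
        "The following script failed to run.\n\n"
        f"--- script ---\n{script}\n\n"
        f"--- stderr ---\n{stderr}\n\n"
        "Explain the error briefly, then provide a fully working, "
        "updated version of the script."
    )

def run_script(script: str) -> tuple[int, str]:
    """Execute a generated script in a subprocess and capture stderr."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True)
    return proc.returncode, proc.stderr

# One click of the hypothetical debug button:
script = "print(1 / 0)\n"
code, stderr = run_script(script)
if code != 0:
    followup = build_debug_prompt(script, stderr)  # re-submit to the model
```

The key point is that the script and stderr are captured automatically, so the user never copies them by hand.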
Scripting vs Interactive Modes
Scripting
This is the standard (and currently only) way to use the application, and it follows a very simple chain:
PROMPT -> SCRIPT -> EXECUTION -> RESULT
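The chain above can be sketched in a few lines of Python. `generate_script` is a stand-in for the real model call (hypothetical); `execute` runs the generated code and returns its stdout as the result.

```python
import subprocess
import sys
import tempfile

def generate_script(prompt: str) -> str:
    """Stand-in for the model call; a real backend would return code
    generated from the prompt."""
    return 'print("hello from the generated script")\n'

def execute(script: str) -> str:
    """Run the generated script and capture its stdout as the result."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True)
    return proc.stdout

# PROMPT -> SCRIPT -> EXECUTION -> RESULT
result = execute(generate_script("print a greeting"))
```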
Interactive
Under this mode the model would have access to tools (through LangChain) that let it perform tasks such as opening files, summarizing, documenting, or creating unit tests for code-- the applications are endless.
PROMPT -> STEPS -> SCRIPT -> EXECUTION -> STEPS -> SCRIPT -> STEPS -> SCRIPT...
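The step-dispatch loop above can be sketched as a plain tool registry; under LangChain these would be proper Tool objects, so everything here (the registry, `run_steps`, the toy `summarize`) is an assumption for illustration.

```python
import tempfile
from pathlib import Path

def open_file(path: str) -> str:
    """Tool: read a file's contents."""
    return Path(path).read_text()

def summarize(text: str) -> str:
    """Tool: toy stand-in that keeps only the first line; a real tool
    would ask the model for a summary."""
    return text.splitlines()[0] if text else ""

# Hypothetical tool registry; LangChain would supply real Tool wrappers.
TOOLS = {"open_file": open_file, "summarize": summarize}

def run_steps(steps):
    """Dispatch (tool_name, argument) steps; an argument of None means
    'use the previous result', which is how each step in the chain
    feeds into the next."""
    result = None
    for name, arg in steps:
        result = TOOLS[name](arg if arg is not None else result)
    return result

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("first line\nsecond line\n")
    path = f.name

# STEPS: open a file, then summarize what was read.
summary = run_steps([("open_file", path), ("summarize", None)])
```

The design choice worth noting is that the model, not the user, decides the sequence of steps, which is what distinguishes interactive mode from the single-shot scripting chain.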