It would be amazing if please-cli could be enhanced with context information.
I just did this, and please didn't know anything about the fact that its previous invocation errored:
please-cli on fix-case in ☸ hetzner (prev-main) on ☁️ (us-east-1)
❌1 ❯ please do something fun, but make sure first that the tool you use is actually installed on my system
💡 Command:
command -v cowsay >/dev/null 2>&1 || { echo >&2 "cowsay is not installed. Aborting."; exit 1; }
cowsay "Hello, world!"
❗ What should I do? [use arrow keys or initials to navigate]
> [I] Invoke [C] Copy to clipboard [Q] Ask a question [A] Abort
Executing ...
cowsay is not installed. Aborting.
please-cli on fix-case in ☸ hetzner (prev-main) on ☁️ (us-east-1) took 6s
❌1 ❯ please fix the issue
💡 Command:
echo 'I do not know. Please rephrase your question.'
Would be great if please at least knew about past commands that were issued to it, so it could iterate on them. In this case, if it knew the output was "cowsay is not installed", it should have suggested "pipx -g install cowsay" or something like that.
A stretch goal could be to decide, based on the exit code, whether to automatically rerun please on the output and suggest a fix.
Of course, all of these features would be opt-in - no need to force them on existing users. But I would love to be able to iterate quickly. With gpt-4o this could work really well (it's a lot faster).
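For illustration, here is a minimal sketch of what the opt-in context capture could look like as a shell wrapper. The function name `run_and_record` and the log path are hypothetical, not part of please-cli; the idea is just to record the executed command's combined output and exit code so a follow-up invocation could include that transcript in the prompt.

```shell
# Hypothetical opt-in helper (names are illustrative, not part of please-cli):
# run the suggested command while recording its combined output and exit code,
# so a later "please fix the issue" could pass that transcript as context.
run_and_record() {
  local log=/tmp/please_last_run
  "$@" >"$log" 2>&1          # capture stdout and stderr of the suggested command
  local status=$?
  printf 'exit code: %d\n' "$status" >>"$log"
  return "$status"
}

# Simulate the failing cowsay check from the transcript above
# (a deliberately missing command name guarantees the failure branch).
run_and_record sh -c \
  'command -v definitely-not-installed-xyz >/dev/null 2>&1 || { echo "cowsay is not installed. Aborting." >&2; exit 1; }' \
  || true
```

After a run like this, /tmp/please_last_run would contain both the error message and the exit code, which is exactly the context the next please invocation is currently missing.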