Closed guruk closed 1 year ago
I was having a similar frustration while testing the addition of the code scanner, but ended up finding some prompting approaches that would reduce the noise.
Here’s an example I used to speed up responses and reduce token usage during testing:
Solve FizzBuzz for 0 through 17. Don't explain the code or tell me your process or how FizzBuzz works. Just generate the code so we can execute it.
It can depend on which model you’re using, though. GPT-4 took this prompt and performed as expected.
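For reference, the kind of bare, no-commentary answer the prompt above is meant to elicit would look something like this (a minimal sketch; note that 0 is divisible by both 3 and 5, so the range starts with "FizzBuzz"):

```python
def fizzbuzz(n: int) -> str:
    # Standard FizzBuzz mapping; 0 % 3 == 0 and 0 % 5 == 0, so 0 -> "FizzBuzz".
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Solve FizzBuzz for 0 through 17, as in the test prompt above.
for n in range(18):
    print(fizzbuzz(n))
```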
You can see the output in a recording in this PR comment
Hey there, @guruk!
Did you have any luck with the suggestions?
Do we still need this Issue?
In convert_to_openai_messages.py it tells the LLM that the user can't see the output; maybe that's why it does that.
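To illustrate the idea: if the system message tells the model the user can't see the program output, the model will dutifully restate it in prose. A counter-instruction appended to the system message is one way to suppress that. This is a hypothetical sketch only; the function name and all prompt strings here are illustrative assumptions, not the project's actual wording:

```python
# Hypothetical sketch of how a system message like the one referenced in
# convert_to_openai_messages.py could cause re-explained terminal output,
# and how a counter-instruction might suppress it. All strings here are
# assumptions for illustration, not the project's real prompt text.

def build_messages(user_prompt: str, suppress_explanations: bool) -> list:
    system = "You can run code. The user can't see the program's output."
    if suppress_explanations:
        # The kind of instruction a user might add to avoid double listings.
        system += " Do not restate or explain terminal output the user already saw."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("ls", suppress_explanations=True)
```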
I'm going to close this one as stale, but feel free to reopen if you feel like the issue still exists and no suggestions have helped.
Is your feature request related to a problem? Please describe.
ls
Would you like to run this code? (y/n). y
ls
index.html  <- thanks, here I got my answer.
The file in the "http" directory is:  <- here it explains to me again what the terminal already responded. • index.html (possibly a file for a web service)
--
Especially if there are many files in a folder, this can be annoying, although it is also helpful at times when an explanation is wanted.
I would like to disable this double listing and then explaining of the files in the folder.
I tried "show me the terminal response, but do not explain it" ... it doesn't work. After ls it shows me the terminal output, which is enough. Then it starts to explain each file.
Describe the solution you'd like
That it understands "don't explain the terminal feedback".
Describe alternatives you've considered
No response
Additional context
No response