Closed by mountaineerbr 1 year ago
Perfect @mountaineerbr
Thanks for the fast response
Can I set session as the default startup option?
This patch may really require a bit more testing. About setting session mode as a startup option: as this is just a testing patch, there is no obvious way to do that other than entering the session command. However, you can start the script as session_set=1 chatgpt.sh
to get that set at startup. Cheers!
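If you want session mode to be the default every time, one way is to bake the environment variable into a shell alias (a sketch; the alias name chatgpt is illustrative, and it assumes the script honors session_set as described above):

```shell
# One-off: pass the variable on the command line, as suggested above:
#   session_set=1 chatgpt.sh
# To make it the default, wrap the script in an alias, e.g. in ~/.bashrc:
alias chatgpt='session_set=1 chatgpt.sh'
```

After sourcing your shell config, running chatgpt starts the script with session mode already enabled.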
Thanks, that's great.
That seems to work fine.
Also, one more thing: if image opening defaults to the browser (which I prefer anyway), wouldn't it be better to skip the y/n question? What else would I want to do? Of course, open the created image in the default browser!
Nice work @mountaineerbr ! Thanks for the PR! Is this working perfectly? I used your approach for the Q&A and the prompt trimming in my PR, nice stuff! 👍
@se7en-x230 About the image: in my use case, because I use iTerm and imgcat, the image is displayed in the terminal by default, so I usually answer no to the question unless I want to save the image. If you want to directly open the image in your browser after generating it, you can remove this code
if [[ "$TERM_PROGRAM" == "iTerm.app" ]]; then
    curl -sS "$image_url" -o temp_image.png
    imgcat temp_image.png
    rm temp_image.png
else
    echo "Would you like to open it? (Yes/No)"
    read -r answer
    if [ "$answer" == "Yes" ] || [ "$answer" == "yes" ] || [ "$answer" == "y" ] || [ "$answer" == "Y" ] || [ "$answer" == "ok" ]; then
        open "${image_url}"
    fi
fi
and replace it with xdg-open "${image_url}"
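For reference, the replacement could be made portable across platforms with a small wrapper (a sketch, not the actual patch; it assumes image_url is set earlier in the script, uses xdg-open on Linux, and falls back to open on macOS):

```shell
# open_image: open the generated image URL directly in the default
# browser/viewer, skipping the Yes/No prompt entirely.
open_image() {
    if command -v xdg-open >/dev/null 2>&1; then
        xdg-open "$1"    # Linux desktop opener
    else
        open "$1"        # macOS opener
    fi
}

# The whole Yes/No block above then collapses to a single call:
#   open_image "${image_url}"
```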
Nice work @mountaineerbr !
Hey @0xacx . The patch is really just an idea of how such a feature can be implemented.
I also get an empty reply sometimes; it seems to have something to do with the temperature setting in those cases (probably just try setting the temperature to 0; the script defaults to 0.7). The Q&A format is a suggestion because there is an example in the API docs that uses Q&A.
It seems that extra newlines or spaces at the end of input text may make the reply empty. May want to remove them https://community.openai.com/t/empty-text-in-the-response-from-the-api-after-few-calls/2067/11
... or even add one ending space: https://community.openai.com/t/api-call-return-empty-but-works-in-playground/17734/4
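If trailing whitespace is indeed the cause, stripping it from the user input before sending it is cheap insurance (a sketch using POSIX parameter expansion; the variable name prompt stands in for whatever holds the input in the script):

```shell
# Trailing spaces/newlines at the end of the prompt may produce empty
# replies, so strip them before the API call (no subshell needed).
nl='
'
prompt="how are you?  $nl"    # sample input with trailing junk
prompt="${prompt%"${prompt##*[![:space:]]}"}"
echo "[$prompt]"              # → [how are you?]
```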
I have done some testing.
With session_set=1, I had a conversation over about 20 minutes with around 20-25 follow-up questions, including long questions and trailing spaces at the end of the question. No crashes at all.
Starting with the -c option: random crashes.
Could be pure luck, but it should be tested further.
I just merged the changes that enable chat context. I used @mountaineerbr 's approach on building the chat context in memory but also added a prompt to guide the model and enabling with a single flag when starting the script. It seemed to work consistently but let me know if you run into errors. Thanks for the help! 🙏
Well, that is very weird, as a pre-prompt may trigger English as the default language, and there may be some users who prefer to use another language.
In my tests the model replies in the language that you ask the question in. But even if that happens a few times, I think it's a small price to pay for having a better chat experience overall, especially considering the majority of users use English. Thank you for the suggestions and help in the PR! 🙏
Add a session option for follow-up questions, as per Issue #16.
A new command, session, and its toggle, session!, are added. It starts a new chat session in which previous Q&A context is buffered and sent with new prompts. We try to respect $MAX_TOKENS by removing the oldest text from $session_prompt.
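The buffering described above can be sketched roughly like this (not the actual patch; the function name, the tiny token budget, and the ~4-characters-per-token estimate are all illustrative):

```shell
# Rough sketch of the session buffer: append each Q&A pair to
# session_prompt, then drop the oldest line while the estimated
# token count exceeds MAX_TOKENS (~4 chars per token is a common
# rough estimate).
nl='
'
MAX_TOKENS=15    # tiny budget so the trimming actually kicks in

session_prompt=""
add_to_session() {
    session_prompt="${session_prompt}Q: $1${nl}A: $2${nl}"
    while [ $(( ${#session_prompt} / 4 )) -gt "$MAX_TOKENS" ]; do
        session_prompt="${session_prompt#*"$nl"}"    # drop oldest line
    done
}

add_to_session "first question" "first answer"
add_to_session "second question" "second answer"
printf '%s' "$session_prompt"
```

With this toy budget the oldest line is dropped after the second pair, so the buffer that gets sent with the next prompt stays under the limit.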