Open AndreaPi opened 1 year ago
Hi, thank you for your feedback! I think these are all very useful enhancements.
I just implemented 3 and 4, the temperature and max_tokens parameters, in the latest commit ( https://github.com/marcolardera/chatgpt-cli/commit/362baded219d3000086ae7321e9fa3c661b54619 ).
1 is easy, I will work on it as soon as I have a bit of time.
2 also seems like a cool feature, but I need to study a bit how to render the streaming response in the console.
Great! Looking forward to the implementation of 1 and 2. Regarding the latter, I understand it's a bit more complicated, but it would really enhance usability a lot. As for rendering, since you use rich (good choice 👍), this could help
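For reference, here is a minimal sketch of how rich's Live display could render a streaming reply, updating the console as tokens arrive. The token stream is simulated here; in the CLI it would come from the API response with stream enabled. Function names (fake_stream, render_stream) are illustrative, not from the project.

```python
# Sketch: rendering a streaming response with rich's Live display.
# The token stream is simulated; a real one would come from the API
# when the request is made with stream=True.
import time

from rich.live import Live
from rich.markdown import Markdown


def fake_stream():
    # Stand-in for the chunks yielded by a streaming API response.
    for token in ["Streaming ", "tokens ", "render ", "as ", "they ", "arrive."]:
        yield token
        time.sleep(0.05)


def render_stream(chunks):
    # Accumulate the text and re-render it on every chunk, so the
    # partial reply is visible while the model is still generating.
    text = ""
    with Live(Markdown(text), refresh_per_second=10) as live:
        for chunk in chunks:
            text += chunk
            live.update(Markdown(text))
    return text


if __name__ == "__main__":
    render_stream(fake_stream())
```

Rendering the accumulated text as Markdown on each update keeps formatting (code blocks, lists) correct even while the reply is incomplete.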
Hi,
I like your minimalistic approach a lot! But the lack of a few configurable parameters made me switch to https://github.com/j178/chatgpt. If you could add the following parameters to the yaml file, I would be happy to switch back! Basically, this means adding the following functionalities:
- `--context <FILE PATH>`
- `stream`: allows the tokens to be sent as they become available, instead of all at once at the end of the reply. This makes quite a difference with long responses and slower models such as GPT-4.
- `max_tokens`: is self-explanatory 🙂 and it also makes quite a difference when using GPT-4.
- `temperature`: set to 0, it allows deterministic responses (fundamental for reproducibility). From 0 to 2, it allows increasingly more creative but also less focused responses.

These are very simple modifications: you just need to read them from the yaml file and add them as extra parameters when posting the request. Thanks!
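To illustrate, a sketch of merging the optional parameters from the loaded YAML config into the request payload. The key names (model, temperature, max_tokens, stream) follow the OpenAI chat completions API; the config structure and the build_payload helper are assumptions for illustration, not the project's actual code.

```python
# Sketch: adding the optional YAML parameters to the request body.
# The config dict stands in for the result of yaml.safe_load on the
# config file; build_payload is a hypothetical helper.

API_ENDPOINT = "https://api.openai.com/v1/chat/completions"


def build_payload(config, messages):
    """Build the JSON body for the completion request.

    Optional keys are only included when present in the config, so the
    API defaults still apply when they are omitted from the YAML file.
    """
    payload = {
        "model": config.get("model", "gpt-3.5-turbo"),
        "messages": messages,
    }
    for key in ("temperature", "max_tokens", "stream"):
        if key in config:
            payload[key] = config[key]
    return payload


if __name__ == "__main__":
    config = {"model": "gpt-4", "temperature": 0, "max_tokens": 500, "stream": True}
    print(build_payload(config, [{"role": "user", "content": "Hello"}]))
```

Including the keys only when they are set in the YAML file means users who don't configure them keep the API's default behaviour.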