I think this is an API limitation, but I'm wondering whether there is a way to maintain the state/context across API calls, like how the actual ChatGPT keeps each session, so that we don't have to repeat what was prompted previously? Basically, maintaining historical context.
I notice that each function call has a model parameter; I'm assuming we can just pass something like gpt4 (or whatever model) and it should work?
Yes, this is due to how OpenAI allows interaction with the model via the API. If you chat with ChatGPT today and tell it a few facts, it won't "remember" them if you start a new session a week later. You can, however, maintain a "within-session" context, the same way you can ask ChatGPT about previous answers within a session. This is something I have not yet had the time to test and implement.
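To illustrate the idea (a minimal sketch, not this package's actual implementation): the API itself is stateless, so a "within-session" context has to be maintained client-side by resending the accumulated message history with every request. The `ChatSession` class and its method names below are hypothetical; only the `{"role": ..., "content": ...}` message shape is taken from the OpenAI chat-completions API.

```python
# Sketch: client-side conversation state for a stateless chat API.
# Each request payload carries the *entire* history so far, which is
# how the model "remembers" earlier turns within a session.

class ChatSession:
    """Accumulates messages so every request includes prior turns."""

    def __init__(self, system_prompt="You are a helpful assistant."):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        # Store the model's reply so the next request can refer back to it.
        self.messages.append({"role": "assistant", "content": text})

    def request_payload(self, model="gpt-3.5-turbo"):
        # The whole history is sent on every call, so token usage
        # grows with each turn until older messages are trimmed.
        return {"model": model, "messages": list(self.messages)}


session = ChatSession()
session.add_user("My name is Ada.")
session.add_assistant("Nice to meet you, Ada!")
session.add_user("What is my name?")
payload = session.request_payload()  # 4 messages: system + 3 turns
```

Note that nothing here persists across program runs; restoring context a week later would additionally require saving `session.messages` to disk and reloading it.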
In principle: yes. GPT-4 (and its model-parameter counterpart) may come with some additional intricacies (as was the case for the GPT-3.5 to ChatGPT transition). GPT-4 support will come soon.
Hey, thanks for this package. I have two questions:
Thanks.