Summary 💡

As a user, I want to increase the token limit per API call so that I can get longer responses from GPT.
Examples 🌈
Raising this to 8k, for example.
Motivation 🔦
The token limit is currently set by `cfg.fast_token_limit`, which is about 4k. This is hardcoded in `main.py` in the current version:
```python
# Send message to AI, get response
with Spinner("Thinking... "):
    assistant_reply = chat.chat_with_ai(
        prompt,
        user_input,
        full_message_history,
        memory,
        cfg.fast_token_limit)  # TODO: This hardcodes the model to use GPT3.5. Make this an argument
```
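One way the limit could be made configurable is to read it from an environment variable with a fallback to the current ~4k default. This is only a sketch; the variable name `FAST_TOKEN_LIMIT` and the helper `get_token_limit` are assumptions, not existing Auto-GPT settings:

```python
import os

def get_token_limit(default: int = 4000) -> int:
    """Hypothetical helper: read the per-call token limit from the
    FAST_TOKEN_LIMIT environment variable, falling back to `default`
    (the current ~4k hardcoded value) if it is unset or invalid."""
    raw = os.getenv("FAST_TOKEN_LIMIT", "")
    try:
        return int(raw) if raw else default
    except ValueError:
        return default
```

The returned value could then be passed to `chat.chat_with_ai` in place of `cfg.fast_token_limit`, letting users opt into 8k without a code change.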