bramses / chatgpt-md

A (nearly) seamless integration of ChatGPT into Obsidian.
MIT License
824 stars · 61 forks

The response cuts off after a small amount of text, so you have to keep asking it to continue. Can it continue automatically and collate the output (excluding the text typed just to make it continue)? #27

Closed wlioi closed 1 year ago

wlioi commented 1 year ago

hello~ The response doesn't show much at a time; you must type something and ask ChatGPT to continue. Can it continue automatically and quickly collate the output (excluding the text typed just to make it continue)?

This is especially noticeable in Chinese, where it feels like less content is displayed at a time than in English.

lukemt commented 1 year ago

@wlioi try increasing max_tokens in the YAML frontmatter. For example, you can set it to 4000. Please comment whether this works or not.
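As a sketch of what that frontmatter could look like: `max_tokens` is the key discussed in this thread, while the other keys and values here are illustrative and may differ by plugin version.

```yaml
---
# ChatGPT MD note frontmatter (sketch): max_tokens caps the response length.
model: gpt-3.5-turbo
max_tokens: 4000
temperature: 0.7
---
```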

lukemt commented 1 year ago

@wlioi Do you know how to change max_tokens or do you need further assistance?

Also: it's better to set max_tokens to 2000, because for davinci the maximum is 4096 tokens for the entire conversation (request + response), while max_tokens limits only the response.
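The budget arithmetic above can be sketched as follows; the 4096 limit is the davinci context window mentioned in this thread, and the token counts are illustrative rather than measured with a tokenizer.

```python
# Sketch: how the 4096-token context window is shared between the
# prompt (the whole conversation so far) and the completion.

MODEL_CONTEXT_LIMIT = 4096  # total tokens: prompt + response, for davinci


def max_response_tokens(prompt_tokens: int,
                        context_limit: int = MODEL_CONTEXT_LIMIT) -> int:
    """Largest max_tokens value that still fits in the context window."""
    return max(context_limit - prompt_tokens, 0)


# If the conversation so far uses ~2000 tokens, max_tokens=2000 still fits,
# but max_tokens=4000 would overflow the window and the API would reject it.
budget = max_response_tokens(2000)
```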

bramses commented 1 year ago

@wlioi yeah, as @lukemt said, increasing max_tokens is a great first step! It gives the model more "room to play" in a single completion -- check out the FAQ for more details.

As for collating content, there would still need to be a simulated user message saying "keep going"; ChatGPT won't do that on its own. ChatGPT MD simply makes an API call with the messages in your MD file and streams the results.
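Conceptually, what the plugin does can be sketched like this: parse role-tagged blocks out of a Markdown note and send them as a Chat Completions payload with streaming enabled. The `role::` delimiter below is a hypothetical convention for this sketch, not ChatGPT MD's actual syntax.

```python
# Sketch: turn a Markdown note into a Chat Completions request payload.
# The "user::" / "assistant::" delimiters are hypothetical, for illustration.

def md_to_messages(md_text: str) -> list[dict]:
    """Split a note into {'role', 'content'} messages on 'role::' lines."""
    messages, role, lines = [], None, []
    for line in md_text.splitlines():
        if line.strip() in ("user::", "assistant::", "system::"):
            if role is not None:
                messages.append(
                    {"role": role, "content": "\n".join(lines).strip()})
            role, lines = line.strip().rstrip(":"), []
        else:
            lines.append(line)
    if role is not None:
        messages.append({"role": role, "content": "\n".join(lines).strip()})
    return messages


note = """user::
Summarize this note.
assistant::
Here is a summary..."""

payload = {
    "model": "gpt-3.5-turbo",
    "messages": md_to_messages(note),
    "max_tokens": 2000,
    "stream": True,  # tokens are streamed back into the note as they arrive
}
```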

wlioi commented 1 year ago

I've tried different max_tokens values, but it seems like this only changes how much content ChatGPT accepts when I send it, not how much it returns.


bramses commented 1 year ago

@wlioi ChatGPT does look for optimal stopping points, often before reaching the full length. Imagine you have a cup of a certain size, but you only need half the cup to fit the water you need.

Another (maybe better) analogy -- run-on sentences. Some sentences end quickly. Others go on, and on, and on, and on, and on, and on, and on, and on, and on...

wlioi commented 1 year ago

> @wlioi ChatGPT does look for optimal stopping points, often before reaching the full length. Imagine you have a cup of a certain size, but you only need half the cup to fit the water you need.
>
> Another (maybe better) analogy -- run-on sentences. Some sentences end quickly. Others go on, and on, and on...

@bramses I don't think so, because answers always end in the middle of a sentence. What I mean is this: max_tokens only changes how much I can give ChatGPT, not how much ChatGPT gives me.

bramses commented 1 year ago

@wlioi Hmm, that's probably a GPT thing. Try playing with the parameters stop, frequency_penalty, and presence_penalty, and see if that helps: https://platform.openai.com/docs/api-reference/chat/create
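For reference, a request payload including those stopping-related parameters might look like the sketch below; the parameter names come from the Chat Completions API reference linked above, while the values are illustrative.

```python
# Sketch: request parameters that influence where and how the model stops.
# stop, frequency_penalty, and presence_penalty are documented Chat
# Completions parameters; the values here are only examples to tweak.

request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Continue the essay."}],
    "max_tokens": 2000,
    "stop": None,              # or e.g. ["###"] to cut off at a stop sequence
    "frequency_penalty": 0.2,  # > 0 discourages verbatim repetition
    "presence_penalty": 0.2,   # > 0 nudges the model toward new topics
}
```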