rendezqueue / rendezllama

CLI for llama.cpp with various commands to guide, edit, and regenerate tokens on the fly.
ISC License

feat(option): to end confidant messages with EOS token #4

Closed: grencez closed this issue 1 year ago

grencez commented 1 year ago

When llama.cpp's main example runs in "instruct" mode, an end-of-sequence (EOS) token is placed after each chatbot response. I'm not sure whether this is only useful for Alpaca models, but it seems worth trying.

Let's switch into this mode when newline is configured as a sentence terminal.
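For context, an instruct-mode transcript for an Alpaca-style model looks roughly like the sketch below, with llama.cpp appending the EOS token (rendered here as `</s>`) after each response. The prefixes and spacing are approximate, not copied from llama.cpp's source.

```text
### Instruction:
<user message>

### Response:
<chatbot message></s>
```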

grencez commented 1 year ago

This doesn't really make sense, especially given how we're doing the instruction-following format (https://github.com/rendezqueue/rendezllama/issues/5#issuecomment-1528479960).

grencez commented 1 year ago

Re-opening this as a "keep the EOS delimiter token" feature and supporting it in the Alpaca format.

grencez commented 1 year ago

So I guess we want to keep EOS tokens intact all the time and insert one when appropriate.
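To make "insert it when appropriate" concrete, here is a minimal C++ sketch of the idea: append the EOS token id to the rolling token buffer once a confidant message is complete, skipping the append if the buffer already ends with it. The function name and the eos_token_id parameter are hypothetical; in practice the id would come from llama.cpp (llama_token_eos, whose signature has changed across versions), and rendezllama's actual code may do this differently.

```cpp
#include <vector>

typedef int TokenId;

// Hypothetical helper, not rendezllama's actual code: place the EOS
// delimiter after a completed confidant message, but don't double it up
// (e.g. when a rolling prompt that kept its EOS tokens is read back in).
void append_confidant_eos(std::vector<TokenId>& chat_tokens, TokenId eos_token_id) {
  if (chat_tokens.empty() || chat_tokens.back() != eos_token_id) {
    chat_tokens.push_back(eos_token_id);
  }
}
```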

grencez commented 1 year ago

We don't insert an EOS when editing or reading in a rolling prompt... not sure if I want to bother.

However, we still need to document the ((sentence_terminals) "\n") option. It appears in the assistant_alpaca example but isn't explained anywhere else.
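For reference, the option is written in the settings file's sxpb syntax as shown below. The line itself is quoted from this issue; its placement in an assistant_alpaca-style settings file and the surrounding comment are only illustrative.

```lisp
; Declares newline as a sentence terminal, so a confidant message is
; considered complete at the end of a line (illustrative comment).
((sentence_terminals) "\n")
```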