Open HaysianSmelt opened 4 months ago
Hi. Try increasing the size of the context.
Ok, I will, thanks. Hey, is there a list of all the settings and how they affect the operation of the LLMs?
I think there's an issue with the kv_shift() function. I'm testing on TinyLlama, for example, and when the context limit is reached the model continues the rest of the conversation with gibberish. I've also tried some other models. It's easy to reproduce: just set the context size to 100 and it starts generating this kind of output right away, like:
```
Here are some
\\ and ....
, with
in of c
. .
,\
('
.
in2 \0 in and\\_ .\\ not\\
\\.\\ for.
,22\ \\
\\\\0\\ \\ thats\\\\\ is\\.\\ . is\\\\\\ in\\\\\ \\ \\\\'\\\\\\\\\\\\
\\.\\
\\\\0\\\\ or\\
:\\\\ & to\\
andi\\ $
, "
`
\\\\
\\\\.\\
.'
\\
\\ in a\\
. of
_\\ in\ is\\\\\\\\\\\\
\\\\
\\8.\\\
\\ \\\\
\\
2.\\
\\
0.\\\\\\\\.. .
\\\\
that1. \ to and
_
the
s
.
\\.\\(
\\,\ is. a___..:ing
\\\\ and
\\\\\\
\\
2\\\\
\\. \0..'
to
are
\\ a and
, and
1 $ .\\ is \'\\)1\\
\\,2s\\
\\\\\\
\\ \ \\.\\\\\\\\\.
\ ( a & with
.1,, .\\\\ of`\
: to
and $1
```
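For reference, the usual llama.cpp-style context shift looks roughly like the sketch below. This is only a sketch: it assumes a llama.cpp C API backend (`llama_kv_cache_seq_rm` / `llama_kv_cache_seq_add`, whose names vary between llama.cpp versions) and assumes kv_shift() is meant to do something equivalent, which I can't confirm from here. The key point is that after the oldest cache entries are evicted, the surviving entries have to be slid back (and re-rotated for RoPE); if that step is skipped, the positions become inconsistent and you get exactly this kind of gibberish as soon as the context fills up.

```cpp
// Sketch only: assumes the llama.cpp C API. Whether this project's kv_shift()
// wraps these calls is an assumption, and the function names have changed
// across llama.cpp versions.
#include "llama.h"

// Evict the oldest half of the evictable KV cache and slide the remaining
// entries back so their positions stay consistent (llama.cpp re-applies RoPE
// to the shifted range on the next decode). Returns the new n_past.
static int context_shift(llama_context * ctx, int n_past, int n_keep) {
    const int n_left    = n_past - n_keep; // tokens eligible for eviction
    const int n_discard = n_left / 2;      // drop the oldest half of them

    // remove cache entries for positions [n_keep, n_keep + n_discard)
    llama_kv_cache_seq_rm (ctx, 0, n_keep, n_keep + n_discard);

    // shift positions [n_keep + n_discard, n_past) back by n_discard
    llama_kv_cache_seq_add(ctx, 0, n_keep + n_discard, n_past, -n_discard);

    return n_past - n_discard;
}
```

In llama.cpp's own main example, something like this runs whenever n_past reaches the context size, keeping the first n_keep prompt tokens untouched; if the position update is missing from the equivalent step here, the output would degrade exactly like the dump above.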
Hi. What is the C_Limit error? Eventually I get this for all models I've tried, after a page or two of conversation.