Closed: divramod closed this 10 months ago
@divramod hey, yes, I'm running on `gpt-4-1106-preview` :slightly_smiling_face:

also, the number is the month and day it was publicly released (MMDD)

- `gpt-3.5-turbo` and `gpt-3.5-turbo-16k` are aliases which currently use `gpt-3.5-turbo-0613` / `gpt-3.5-turbo-16k-0613`; they plan to switch them to `gpt-3.5-turbo-1106` during December => the best and cheapest model fresh users can currently use is `gpt-3.5-turbo-1106` with a 16K token window
- the `gpt-4*` API is limited to paying users, i.e. those who paid $1 or more for their API usage (or historically got access via the now-discontinued waiting list)
- `gpt-4-1106-preview`: there is no point in using `gpt-4`, which is an alias for `gpt-4-0613`
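The naming scheme above (rolling aliases pointing at dated snapshots, with an MMDD release-date suffix) can be sketched like this. This is a hypothetical illustration of the convention, not any OpenAI API, and the alias table reflects the state described in this thread:

```python
import calendar
import re

# Hypothetical sketch of the scheme described above: rolling aliases resolve
# to a dated snapshot, and the four-digit suffix is the public release date
# encoded as MMDD. The table reflects the aliases as of this thread.
ALIASES = {
    "gpt-3.5-turbo": "gpt-3.5-turbo-0613",          # planned to move to -1106 in December
    "gpt-3.5-turbo-16k": "gpt-3.5-turbo-16k-0613",
    "gpt-4": "gpt-4-0613",
}

def snapshot_date(model: str) -> str:
    """Decode the MMDD suffix, e.g. 'gpt-4-1106-preview' -> 'November 6'."""
    resolved = ALIASES.get(model, model)
    m = re.search(r"-(\d{2})(\d{2})(?=$|-)", resolved)
    if m is None:
        return "no dated snapshot"
    month, day = int(m.group(1)), int(m.group(2))
    return f"{calendar.month_name[month]} {day}"

print(snapshot_date("gpt-4-1106-preview"))  # November 6
print(snapshot_date("gpt-3.5-turbo"))       # June 13 (resolves the alias first)
```

So `1106` in `gpt-4-1106-preview` is simply November 6, the day that snapshot was released.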
thx for the explanations!
in my openai dashboard i can see a list of possible text models. could you clarify which models we can use in general with gp.nvim?

i watched the dev day talk last week and sam altman spoke about gpt-4 turbo, but i can't see it in the list. he said that gpt-4 turbo would be able to handle a lot more user input tokens.

could it be that `gpt-4-1106-preview` is gpt-4 turbo? do you know what the 1106 means?

edit: i just switched to `gpt-4-1106-preview` and it responds ten times faster, whooop
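For context on what "switching" amounts to: the model is just a string in the standard chat completions request body, so (assuming the plugin passes the configured name straight through, as chat clients generally do) changing models is a one-field change. A minimal sketch of that payload, with a made-up prompt:

```python
import json

# Minimal chat-completions request body; switching from gpt-3.5-turbo to
# gpt-4-1106-preview is just a change to the "model" field. The prompt
# content here is made up for illustration.
def chat_payload(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_payload("gpt-4-1106-preview", "hello there")
print(json.dumps(body, indent=2))
```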