Closed: debanjum closed 11 months ago
Great to be using built-in support for Llama V2 via GPT4All going forward! A little confused about the PR description. You said "Make offline chat model user configurable", but it's still limited to only Llama V2, unless I'm missing something. What did you mean by that?
Closes #406
> You said "Make offline chat model user configurable", but it's still limited to only Llama V2, unless I'm missing something. What did you mean by that?
Updated the PR description with more details on how to use a different offline chat model. Also made a few fixes to fall back to a default `max_prompt_size` and `tokenizer` to make this work.
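A minimal sketch of the fallback pattern described above: look up the chat model's max prompt size and tokenizer in a pre-defined map, and fall back to defaults for models not in the list. All names and values here are illustrative assumptions, not the actual Khoj implementation.

```python
# Pre-defined model configs (entries here are assumed for illustration)
model_to_prompt_size = {"llama-2-7b-chat.ggmlv3.q4_0.bin": 4096}
model_to_tokenizer = {"llama-2-7b-chat.ggmlv3.q4_0.bin": "hf-internal-testing/llama-tokenizer"}

# Assumed fallback values for models outside the pre-defined list
DEFAULT_MAX_PROMPT_SIZE = 2048
DEFAULT_TOKENIZER = "gpt2"

def get_model_config(chat_model: str) -> tuple:
    """Return (max_prompt_size, tokenizer), using defaults for unknown models."""
    max_prompt_size = model_to_prompt_size.get(chat_model, DEFAULT_MAX_PROMPT_SIZE)
    tokenizer = model_to_tokenizer.get(chat_model, DEFAULT_TOKENIZER)
    return max_prompt_size, tokenizer
```

With this shape, any GPT4All-supported model filename works even when it is absent from the pre-defined map, which is the behavior the fix above aims for.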
a85ff94 Make offline chat model user configurable

- Use `filename` of any GPT4All supported model like below:
- Make `tokenizer` and `max-prompt-size` of chat model user configurable. E.g. when using chat models not in this pre-defined list that support a larger context window or a different tokenizer.

Closes #406
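The commit points at a config example ("like below") that did not survive extraction. A hypothetical sketch of what such an offline chat config stanza could look like; the field names and model filename are assumptions for illustration, not confirmed by this thread:

```yaml
processor:
  conversation:
    offline-chat:
      enable-offline-chat: true
      # Assumed: filename of any GPT4All supported model
      chat-model: llama-2-7b-chat.ggmlv3.q4_0.bin
      # Assumed: optional overrides for models outside the pre-defined list
      max-prompt-size: 4096
      tokenizer: hf-internal-testing/llama-tokenizer
```

The point of the change is that swapping the `chat-model` filename (and, if needed, `tokenizer` and `max-prompt-size`) is enough to run a different GPT4All model offline.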