simonw / llm-gpt4all

Plugin for LLM adding support for the GPT4All collection of models
Apache License 2.0

Switch to using model.chat_session() for prompt template support (blocked) #35

Open simonw opened 6 months ago

simonw commented 6 months ago

This is the cause of:

The recommended way of using GPT4All such that it manages prompt templates correctly is this:

from gpt4all import GPT4All
model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf")

# Prompts inside a chat_session() share conversation history:
with model.chat_session():
    print(model.generate("2 fun names for a pelican"))
    print(model.generate("2 more"))

# A new chat_session() starts with a fresh history:
with model.chat_session():
    print("-- should have reset --")
    print(model.generate("2 more"))

Problem: there is currently no way to instantiate a new chat_session() pre-populated with existing logged messages - which LLM needs, because it stores conversation state between llm and llm -c calls in a SQLite database.
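For context, here's a minimal sketch of the state LLM would need to restore when continuing a conversation: the prior prompt/response pairs loaded from its SQLite log. The table and column names below are illustrative only, not LLM's actual schema:

```python
import sqlite3

# Illustrative schema only - not LLM's real logs.db layout.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE responses (conversation_id TEXT, prompt TEXT, response TEXT)"
)
db.execute(
    "INSERT INTO responses VALUES (?, ?, ?)",
    ("c1", "2 fun names for a pelican", "Beaky and Gulliver"),
)

def load_messages(db, conversation_id):
    """Rebuild the message history a resumed chat session would need."""
    messages = []
    for prompt, response in db.execute(
        "SELECT prompt, response FROM responses WHERE conversation_id = ?",
        (conversation_id,),
    ):
        messages.append({"role": "user", "content": prompt})
        messages.append({"role": "assistant", "content": response})
    return messages

print(load_messages(db, "c1"))
```

The missing piece is on the gpt4all side: there's no documented way to hand a message list like this to a fresh chat_session() so that generation continues from it.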

I filed an issue about this here: