Hi all, I can't find any snippet showing LlamaCpp and ConversationChain integrated with Chainlit, and I'm a bit lost at this point:
* I can see the LLM output on Chainlit but:
…
**Expected Behavior**
I am comparing the performance of two executables: llama.cpp (current version) and the default gpt4all executable (which uses a previous version of llama.cpp). I am using the …