JabRef / jabref

Graphical Java application for managing BibTeX and biblatex (.bib) databases
https://devdocs.jabref.org
MIT License

Send embeddings to GPT4All's Local API Server #12114

Closed: ThiloteE closed this issue 3 weeks ago

ThiloteE commented 1 month ago

Follow up to https://github.com/JabRef/jabref/issues/11870 and https://github.com/JabRef/jabref/pull/12078. Sub-issue of https://github.com/JabRef/jabref/issues/11872

Setup:

  1. Download and install GPT4All. In GPT4All's settings, enable the "Local API Server".
  2. Download a large language model and configure it in both GPT4All and JabRef. An outdated example is shown at https://github.com/JabRef/jabref/issues/11870#issue-2558945662; nowadays GPT4All can be selected as the AI provider at "File > Preferences > AI". For testing, I recommend Replete-LLM-V2.5-Qwen-1.5b, since it is very small and fast, though note that it uses a different prompt template syntax than the phi-3 models. A quick smoke test for the server is sketched below.
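
To rule out JabRef itself, the Local API Server can be smoke-tested directly. A minimal sketch, assuming GPT4All's default OpenAI-compatible endpoint at http://localhost:4891/v1 (the port is configurable in GPT4All's settings) and that "model" names the model you actually loaded in GPT4All:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Gpt4AllSmokeTest {
    public static void main(String[] args) throws Exception {
        // GPT4All's Local API Server speaks the OpenAI chat-completions protocol.
        // Port 4891 is the default; adjust it if you changed GPT4All's settings.
        String body = """
                {
                  "model": "Replete-LLM-V2.5-Qwen-1.5b",
                  "messages": [{"role": "user", "content": "Say hello."}]
                }
                """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:4891/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // A 200 status with a JSON "choices" array means server and model work.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```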

Description of problem:

Chatting already works with GPT4All, but the embeddings of the PDF documents attached to JabRef's entries are seemingly not sent to GPT4All.


Hypotheses about the root of the problem:

Additional info:

ThiloteE commented 1 month ago

@InAnYan

InAnYan commented 1 month ago

JabRef sends the "embeddings" in the user message, so hypothesis 1 cannot apply. Still, thank you for mentioning it; it is strange that GPT4All has those issues with system message customizations... It could (and will) affect the output.

For hypothesis 3: every LLM API is stateless (REST: representational state transfer). JabRef itself handles the state.
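
To illustrate the statelessness: the client has to resend the entire conversation on every request, because the server remembers nothing between calls. A rough sketch with hypothetical names, not JabRef's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the chat history lives entirely on the client side.
public class StatelessChatSketch {
    // Each element is one serialized {"role": ..., "content": ...} message.
    private final List<String> history = new ArrayList<>();

    public String ask(String userMessage) {
        history.add("{\"role\": \"user\", \"content\": \"" + userMessage + "\"}");
        // Every request carries the FULL history; the server keeps no memory
        // of earlier calls, so whatever the client omits is simply gone.
        String requestBody = "{\"messages\": [" + String.join(", ", history) + "]}";
        String reply = sendToServer(requestBody);
        history.add("{\"role\": \"assistant\", \"content\": \"" + reply + "\"}");
        return reply;
    }

    private String sendToServer(String body) {
        return "..."; // transport omitted; a POST to /v1/chat/completions
    }
}
```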

@ThiloteE, could you run JabRef in debug mode and look at the logs? There should be a log entry from Gpt4AllModel when you send a message. Can you see the document pieces there (I call them "paper excerpts" in the templates PR)?

If I remember correctly, you just need to pass the argument `--debug` to JabRef (in Gradle it is probably `run --args='--debug'`).

ThiloteE commented 1 month ago

@InAnYan Here are some logs: logs for embeddings GPT4All.txt

We can see that the entry metadata is added to the system message, not the user message.
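
For illustration, the split visible in the log looks roughly like the mock-up below; the wording is an assumption, not JabRef's actual template:

```java
public class PromptLayoutSketch {
    public static void main(String[] args) {
        // Mock-up of the two messages observed in the log.
        String systemMessage = """
                Answer using this information about the entry:
                title: ..., author: ..., year: ...""";   // entry metadata lands here

        String userMessage = """
                <paper excerpts from the attached PDFs should land here>
                <the user's question>""";                 // document pieces + question

        System.out.println(systemMessage);
        System.out.println(userMessage);
    }
}
```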

InAnYan commented 1 month ago

@ThiloteE thanks. I see, the system message is there.

Because the prompt contains "answer using this information", a search over the documents probably did take place.

Does chatting work with other models?

ThiloteE commented 1 month ago

No, other models exhibit the same behaviour with GPT4All.

Results of testing today:

✅ OpenAI (GPT-4o-mini):

✅ Ollama (via OpenAI API):

❌ GPT4All:

ThiloteE commented 1 month ago

✅ llama.cpp (via OpenAI API):

ThiloteE commented 3 weeks ago

Real Problem:

The embeddings are not created in the first place, because of issue https://github.com/JabRef/jabref/issues/12169 (melting-pot issue https://github.com/JabRef/jabref-issue-melting-pot/issues/537).

Solutions:

Explanation of what happened in my case:

My hypothesis about what happened: I had multiple CUDA versions installed and had not set the PATH and system environment variables accordingly, so the embedding model was not functioning while the LLM was fully functional. This made it look as if the embeddings were not sent to GPT4All, when in reality no embeddings had ever been created in the first place. I confirmed that the LLMs were functional while testing local API servers such as GPT4All, Ollama, and llama.cpp, as reported in issue https://github.com/JabRef/jabref/issues/12114.

I also had many x86 Microsoft Visual C++ Redistributables installed, which are not needed on my x64 system and might also have caused conflicts, but the main problem was the PATH issue, which prevented the embedding model from functioning.
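
A quick way to spot competing CUDA installations (the thing that bit me) is to print the PATH entries that mention CUDA. A minimal diagnostic sketch:

```java
import java.io.File;
import java.util.Arrays;

public class CudaPathCheck {
    public static void main(String[] args) {
        // Several CUDA versions on the PATH can make a natively backed
        // embedding model pick up the wrong runtime and silently fail.
        String path = System.getenv("PATH");
        Arrays.stream(path.split(File.pathSeparator))
              .filter(entry -> entry.toUpperCase().contains("CUDA"))
              .forEach(System.out::println);
    }
}
```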

Links and comments that helped me find the answers: