InAnYan / jabref

Graphical Java application for managing BibTeX and biblatex (.bib) databases
https://devdocs.jabref.org
MIT License

Find the best parameters for Q&A with papers #17

Open InAnYan opened 1 month ago

InAnYan commented 1 month ago

It's a long-term issue.

Parameters for tuning:

ThiloteE commented 1 month ago

The document splitter's chunk size depends on the embedding model's capabilities. Look at its config.json on Hugging Face.
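As an illustration (not part of this thread): an embedding model's maximum input length is typically recorded in its config.json, commonly under `max_position_embeddings`, and a splitter's chunk size could be derived from it. The field name, the sample config, and the 10% safety margin below are all assumptions for the sketch, not JabRef code.

```python
import json

# Hypothetical excerpt of an embedding model's config.json as published
# on Hugging Face. "max_position_embeddings" is a common field for the
# maximum input length in tokens; the exact name varies by architecture.
config_text = '{"model_type": "bert", "max_position_embeddings": 512}'
config = json.loads(config_text)

max_tokens = config["max_position_embeddings"]

# Leave headroom for special tokens ([CLS], [SEP]) and tokenizer overhead;
# the 10% margin is an arbitrary illustrative choice.
chunk_size_tokens = int(max_tokens * 0.9)

print(chunk_size_tokens)  # 460
```

A real implementation would read the file shipped with the chosen model rather than a hard-coded string.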

koppor commented 1 month ago

The context is a bit missing for me. I assume the context is that we use some LLM (an online service?) that is not capable of summarization, and that we do not use existing summarization services (such as Microsoft Kernel Memory).

Kernel Memory is currently scheduled for week 9.

ThiloteE commented 1 month ago

For the first part of the AI project, the integration with an online service, we have two options:

a) We create an index, then use an embedding model to create embeddings from the indexed data, then post-process the embeddings. The final outcome plus the user prompt is sent to an online service. b) We send the whole PDF to an online service.

The embedding model's capabilities are only relevant if we choose option "a".

Obviously, implementing option "a" will be a necessity for supporting local models.
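Option "a" amounts to a standard retrieval-augmented generation pipeline: split the document into chunks, embed the chunks, embed the user's question, retrieve the most similar chunks, and send them plus the prompt to the LLM. A minimal sketch, with a toy bag-of-words vector standing in for a real embedding model (every name and parameter below is illustrative, not JabRef code):

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def split_into_chunks(text, chunk_size=8, overlap=2):
    # Fixed-size word windows with overlap, so content spanning a
    # chunk boundary still appears whole in one chunk.
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(chunks, query, k=2):
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

paper = ("BibTeX stores bibliographic entries in plain text files. "
         "Each entry has a type such as article or book. "
         "Fields like author and year describe the entry.")
question = "What fields describe an entry?"

chunks = split_into_chunks(paper)
context = retrieve(chunks, question)

# The retrieved context plus the user prompt is what would be sent
# to the online service (or a local model).
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
```

The chunk size and overlap here are the tuning parameters this issue is about; in practice they would be bounded by the embedding model's maximum input length.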