jwahnn closed this issue 1 month ago.
1, 2, 3: Please refer to the detailed walkthrough in our Jupyter Notebook example, which is available at this link. In this notebook, we provide step-by-step instructions on how to set up RAPTOR, including the integration of a custom model.
4: Yes, setting up custom models will preserve the methods of RAPTOR, including the RetrievalAugmentation function. This means that the core functionalities of RAPTOR, such as document retrieval and augmentation, remain intact and operational even when you integrate a custom model.
Thanks for your interest in RAPTOR, and I hope this helps! If you have any more questions, feel free to ask.
RA = RetrievalAugmentation()
RA.add_documents(text)
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (2048). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
Is this because I am only feeding in one document? I guess the question goes back to question 3.

Hi, making another comment just in case the previous one got missed.
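For anyone hitting the same warning: you can guard against it by checking the prompt length before calling the model. A minimal sketch is below; the whitespace split is only a stand-in for the model's real tokenizer (which counts tokens differently), and 2048 is the limit quoted in the warning above.

```python
# Sketch: guard against exceeding a model's context window before generation.
# NOTE: splitting on whitespace only approximates a real tokenizer's count.
MAX_LENGTH = 2048  # the model's predefined maximum length from the warning

def truncate_prompt(prompt, max_tokens=MAX_LENGTH):
    tokens = prompt.split()
    if len(tokens) <= max_tokens:
        return prompt
    # Keep the most recent max_tokens tokens so the question stays in context.
    return " ".join(tokens[-max_tokens:])
```

In practice you would use the actual tokenizer for your model (e.g. its `model_max_length`) rather than a whitespace count.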
The sections "Building the tree" and "Querying from the tree" walk you through using an example document once you have initialized your RetrievalAugmentation class. If you want to use custom models, first define the models as shown in the "Using other Open Source Models for Summarization/QA/Embeddings" section of the notebook, and then initialize the RA class.
You can see how RAPTOR summarizes text and performs QA in raptor/SummarizationModels.py and raptor/QAModels.py, respectively. If you want to define a custom prompt or handle it differently, you can define your own Summarization and QA models as shown in the "Using other Open Source Models for Summarization/QA/Embeddings" section.
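To illustrate the shape those custom classes take, here is a minimal self-contained sketch. The method names follow the notebook example, but the bodies are placeholders; in practice you would subclass RAPTOR's base model classes and call your own LLM (e.g. a Llama pipeline) inside these methods.

```python
# Hypothetical stand-ins for real models: the string operations below are
# placeholders for calls to your own LLM.
class CustomSummarizationModel:
    def summarize(self, context, max_tokens=150):
        # Placeholder summary: return the first max_tokens whitespace tokens.
        return " ".join(context.split()[:max_tokens])

class CustomQAModel:
    def answer_question(self, context, question):
        # Placeholder answer that just echoes the inputs.
        return f"Q: {question} | context: {len(context.split())} tokens"
```

These instances would then be passed into the RetrievalAugmentation configuration in place of the default models, as the notebook section demonstrates.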
RAPTOR takes a single text file to build a tree. If you want to pass in multiple documents, concatenate them into a single string before passing it to RA.add_documents(text). We are working on better support for handling multiple documents and adding to an existing tree.
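The concatenation step above can be sketched as follows; the file names and separator are illustrative, and only the final combined string is what RA.add_documents(text) expects.

```python
# Sketch: combine several document files into the single string that
# RA.add_documents(text) expects. Paths are illustrative.
from pathlib import Path

def concat_documents(paths, separator="\n\n"):
    # A blank-line separator keeps document boundaries visible to chunking.
    return separator.join(Path(p).read_text() for p in paths)
```

Usage would then look like `text = concat_documents(["doc1.txt", "doc2.txt"])` followed by `RA.add_documents(text)`.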
Can you show your Custom Llama Model Class, and print out what the context being provided to the model is?
Is there a section on how to tweak the configs to use different chunk sizes, as well as different OpenAI embeddings and models? And is there a way to visualize or get the details of the created tree?
Hi, thanks for sharing the code publicly. I have a few questions about using a custom model for RAPTOR:
Thanks in advance! Was looking forward to this work :)