AnswerDotAI / byaldi

Use late-interaction multi-modal models such as ColPali in just a few lines of code.
Apache License 2.0

Alternative to using in-memory collection #45

Open carstenj-eksponent opened 1 month ago

carstenj-eksponent commented 1 month ago

When I load from an index with `model = RAGMultiModalModel.from_index(index_path=index_name)`, I get the following message:

You are using in-memory collection. This means every image is stored in memory. You might want to rethink this if you have a large collection!

I am not sure what the alternative is to using in-memory collection. I browsed the source files without finding anything.

Is there a way to use a database or any other persistent storage?

Thanks, Carsten

bclavie commented 2 weeks ago

This is poorly phrased, thank you for flagging. "In-memory collection" here doesn't refer to the embeddings (there are DB providers, such as Vespa, and byaldi itself will eventually integrate better storage mechanisms if it doesn't become mainstream quickly enough among DB providers), but to the base64-encoded images. When not using the in-memory collection, you need to store the images somewhere else yourself and use the mapping (i.e. retrieve page X of document Y) to be able to send them to an LLM. With the in-memory collection enabled (which is done at indexing time), you don't need to do so, as we save the base64 version of the images within the index. This is costly in terms of memory/storage, but also pretty convenient.
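
Concretely, this is all decided at indexing time via the `store_collection_with_index` flag. A rough sketch of the two options (double-check the exact parameter/field names against the README):

```python
from byaldi import RAGMultiModalModel

RAG = RAGMultiModalModel.from_pretrained("vidore/colpali-v1.2")

# Option A: store the base64 page images inside the index ("in-memory collection").
RAG.index(
    input_path="docs/",
    index_name="my_index",
    store_collection_with_index=True,
    overwrite=True,
)
results = RAG.search("my query", k=3)
# results[0].base64 now contains the page image, ready to pass to an LLM.

# Option B: don't store the images; keep them yourself and use the mapping.
# results[0].doc_id / results[0].page_num tell you which page of which
# document to fetch from wherever you stored it.
```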

I'll update the doc and message to make this clearer.

fvisconti commented 11 hours ago

Hi, love byaldi :)

Can you add an example of usage for the in-memory collection?

Reading the above answer, I think I've been overcomplicating my demo. I do this:

```python
from pathlib import Path

# RAG is the RAGMultiModalModel instance loaded from the index elsewhere
def get_document_pages(index_fetch_res: list) -> list[Path]:
    # Map doc_id -> document file name (without extension); the page images
    # live in a folder named after each document
    index_mapping_to_files = RAG.get_doc_ids_to_file_names()
    for doc_id, file_name in index_mapping_to_files.items():
        index_mapping_to_files[doc_id] = Path(file_name).stem

    pages = []
    for res in index_fetch_res:
        doc_id = res["doc_id"]
        page_num = res["page_num"]
        page_path = Path("images-financial") / index_mapping_to_files[doc_id] / f"page_{page_num}.png"
        pages.append(page_path)

    return pages
```

```python
img_pages = get_document_pages(results)
```

Where `results` is the list of documents returned by `RAG.search()`.
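
(For completeness, `results` comes from a plain search call along these lines; the query text and `k` are just placeholders.)

```python
query = "What was the operating margin last quarter?"  # placeholder query
results = RAG.search(query, k=3)
```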

Then, in order to pass the images to the VLM, I have:

```python
from typing import List

from PIL import Image

def get_answer(prompt: str, images: List[str], top_1=True):
    if top_1:
        imgs_data = [Image.open(images[0])]
    else:
        imgs_data = [Image.open(image) for image in images]
    # model (the Gemini client) is instantiated elsewhere
    response = model.generate_content([*imgs_data, prompt])

    return response.text


def answer_query(prompt, images, top_1=True):
    return f"Gemini Response:\n{get_answer(prompt, images, top_1=top_1)}"
```
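
For reference, `model` here is just a plain Gemini client created elsewhere with the `google.generativeai` SDK, roughly like this (the model name is only an example):

```python
import google.generativeai as genai

genai.configure(api_key="...")  # API key omitted
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
```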

As you can see, I have the image pages for my documents saved on the file system, and I need to go get those files and open them via the PIL library.

My next step is to save the embeddings in a vector DB (and I actually did not understand how to do this), but it would also be great not to have to fetch image files from disk if they are already in memory as base64.
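
If I read the answer above correctly, then for an index built with the in-memory collection I could skip the file-system lookup entirely and decode the stored base64 instead. Something like this is what I'd hope for (I'm guessing at the field name, happy to be corrected):

```python
import base64
import io

from PIL import Image

def pages_from_results(index_fetch_res: list) -> list:
    """Decode the base64 page images stored in the index instead of reading from disk."""
    pages = []
    for res in index_fetch_res:
        img_bytes = base64.b64decode(res["base64"])  # assuming a "base64" field
        pages.append(Image.open(io.BytesIO(img_bytes)))
    return pages
```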

Thanks!