Azure-Samples / azure-search-openai-demo

A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.

optimizations for image embedding with vision #2133

Open cforce opened 2 weeks ago

cforce commented 2 weeks ago

Could you help clarify why this warning is necessary? It clutters the console, and the purpose isn't entirely clear.

Additionally, I've noticed that when using the Vision API, all pages from documents (like PDFs) are stored as PNGs, even if there isn’t a single image on the page. Is there a reason for this? Couldn’t we apply the Vision API selectively, using it only for pages containing images? This would avoid the extra processing effort and the token usage involved in storing simple text pages as images. Converting text to images for Vision seems to double the runtime, increase blob storage, and create unnecessary index chunks for text-only pages.

Unless I’ve misunderstood, it would make sense to use Vision solely on image-containing pages, achieving the best of both approaches without doubling token consumption. Could you provide some insight into this approach? I’m still exploring the code and would appreciate a better understanding.
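As a rough illustration of that per-page screening idea, pages could be checked for embedded images before ever being rasterized. This is a hypothetical sketch using PyMuPDF (not the parser stack prepdocs actually uses), and `pages_needing_vision` is a made-up helper name:

```python
# Hypothetical sketch: screen PDF pages for embedded images before deciding
# to rasterize them for the Vision path. Uses PyMuPDF (pip install pymupdf),
# which is not what prepdocs uses today -- purely illustrative.
import fitz  # PyMuPDF


def pages_needing_vision(pdf_path: str) -> list[int]:
    """Return 0-based numbers of pages that reference at least one image."""
    with fitz.open(pdf_path) as doc:
        return [page.number for page in doc if page.get_images(full=True)]


if __name__ == "__main__":
    visual = pages_needing_vision("example.pdf")
    print(f"Send only these pages through the Vision/PNG path: {visual}")
```

Text-only pages would then stay on the plain text-embedding pipeline, and only the returned page numbers would be rendered to PNG and embedded with the multimodal model.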

There is also some dead code, where I am not sure whether it is a leftover or an unfinished feature: "has_image_embeddings" is not used at https://github.com/Azure-Samples/azure-search-openai-demo/blob/009d5e13ba5f9f14bcca684d7a47e1d28fd9023e/app/backend/prepdocslib/textsplitter.py#L97, so it is also not needed at https://github.com/Azure-Samples/azure-search-openai-demo/blob/009d5e13ba5f9f14bcca684d7a47e1d28fd9023e/app/backend/prepdocs.py#L160

I also wonder whether it is OK to enable vision in the app but run prepdocs selectively, only for sources with heavy image content. Will the app be able to deal with a mixed set of embeddings? If prepdocs decided on the fly, per page, whether a page is image-heavy, this could improve speed and cost by using vision only where it is useful.
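To make the "mixed embedding" question concrete, here is a minimal sketch of what mixed chunks could look like on upload. The index and field names (`gptkbindex`, `embedding`, `imageEmbedding`) are assumed from the demo's default schema, and the vectors are placeholders; Azure AI Search allows a vector field to be absent on individual documents, so text-only chunks would simply omit `imageEmbedding`:

```python
# Hypothetical sketch of uploading mixed chunks: text-only chunks carry just
# the text vector, image-heavy chunks carry both. Index/field names are
# assumed from the demo's schema; the vectors below are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="gptkbindex",
    credential=AzureKeyCredential("<api-key>"),
)

text_vector = [0.0] * 1536   # placeholder: text-embedding output
image_vector = [0.0] * 1024  # placeholder: multimodal embedding of the PNG

search_client.upload_documents(documents=[
    {   # text-only page: no PNG stored, no image vector
        "id": "doc1-page-0",
        "content": "Plain extracted text from page 1 ...",
        "embedding": text_vector,
    },
    {   # image-heavy page: PNG stored in blob, both vectors populated
        "id": "doc1-page-3",
        "content": "OCR text plus a description of the figure ...",
        "embedding": text_vector,
        "imageEmbedding": image_vector,
    },
])
```

A vector query against `imageEmbedding` should then simply never match the text-only chunks, while the text vector still covers every chunk.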

cforce commented 2 weeks ago

A very interesting approach is this notebook: https://github.com/douglasware/ElumenotionSite/blob/master/Projects/PdfToMarkdownAndQaPairs/v4omni-image-plus-docIntelOcr.ipynb?short_path=78bb846

The notebook provides a streamlined approach for processing OCR data from images. The workflow converts each page of a PDF into an OCR-generated markdown file, enriched with image descriptions and MermaidJS diagrams through GPT-4o. A structured prompt directs GPT-4o to transcribe the document's text and recreate its tables while inserting descriptive text for figures; these descriptions may include additional diagrams generated with valid MermaidJS syntax. The MermaidJS guidelines enforce correct syntax, emphasizing alphanumeric characters and underscores for node IDs and requiring special characters in labels to be enclosed in double quotes.

The process costs around $0.03 and takes under 10 seconds per page, and it requires the images to be prepared first via PdfToPageImages.ipynb and DocIntelligencePipeline.ipynb to generate the markdown content. In Azure AI Search, indexing the markdown rather than the original PDF improves efficiency: the new index reduces storage while retaining much of the document's critical content, resulting in fewer chunks than indexing the entire PDF.
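For reference, a condensed, hypothetical version of the kind of per-page call the notebook makes (the prompt below paraphrases its instructions rather than quoting them, and the model and file names are assumptions):

```python
# Hypothetical condensed per-page call in the spirit of the notebook: one page
# image in, markdown with figure descriptions and MermaidJS diagrams out.
# The prompt paraphrases the notebook's instructions; names are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Transcribe this document page to markdown. Reproduce all text and "
    "recreate tables in markdown. For each figure, insert a descriptive "
    "paragraph and, where a diagram helps, a fenced mermaid code block. "
    "MermaidJS rules: node IDs use only alphanumerics and underscores; "
    'labels containing special characters are wrapped in double quotes, '
    'e.g. step_1["Load PDF page"] --> step_2["OCR & describe figures"].'
)

with open("page-3.png", "rb") as f:  # page PNG from PdfToPageImages.ipynb
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # the page's markdown
```

The resulting markdown, not the PDF, is what gets chunked and indexed, which is where the storage and chunk-count savings come from.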