deepset-ai / haystack

AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
https://haystack.deepset.ai
Apache License 2.0

Document Splitter always returns 1 document for split_type="passage" in pdfs #8491

Open rmrbytes opened 1 month ago

rmrbytes commented 1 month ago

Describe the bug When using DocumentSplitter with a PDF and split_by="passage", the result is always one document. This is using pypdf.

Expected behavior My understanding is that it splits on at least two consecutive line breaks (\n\n).

Additional context When I tested with plain text, it split correctly.

To Reproduce

from pathlib import Path

from haystack.components.preprocessors import DocumentSplitter

# file_type_router, text_file_converter, pdf_converter and markdown_converter
# are initialized elsewhere

dir = '...'
files = [
    {"filename": "rules.pdf", "meta": {"split_by": "passage", "split_length": 1, "split_overlap": 0, "split_threshold": 0}},
    {"filename": "rules.txt", "meta": {"split_by": "passage", "split_length": 1, "split_overlap": 0, "split_threshold": 0}},
]

for file in files:
    # set the filepath
    file_path = Path(dir) / file["filename"]
    router_res = file_type_router.run(sources=[file_path])
    txt_docs = []
    if 'text/plain' in router_res:
        txt_docs = text_file_converter.run(sources=router_res['text/plain'])
    elif 'application/pdf' in router_res:
        txt_docs = pdf_converter.run(sources=router_res['application/pdf'])
    elif 'text/markdown' in router_res:
        txt_docs = markdown_converter.run(sources=router_res['text/markdown'])
    document_splitter = DocumentSplitter(
        split_by=file['meta']['split_by'],
        split_length=file['meta']['split_length'],
        split_overlap=file['meta']['split_overlap'],
        split_threshold=file['meta']['split_threshold'],
    )
    splitter_res = document_splitter.run([txt_docs['documents'][0]])
    print(len(splitter_res['documents']))

System:

anakin87 commented 1 month ago

Hello!

The fact that your input is PDF or text does not have an impact on splitting. Try to play with the parameters of DocumentSplitter.

This works, for example:

from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

doc = Document(content="Hello World.\n\nMy name is John Doe.\n\nI live in Berlin", meta={"name": "doc1"})
splitter = DocumentSplitter(split_length=1, split_by="passage")

docs = splitter.run([doc])
print(docs)
# I get 3 documents
rmrbytes commented 1 month ago

Thanks @anakin87 for your response. I did mention above that it works with txt files as confirmed by your example. My issue is with PDFs (using pypdf). I will revisit by creating a simpler pdf with distinct two line returns.

lbux commented 2 weeks ago

I also ran into the issue, but I think it might just be the fact that the PDF format is a bit of a mess when people create documents. Some of the PDFs I have can correctly be split but some can't.

rmrbytes commented 2 weeks ago

Hmm. This was a simple text-based PDF created via Google Docs. I thought it would be a "simple" PDF :-). For now, I am testing with txt and md files till I get a handle on this. Thanks @lbux for sharing.

lbux commented 2 weeks ago

I'll give it another shot tomorrow just to be sure. I do remember the same document I used was having no issues with splitting by words or sentences, but it did fail when I did passage. I wonder if it's failing to read the \n\n somehow. Or maybe the converter is not properly handling it.

rmrbytes commented 2 weeks ago

For what it is worth, I did experiment with 3 PDFs and found that the entire document was always a single chunk.

lbux commented 2 weeks ago

You are correct. The issue persists regardless of which converter is used (PDFMiner or PyPDF). I tried several files, and they all report 1. However, the issue also occurs if we use the NLTKDocumentSplitter. I'd have to dig a bit deeper to determine what the root cause is.

lbux commented 2 weeks ago

The issue is with the converters. PyPDF is a bit worse than PDFMiner at handling PDFs when no parameters are added. However, neither implementation is configured to infer paragraph breaks based on spacing or layout analysis. What looks like a paragraph break ends up stored as "\n" rather than the "\n\n" the splitter expects.
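The mismatch can be illustrated without any PDF involved. Here is a minimal plain-Python sketch (a simplification, not Haystack's actual implementation) of how passage splitting reacts to single versus double newlines:

```python
def split_by_passage(text: str) -> list[str]:
    # Simplified sketch of "passage" splitting: passages are separated
    # by blank lines ("\n\n"), which is what DocumentSplitter looks for.
    return [p for p in text.split("\n\n") if p.strip()]

# What the PDF converters currently emit (single newlines)
single = "Paragraph one.\nParagraph two.\nParagraph three."
# What the splitter expects (double newlines)
double = "Paragraph one.\n\nParagraph two.\n\nParagraph three."

print(len(split_by_passage(single)))  # 1
print(len(split_by_passage(double)))  # 3
```

So as long as the converters emit only single newlines, the splitter has nothing to split on and returns the whole document as one passage.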

Here is a screenshot of a PDF image

And this is how PDFMiner converts it: This is a test document. Here we have a sentence.\nThis is the start of a new paragraph.\nThis is not the start of a new paragraph.\nThe number should be 3.\n

This is how PyPDF converts it: Thisisatest document. Herewehaveasentence.\nThisisthestart of anewparagraph.Thisisnot thestart of anewparagraph.\nThenumbershouldbe3.

@anakin87 Would love if someone from the team could take a deeper look into it. If my findings are right, many customers could be incorrectly converting their documents.

anakin87 commented 2 weeks ago

@sjrl Have you by any chance encountered this problem in the past?

sjrl commented 2 weeks ago

@anakin87 We are also using PyPDF a lot, but we typically use either:

so we don't have much experience with splitting by passage, and we may not have noticed the missing newlines.

sjrl commented 2 weeks ago

It seems PDFMiner provides more easily understandable options to customize. Specifically, if you mess with the line_margin parameter (I think make it smaller) then it might preserve the extra newlines, but it's difficult to tell just looking at their documentation.
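Roughly speaking, line_margin controls how pdfminer groups adjacent lines into one text container: consecutive lines merge when the vertical gap between them is small relative to the line height. A toy sketch of that grouping rule (hypothetical coordinates, my own approximation rather than pdfminer's actual code):

```python
def group_paragraphs(lines, line_margin=0.5):
    # lines: list of (y_top, height, text) in top-down page order
    # (PDF y-coordinates decrease as you move down the page).
    # Two lines join the same paragraph when the gap between them is
    # at most line_margin * line height -- a rough analogue of
    # pdfminer's LAParams(line_margin=...) behavior.
    paragraphs, current = [], []
    prev_bottom = None
    for y_top, height, text in lines:
        if prev_bottom is not None and (prev_bottom - y_top) > line_margin * height:
            paragraphs.append(" ".join(current))
            current = []
        current.append(text)
        prev_bottom = y_top - height
    if current:
        paragraphs.append(" ".join(current))
    return paragraphs

lines = [
    (700, 12, "This is a test document."),
    (686, 12, "Here we have a sentence."),   # 2pt gap -> same paragraph
    (650, 12, "This is a new paragraph."),   # 24pt gap -> new paragraph
]
print(group_paragraphs(lines))
```

With a smaller line_margin, fewer lines merge, so more paragraph boundaries survive; with a larger one, everything collapses into one container.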

lbux commented 2 weeks ago

I think the main issue (with PDFMiner and probably PyPDF) is in the implementation of the converter:

def _converter(self, extractor) -> Document:
    """
    Extracts text from PDF pages, then converts the text into Documents.

    :param extractor:
        Python generator that yields PDF pages.

    :returns:
        PDF text converted to a Haystack Document
    """
    pages = []
    for page in extractor:
        text = ""
        for container in page:
            # Keep text only
            if isinstance(container, LTTextContainer):
                text += container.get_text()
        pages.append(text)

    # Add a page delimiter
    concat = "\f".join(pages)

    return Document(content=concat)

Our downstream task expects "\n\n", but the current implementation never adds the delimiters. It simply concatenates the text of each LTTextContainer, expecting the delimiters to already be present, which is not the case. If we explicitly add them like so:

def _converter(self, extractor) -> Document:
    """
    Extracts text from PDF pages, then converts the text into Documents.

    :param extractor:
        Python generator that yields PDF pages.

    :returns:
        PDF text converted to a Haystack Document
    """
    pages = []
    for page in extractor:
        text = ""
        for container in page:
            # Keep text only
            if isinstance(container, LTTextContainer):
                container_text = container.get_text().strip()
                if text:
                    # Insert the paragraph delimiter between containers
                    text += "\n\n"
                text += container_text
        pages.append(text.strip())

    # Add a page delimiter
    concat = "\f".join(pages)

    return Document(content=concat)

It is true that line_margin matters, since it controls how the LTTextContainer objects are formed, but if we just concatenate their text, it has no impact on the output.

With these changes, I was able to get the correct, expected output:

This is a test document. Here we have a sentence.\n\n This is the start of a new paragraph.\nThis is not the start of a new paragraph.\n\n The number should be 3.

Further testing would probably have to be done, and I stripped some of the text to make it easier to debug (which should actually be done in the cleaner); however, I am confident this is the root cause.

anakin87 commented 2 weeks ago

@lbux thanks for debugging this issue!

We should probably investigate better, but your proposal sounds reasonable.

lbux commented 2 weeks ago

Haystack's PyPDF conversion method is this:

def _default_convert(self, reader: "PdfReader") -> Document:
    text = "\f".join(page.extract_text() for page in reader.pages)
    return Document(content=text)

This has the same limitation: we don't add the \n\n. Complicated logic could be added to manually insert the delimiters; I added some basic heuristics to test it myself (split when the previous line ends with a period and the next line starts with an uppercase letter). It worked fine, but it was ugly.
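The heuristic described above might look something like this (a rough sketch of the idea, not the exact code that was tested):

```python
import re

def insert_paragraph_breaks(text: str) -> str:
    # Heuristic: treat a line break as a paragraph break when the
    # previous line ends with a period and the next line starts with
    # an uppercase letter. Crude, but enough to feed the passage splitter.
    return re.sub(r"(\.)\n(?=[A-Z])", r"\1\n\n", text)

sample = "First sentence ends.\nNew paragraph starts.\nbut this line continues"
print(insert_paragraph_breaks(sample))
```

It obviously misfires on abbreviations, headings, and sentences that happen to wrap before a capitalized word, which is why a layout-aware extractor is the cleaner fix.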

I then looked more into extract_text(), and it seems we are using the legacy extractor. pypdf has an experimental version that can be enabled by passing extraction_mode="layout" to extract_text(), and it works out of the box for my test without any additional changes needed... Just another data point for possible solutions!

lbux commented 2 weeks ago

> @lbux thanks for debugging this issue!
>
> We should probably investigate better, but your proposal sounds reasonable.

Of course! My current project is explicitly dependent on retrieving exact paragraphs, so I had to fix this one way or another haha. Hoping for a permanent solution when time allows.