Open rmrbytes opened 1 month ago
Hello!
The fact that your input is PDF or text does not have an impact on splitting.
Try playing with the parameters of DocumentSplitter.
This works, for example:
from haystack import Document
from haystack.components.preprocessors import DocumentSplitter
doc = Document(content="Hello World.\n\nMy name is John Doe.\n\nI live in Berlin", meta={"name": "doc1"})
splitter = DocumentSplitter(split_length=1, split_by="passage")
docs = splitter.run([doc])
print(docs)
# I get 3 documents
Thanks @anakin87 for your response. I did mention above that it works with txt files, as confirmed by your example. My issue is with PDFs (using pypdf). I will revisit by creating a simpler PDF with two distinct line returns.
I also ran into the issue, but I think it might just be the fact that the PDF format is a bit of a mess when people create documents. Some of the PDFs I have can correctly be split but some can't.
Hmm. This was a simple, text-based PDF created via Google Docs. I thought it would be a "simple" PDF :-). For now, I am testing with txt and md files until I get a handle on this. Thanks @lbux for sharing.
I'll give it another shot tomorrow just to be sure. I do remember that the same document had no issues with splitting by words or sentences, but it did fail when I split by passage. I wonder if it's somehow failing to read the \n\n, or maybe the converter is not handling it properly.
For what it's worth, I experimented with 3 PDFs and found that the entire document was always a single chunk.
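For reference, a minimal sketch of the kind of setup that reproduces this (the file name is a placeholder; any multi-paragraph, text-based PDF should do):
from haystack.components.converters import PyPDFToDocument
from haystack.components.preprocessors import DocumentSplitter

# Convert a multi-paragraph PDF into a single Haystack Document
converter = PyPDFToDocument()
docs = converter.run(sources=["rules.pdf"])["documents"]

# Split by passage, i.e. on "\n\n"
splitter = DocumentSplitter(split_by="passage", split_length=1)
result = splitter.run(docs)

# With the converters discussed in this thread, this prints 1 instead of
# the number of paragraphs, because the converted text contains no "\n\n"
print(len(result["documents"]))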
You are correct. The issue persists regardless of which converter is used (PDFMiner or PyPDF). I tried several files, and they all report 1 document. However, the issue also occurs if we use the NLTKDocumentSplitter. I'd have to dig a bit deeper to determine the root cause.
The issue is with the converters. PyPDF handles PDFs a bit worse than PDFMiner when no parameters are passed. However, neither implementation is configured to infer paragraph breaks based on spacing or layout analysis. When we see something we would consider a paragraph break, it is stored as "\n" rather than the "\n\n" the splitter expects.
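To see why this matters for the splitter, here is a minimal sketch with made-up text:
from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

splitter = DocumentSplitter(split_by="passage", split_length=1)

# What the converters currently produce: paragraph breaks stored as a single "\n"
single = Document(content="First paragraph.\nSecond paragraph.\nThird paragraph.")
print(len(splitter.run([single])["documents"]))  # 1

# What the splitter expects: paragraph breaks stored as "\n\n"
double = Document(content="First paragraph.\n\nSecond paragraph.\n\nThird paragraph.")
print(len(splitter.run([double])["documents"]))  # 3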
Here is a screenshot of a PDF
And this is how PDFMiner converts it: This is a test document. Here we have a sentence.\nThis is the start of a new paragraph.\nThis is not the start of a new paragraph.\nThe number should be 3.\n
This is how PyPDF converts it: Thisisatest document. Herewehaveasentence.\nThisisthestart of anewparagraph.Thisisnot thestart of anewparagraph.\nThenumbershouldbe3.
@anakin87 Would love if someone from the team could take a deeper look into it. If my findings are right, many customers could be incorrectly converting their documents.
@sjrl Have you by any chance encountered this problem in the past?
@anakin87 We are also using PyPDF a lot, but we typically use other split settings, so we don't have much experience with splitting by passage and may not have noticed the missing newlines.
It seems PDFMiner provides more easily understandable options to customize. Specifically, if you adjust the line_margin parameter (I think make it smaller), it might preserve the extra newlines, but it's difficult to tell just by looking at their documentation.
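In Haystack terms, that suggestion would look roughly like this (just a sketch: I'm assuming line_margin is exposed on PDFMinerToDocument's init, and the right value would need experimentation):
from haystack.components.converters import PDFMinerToDocument
from haystack.components.preprocessors import DocumentSplitter

# A smaller line_margin makes pdfminer less eager to merge nearby lines
# into the same text box, which may keep paragraphs apart
converter = PDFMinerToDocument(line_margin=0.3)
docs = converter.run(sources=["rules.pdf"])["documents"]

splitter = DocumentSplitter(split_by="passage", split_length=1)
print(len(splitter.run(docs)["documents"]))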
I think the main issue (with PDFMiner and probably PyPDF) is in the implementation of the converter:
def _converter(self, extractor) -> Document:
    """
    Extracts text from PDF pages then convert the text into Documents

    :param extractor:
        Python generator that yields PDF pages.
    :returns:
        PDF text converted to Haystack Document
    """
    pages = []
    for page in extractor:
        text = ""
        for container in page:
            # Keep text only
            if isinstance(container, LTTextContainer):
                text += container.get_text()
        pages.append(text)
    # Add a page delimiter
    concat = "\f".join(pages)
    return Document(content=concat)
Our downstream task expects "\n\n", but the current implementation does not add these delimiters. It simply concatenates the text of every LTTextContainer, expecting the delimiters to already be there, which is not the case. If we explicitly add them like so:
def _converter(self, extractor) -> Document:
    """
    Extracts text from PDF pages then convert the text into Documents

    :param extractor:
        Python generator that yields PDF pages.
    :returns:
        PDF text converted to Haystack Document
    """
    pages = []
    for page in extractor:
        text = ""
        for container in page:
            # Keep text only
            if isinstance(container, LTTextContainer):
                container_text = container.get_text().strip()
                if text:
                    text += "\n\n"
                text += container_text
        pages.append(text.strip())
    # Add a page delimiter
    concat = "\f".join(pages)
    return Document(content=concat)
It is true that line_margin is important, and it is what actually controls the grouping of our LTTextContainer objects, but if we just concatenate them, it has no impact.
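To see what line_margin is actually doing, the containers can be inspected directly with pdfminer.six (a sketch; the file name and the line_margin value are placeholders):
from pdfminer.high_level import extract_pages
from pdfminer.layout import LAParams, LTTextContainer

# line_margin controls when nearby lines are merged into the same text box;
# each resulting LTTextContainer is one grouped block of text
laparams = LAParams(line_margin=0.3)

for page_layout in extract_pages("rules.pdf", laparams=laparams):
    for container in page_layout:
        if isinstance(container, LTTextContainer):
            # The converter still has to join these blocks with "\n\n"
            # for passage splitting to work
            print(repr(container.get_text()))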
With these changes, I was able to get the correct, expected output:
This is a test document. Here we have a sentence.\n\n This is the start of a new paragraph.\nThis is not the start of a new paragraph.\n\n The number should be 3.
Further testing would probably have to be done, and I stripped some of the text to make it easier to debug (which should actually be done in the cleaner); however, I am confident this is the root cause.
@lbux thanks for debugging this issue!
We should probably investigate better, but your proposal sounds reasonable.
Haystack's PyPDF conversion method is this:
def _default_convert(self, reader: "PdfReader") -> Document:
    text = "\f".join(page.extract_text() for page in reader.pages)
    return Document(content=text)
This also has the same limitation: we don't add the \n\n. Complicated logic could be added to manually insert the delimiters, and I added some basic heuristics to test it myself (if the previous line ends with a period and the new line starts with an uppercase letter, treat it as a paragraph break). It worked fine, but it was ugly.
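Roughly, the heuristic described above would look something like this (a reconstruction of the idea, not the exact code that was tested):
def add_paragraph_breaks(text: str) -> str:
    """Heuristically upgrade single newlines to paragraph breaks."""
    lines = text.split("\n")
    out = [lines[0]] if lines else []
    for prev, curr in zip(lines, lines[1:]):
        # Treat "sentence-ending period followed by a capitalized line"
        # as a paragraph boundary
        if prev.rstrip().endswith(".") and curr.lstrip()[:1].isupper():
            out.append("\n\n" + curr)
        else:
            out.append("\n" + curr)
    return "".join(out)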
I then looked more into extract_text(), and it seems we are using the legacy extractor. pypdf has an experimental version that can be enabled by passing extraction_mode="layout" to extract_text(), and it works out of the box for my test without any additional changes needed... Just another data point for possible solutions!
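For reference, trying layout mode directly with pypdf looks roughly like this (a sketch; the file name is a placeholder and the mode is documented as experimental):
from pypdf import PdfReader

reader = PdfReader("rules.pdf")
# The experimental layout mode tries to preserve the visual layout,
# including blank lines between paragraphs
text = "\f".join(
    page.extract_text(extraction_mode="layout") for page in reader.pages
)
print(repr(text))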
Of course! My current project is explicitly dependent on retrieving exact paragraphs, so I had to fix this one way or another haha. Hoping for a permanent solution when time allows.
Describe the bug
When using DocumentSplitter with a PDF and split_by="passage", the result is always one document. This is using pypdf.
Expected behavior
My understanding is that it splits based on at least two line breaks (\n\n).
Additional context
When I tested using plain text, it seems to split correctly.
To Reproduce
dir = '...'
files = [
    {"filename": "rules.pdf", "meta": {"split_by": "passage", "split_length": 1, "split_overlap": 0, "split_threshold": 0}},
    {"filename": "rules.txt", "meta": {"split_by": "passage", "split_length": 1, "split_overlap": 0, "split_threshold": 0}},
]
for file in files:
    # set the filepath
System: