sfllaw opened 5 months ago
I am happy to submit a patch, if you accept contributions. I would suggest having a command-line option like `--normalize-unicode=NFKC`, which should be the default. Obviously, the documentation will have to describe why you would want to pick NFC over NFKC. I think it shouldn't offer NFD normalization, but if someone has a valid use case, it can easily be added later.
I am also open to other command-line flags, if you think that users learning about Unicode normalization is too much to impose on them.
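For context on what the two forms actually do, here is a minimal sketch using Python's standard `unicodedata` module (my own illustration, not OCRmyPDF code):

```python
import unicodedata

word = "1½"  # as recognized in an ocrx_word

# NFC only performs canonical composition; ½ (U+00BD) is left intact.
print(unicodedata.normalize("NFC", word))   # → '1½'

# NFKC also applies compatibility decompositions, expanding ½ into
# DIGIT ONE, FRACTION SLASH (U+2044), DIGIT TWO.
print(unicodedata.normalize("NFKC", word))  # → '11⁄2'
```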
@sfllaw I appreciate the suggestion, but I think what will really be needed is to insert markup into the PDF that allows competent PDF readers to see what is going on, and then to test whether that helps sufficiently.
If you want to attempt this, the relevant portion of the spec is below:
@jbarlow83 If I understand correctly, you are suggesting that we use `ActualText` to clobber the invisible GlyphLessFont text that Tesseract produces with the NFC normalization? That is, for the above example with the scanned 1½, OCRmyPDF would produce something like:
```
/Span <</ActualText (1½)>> BDC
(11/2) Tj
EMC
```
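As a rough illustration of assembling such a fragment (not OCRmyPDF's actual emitter; the helper name and the lack of string escaping are my own simplifications, and both arguments must already be bytes that are valid inside a PDF string literal):

```python
def actualtext_span(actual_text: bytes, shown_text: bytes) -> bytes:
    """Wrap a Tj text-showing operator in a /Span marked-content
    sequence carrying /ActualText. No escaping of ( ) \\ is done here."""
    return (
        b"/Span <</ActualText (" + actual_text + b")>> BDC\n"
        b"(" + shown_text + b") Tj\n"
        b"EMC\n"
    )

# pdfdoc agrees with Latin-1 for U+00BD, so latin-1 stands in here.
fragment = actualtext_span("1½".encode("latin-1"), b"11/2")
```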
Maybe I am misunderstanding your proposal, because it seems like this will depend on how the PDF reader deals with `ActualText`? I have tried this in Evince, Okular, qpdfview, xpdf, and Chrome, and none of them match 1/2 when searching, because the invisible text has been overridden. Because of this, I can't think of an advantage over skipping NFKC altogether and rendering the NFC version in GlyphLessFont.
Does `ActualText` work as you expect in your PDF reader? Did you have a different example in mind?
Also, it looks like some PDF readers don't handle non-trivial `ActualText` correctly, but I have not investigated this deeply: https://github.com/ho-tex/accsupp/issues/2
A few key points here:

- When using parentheses in a content stream, the character IDs must be encoded in pdfdoc (PDFDocEncoding). However, ½ is U+00BD, which is `b'\xbd'` in pdfdoc. If you encode a content stream in UTF-8, ½ would be encoded as `b'\xc2\xbd'`, which is not equivalent.
- You can use `'½'.encode('pdfdoc')` to perform the conversion to bytes.
- Does the hexdump show `/ActualText(... 31 BD ...)` or `/ActualText(... 31 C2 BD ...)`? If the latter, that would explain why the text was not recognized: it looks like `'1Â½'` in pdfdoc.
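The byte-level difference can be checked from the standard library alone; pdfdoc agrees with Latin-1 for this code point, so `latin-1` stands in for the `pdfdoc` codec in this sketch:

```python
# ½ is U+00BD: one byte in pdfdoc/Latin-1, two bytes in UTF-8.
assert "½".encode("latin-1") == b"\xbd"
assert "½".encode("utf-8") == b"\xc2\xbd"

# Decoding the UTF-8 bytes as a one-byte-per-character encoding
# yields the mojibake a pdfdoc-expecting reader would see:
assert b"\xc2\xbd".decode("latin-1") == "\u00c2\u00bd"  # 'Â½'
```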
In reference to the final point, GlyphLessFont defines itself as having a 1:1 mapping of Unicode to character ID, and then maps all character IDs to glyph ID 1, which is a blank cell. `ActualText` is supposed to supply an alternate list of character IDs that are used for searching and copy-pasting, but not for rendering, such as in the example from the PDF reference manual, where hyphenation is present in the rendered text but eliminated in `ActualText`.
All that said, it's quite possible most PDF viewers don't respect ActualText even when it's properly encoded.
Thank you for all the details about the intricate Unicode handling in PDFs! However, I’d like to pop the stack and talk about the bigger picture.
When OCRmyPDF encounters the ocrx_word `1½` in an hOCR file, it normalizes that to `11/2`, which is a much larger number! You suggested that `ActualText` markup would allow competent PDF readers to see what is going on, but I don't understand how that would work better than doing NFC normalization instead of NFKC. Since OCRmyPDF already typesets invisible text, why do we need to add `ActualText` on top of it?
I’d really like to solve this bug in a way that you’d be happy with. Could you please help me understand your proposal?
Describe the proposed feature
`HocrTransform.normalize_text` normalizes text using the NFKC[^1] compatibility algorithm:

https://github.com/ocrmypdf/OCRmyPDF/blob/6895c2d70fa03ec4d57e779110e07fd50cf4c489/src/ocrmypdf/hocrtransform/_hocr.py#L158-L161
As explained in #1272, it does this so that searching for `Bauernstube` will match `Bauernſtube` in naïve PDF readers. Unfortunately, this means that copy-pasting text out of the OCRed PDF will produce the former text, which will not match the rastered image that the user sees.
If there were an option to choose between the NFKC and NFC normalization forms, then the author could opt to render the text more faithfully. In my case, I was surprised that `1½` was normalized to `11/2`, which is a very different number!

[^1]: Unicode® Standard Annex #15: Unicode Normalization Forms
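A minimal sketch of what the requested knob might look like, assuming a hypothetical `form` parameter plumbed through from a `--normalize-unicode` option (the parameter name and default are my invention; OCRmyPDF's `normalize_text` does not currently take such an argument):

```python
import unicodedata

def normalize_text(text: str, form: str = "NFKC") -> str:
    """Hypothetical variant: form would come from --normalize-unicode,
    accepting "NFKC" (current behavior) or "NFC"."""
    return unicodedata.normalize(form, text)

assert normalize_text("1½") == "11\u20442"   # NFKC: search-friendly
assert normalize_text("1½", "NFC") == "1½"   # NFC: faithful to the scan
```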