internetarchive / archive-pdf-tools

Fast PDF generation and compression. Deals with millions of pages daily.
https://archive-pdf-tools.readthedocs.io/en/latest/
GNU Affero General Public License v3.0

More errors with the current version: I can't get it to work with an hOCR file coming from pdftotree to extract the existing searchable text from a PDF #37

Closed rmast closed 2 years ago

rmast commented 2 years ago

I now work with an hOCR file coming from pdftotree, to extract the existing searchable text from a PDF, as suggested at the bottom of this issue: https://github.com/ocropus/hocr-tools/issues/117

```
recode_pdf --from-imagestack './2022-01-08*.tif' --hocr-file anonymized.hocr --dpi 400 --bg-downsample 3 --mask-compression jbig2 -o 2022-01-08a.pdf
Traceback (most recent call last):
  File "/usr/local/bin/recode_pdf", line 4, in <module>
    __import__('pkg_resources').run_script('archive-pdf-tools==1.4.11', 'recode_pdf')
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 667, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1463, in run_script
    exec(code, namespace, namespace)
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/EGG-INFO/scripts/recode_pdf", line 288, in <module>
    res = recode(args.from_pdf, args.from_imagestack, args.dpi, args.hocr_file,
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/internetarchivepdf/recode.py", line 741, in recode
    outdoc.save(outfile, deflate=True, pretty=True)
  File "/usr/local/lib/python3.8/dist-packages/PyMuPDF-1.19.2-py3.8-linux-x86_64.egg/fitz/fitz.py", line 4416, in save
    raise ValueError("cannot save with zero pages")
ValueError: cannot save with zero pages
```

```
recode_pdf --from-pdf Afbeeldingen/scantailorin/out/2022-01-08a.pdf --hocr-file anonymized.hocr --dpi 400 --bg-downsample 3 --mask-compression jbig2 -o 220108uitvoer.pdf
Traceback (most recent call last):
  File "/usr/local/bin/recode_pdf", line 4, in <module>
    __import__('pkg_resources').run_script('archive-pdf-tools==1.4.11', 'recode_pdf')
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 667, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1463, in run_script
    exec(code, namespace, namespace)
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/EGG-INFO/scripts/recode_pdf", line 288, in <module>
    res = recode(args.from_pdf, args.from_imagestack, args.dpi, args.hocr_file,
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/internetarchivepdf/recode.py", line 741, in recode
    outdoc.save(outfile, deflate=True, pretty=True)
  File "/usr/local/lib/python3.8/dist-packages/PyMuPDF-1.19.2-py3.8-linux-x86_64.egg/fitz/fitz.py", line 4416, in save
    raise ValueError("cannot save with zero pages")
ValueError: cannot save with zero pages
```

Even if I leave out the hOCR file, in the hope that the searchable text would be taken from the input PDF itself, there is still an error:

```
recode_pdf --from-pdf Afbeeldingen/scantailorin/out/2022-01-08a.pdf -o 220108uitvoer.pdf
Traceback (most recent call last):
  File "/usr/local/bin/recode_pdf", line 4, in <module>
    __import__('pkg_resources').run_script('archive-pdf-tools==1.4.11', 'recode_pdf')
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 667, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1463, in run_script
    exec(code, namespace, namespace)
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/EGG-INFO/scripts/recode_pdf", line 288, in <module>
    res = recode(args.from_pdf, args.from_imagestack, args.dpi, args.hocr_file,
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/internetarchivepdf/recode.py", line 628, in recode
    create_tess_textonly_pdf(hocr_file, tess_tmp_path, in_pdf=in_pdf,
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/internetarchivepdf/recode.py", line 110, in create_tess_textonly_pdf
    for idx, hocr_page in enumerate(hocr_iter):
  File "/usr/local/lib/python3.8/dist-packages/archive_hocr_tools-1.1.13-py3.8.egg/hocr/parse.py", line 42, in hocr_page_iterator
    fp.seek(0)
AttributeError: 'NoneType' object has no attribute 'seek'
```

I anonymized the hOCR with the vim substitution `:%s/>.*<\/span>/>bla<\/span>`. anonymized.zip
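For reference, roughly the same substitution in Python (a sketch, not part of any tool here; unlike vim's greedy `.*`, the `[^<]*` match leaves multiple spans on one line intact):

```python
import re

def anonymize_hocr(hocr_text):
    r"""Replace the text of every span with 'bla', keeping the class and
    bounding-box attributes (roughly what :%s/>.*<\/span>/>bla<\/span> does)."""
    return re.sub(r'>[^<]*</span>', '>bla</span>', hocr_text)

line = '<span class="ocrx_word" title="bbox 10 10 50 30; x_wconf 96">secret</span>'
print(anonymize_hocr(line))
```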

MerlijnWajer commented 2 years ago

Thanks for the report, I'll take a look as to why that hOCR is not accepted.

Somewhat related, I've been working on my own PDF -> hOCR converter based on PyMuPDF text extraction (so it can stay pure Python + mupdf): https://github.com/internetarchive/archive-hocr-tools/blob/master/bin/pdf-to-hocr - not production-ready yet, though.
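The idea can be sketched in pure Python. PyMuPDF's `page.get_text("words")` returns `(x0, y0, x1, y1, word, ...)` tuples with coordinates in PDF points (72 per inch); those can be turned into hOCR word spans. The helper below is hypothetical, not the actual pdf-to-hocr code:

```python
def words_to_hocr_spans(words, dpi=300):
    """Turn PyMuPDF-style word tuples (x0, y0, x1, y1, text), with
    coordinates in PDF points, into hOCR ocrx_word spans with pixel
    bounding boxes at the target dpi.  Hypothetical helper for
    illustration only."""
    scale = dpi / 72.0
    spans = []
    for x0, y0, x1, y1, text in words:
        bx0, by0, bx1, by1 = (round(v * scale) for v in (x0, y0, x1, y1))
        spans.append('<span class="ocrx_word" title="bbox %d %d %d %d">%s</span>'
                     % (bx0, by0, bx1, by1, text))
    return spans

print(words_to_hocr_spans([(36.0, 36.0, 72.0, 48.0, 'Hello')], dpi=144))
```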

MerlijnWajer commented 2 years ago

It looks like it refuses to parse the hOCR file you shared because the file does not declare the required XML namespace. I will see what I can do to work around this, since if I don't prefix the namespace, it will not parse documents that do declare it.

The other problem seems to be that this file uses ocrx_block as opposed to ocr_par. I suppose I'll want to support both in the XPath queries, then.
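Both issues can be handled with namespace-agnostic matching. A minimal standard-library sketch of the idea (the real parser uses XPath queries, per the comment above, so this is only illustrative):

```python
import xml.etree.ElementTree as ET

def localname(tag):
    # '{http://www.w3.org/1999/xhtml}span' -> 'span'; 'span' stays 'span'
    return tag.rsplit('}', 1)[-1]

def iter_words(root):
    """Yield ocrx_word spans whether or not the document declares the
    XHTML namespace, regardless of ocr_par vs ocrx_block containers."""
    for el in root.iter():
        if localname(el.tag) == 'span' and el.get('class') == 'ocrx_word':
            yield el

with_ns = ET.fromstring('<html xmlns="http://www.w3.org/1999/xhtml">'
                        '<p class="ocr_par"><span class="ocrx_word">a</span></p></html>')
no_ns = ET.fromstring('<html><div class="ocrx_block">'
                      '<span class="ocrx_word">b</span></div></html>')
print([w.text for w in iter_words(with_ns)], [w.text for w in iter_words(no_ns)])
```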

MerlijnWajer commented 2 years ago

I think this is fixed in https://github.com/internetarchive/archive-hocr-tools/commit/6cdb14dbe45b7ab3f5c0c0ad50bdc2e2aef69581 - I will do a bit more testing before I make a release, though.

MerlijnWajer commented 2 years ago

Thanks for the report!

rmast commented 2 years ago

> I think this is fixed in internetarchive/archive-hocr-tools@6cdb14d

@MerlijnWajer You point to a commit in the archive-hocr-tools repo as the solution for this issue in the archive-pdf-tools repo. Will archive-pdf-tools support hOCR files coming from different sources, or will you just make it read the text from an existing searchable PDF?

rmast commented 2 years ago

@MerlijnWajer I also made an hOCR file via another route: `djvu2hocr ~/Afbeeldingen/2022-01-08.djvu >220108.hocr` (220108.zip). This one gives:

```
recode_pdf --from-imagestack ~/Afbeeldingen/211115-000ga.tif --hocr-file ~/jwilk/ocrodjvu/220108.hocr --dpi 300 --bg-downsample 3 --mask-compression jbig2 -o 2022-01-08a.pdf
Traceback (most recent call last):
  File "/usr/local/bin/recode_pdf", line 4, in <module>
    __import__('pkg_resources').run_script('archive-pdf-tools==1.4.11', 'recode_pdf')
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 667, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1463, in run_script
    exec(code, namespace, namespace)
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/EGG-INFO/scripts/recode_pdf", line 288, in <module>
    res = recode(args.from_pdf, args.from_imagestack, args.dpi, args.hocr_file,
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/internetarchivepdf/recode.py", line 628, in recode
    create_tess_textonly_pdf(hocr_file, tess_tmp_path, in_pdf=in_pdf,
  File "/usr/local/lib/python3.8/dist-packages/archive_pdf_tools-1.4.11-py3.8-linux-x86_64.egg/internetarchivepdf/recode.py", line 210, in create_tess_textonly_pdf
    word_data = hocr_page_to_word_data(hocr_page, font_scaler)
  File "/home/robert/.local/lib/python3.8/site-packages/hocr/parse.py", line 185, in hocr_page_to_word_data
    conf = int(X_WCONF_REGEX.search(word.attrib['title']).group(1).split()[0])
AttributeError: 'NoneType' object has no attribute 'group'
```

MerlijnWajer commented 2 years ago

archive-pdf-tools relies on archive-hocr-tools to parse hOCR files, so what I will do is:

  1. release new archive-hocr-tools
  2. release new archive-pdf-tools that depends on the newer archive-hocr-tools

As for your other question: keeping the text layer intact from an existing PDF is another matter. There are a few things I ultimately want to do there:

  1. I want to support just compressing the images in a PDF and not touching anything else (preserving text layers). This is not currently what --from-pdf does; it just reads one image per page and recompresses it (and it assumes you have hOCR for it).
  2. Be able to create hOCR from the text layers of existing PDFs and use that both to (re)generate the text layer and as input for the MRC algorithm.

For this request, I think https://github.com/internetarchive/archive-pdf-tools/issues/28 is a better issue to comment on and discuss in.

MerlijnWajer commented 2 years ago

> @MerlijnWajer I also made an hOCR file via another route: `djvu2hocr ~/Afbeeldingen/2022-01-08.djvu >220108.hocr` (220108.zip). This one gives: [...] `AttributeError: 'NoneType' object has no attribute 'group'`

Ok, I'll fix that bug as well, thanks, reopening.

rmast commented 2 years ago

x_wconf is the word confidence. You don't get that back from a searchable PDF.

rmast commented 2 years ago

When I comment out the confidence line in archive-hocr-tools/hocr/parse.py (line 186):

```python
conf = 100 # int(X_WCONF_REGEX.search(word.attrib['title']).group(1).split()[0])
```

the conversion takes place.
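A less invasive fix is to treat x_wconf as optional and fall back to a default confidence. A sketch (the regex name mirrors the one in parse.py, but the pattern here is an assumption):

```python
import re

# Assumed pattern; the real X_WCONF_REGEX in hocr/parse.py may differ.
X_WCONF_REGEX = re.compile(r'x_wconf\s+(\d+)')

def word_confidence(title, default=100):
    """Return the x_wconf value from an hOCR title attribute, or a default
    when the (optional) property is missing, as with hOCR produced by
    djvu2hocr or extracted from a searchable PDF."""
    m = X_WCONF_REGEX.search(title)
    return int(m.group(1)) if m else default

print(word_confidence('bbox 1 2 3 4; x_wconf 96'), word_confidence('bbox 1 2 3 4'))
```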

The hOCR file coming from pdftotree uses a different scale than the one coming from djvu2hocr. The djvu2hocr route gives an MRC PDF with the characters in the right positions. The pdftotree route (whose dimensions are a factor 400/72 = 5.556 smaller than those of the other file) puts the text only in the upper-left corner of the PDF. So the mapping of the hOCR onto the images should take this 72 dpi positional convention of PDF into account.

Neither file has the scan_res property. pdftotree also doesn't order the words into readable lines, so the djvu2hocr route seems more stable.
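The scaling described above amounts to one multiplication per coordinate. A hypothetical fix-up, assuming the pdftotree coordinates are plain PDF points:

```python
def points_to_pixels(bbox, image_dpi=400):
    """Convert a bbox in PDF points (72 per inch) to image pixels at the
    scan resolution.  Without this 400/72 ~= 5.556x scaling the text layer
    collapses into the upper-left corner of a 400 dpi page image."""
    scale = image_dpi / 72.0
    return [round(v * scale) for v in bbox]

# a word box at points (100, 200)-(160, 215) on a 400 dpi scan:
print(points_to_pixels([100, 200, 160, 215]))
```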

rmast commented 2 years ago

Your pdf-to-hocr does give a result whose text is readable when copied from the PDF, but with too many line endings. During selection the selected text is a bit slanted.

rmast commented 2 years ago

The resulting PDF from recode_pdf with the options from the main README is more than 3 times as big as the PDF produced by DjVuSolo 3.1/DjVuToy.

MerlijnWajer commented 2 years ago

(I'll just answer in English for the other readers :-) )

@rmast - sorry for the delay, I did read your messages earlier. Yes, the problem is indeed that it expects x_wconf to be present even though it is optional; I will fix that.

Regarding the text selection: it is more or less the same code that Tesseract uses, but it is possible too many line endings are added in the conversion. Optimal would be not having to re-create the text layer at all, as discussed earlier.

Regarding the compressed size, if you can share the files I can look to see if something can be improved.

rmast commented 2 years ago

The files were grey-level scans of a black-and-white book, meant to end up only as a thresholded JBIG2 image. DjVuSolo does some heuristic optimizations.

MerlijnWajer commented 2 years ago

I've released archive-hocr-tools 1.1.15 that should fix the word confidence problems as well. Please let me know if it works.

rmast commented 2 years ago

I've pulled the newest version, then built and installed it. It now runs to completion, as it did with my dirty fix. However, I get this warning for every page:

```
Deprecation: 'getImageList' removed from class 'Page' after v1.19.0 - use 'get_images'.
Deprecation: 'extractImage' removed from class 'Document' after v1.19.0 - use 'extract_image'.
```
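The fix is a mechanical rename to the current PyMuPDF snake_case names (`get_images`, `extract_image`). A small shim, shown here with stand-in classes instead of real fitz objects, lets code support both API generations:

```python
def get_images(page):
    """Call the new snake_case method when available, else the old name."""
    fn = getattr(page, 'get_images', None) or getattr(page, 'getImageList')
    return fn()

class OldPage:                     # stand-in for a pre-1.19 fitz.Page
    def getImageList(self):
        return [(7, 'old-style')]

class NewPage:                     # stand-in for a current fitz.Page
    def get_images(self):
        return [(7, 'new-style')]

print(get_images(OldPage()), get_images(NewPage()))
```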

MerlijnWajer commented 2 years ago

Ah, that's likely in the --from-pdf code path? I still need to give that some more attention. I'll fix that here and make sure it's fixed in the next release.

MerlijnWajer commented 2 years ago

See 07ff850ddc1cba295114717e35ebc70dcf79eb5c