metachris / pdfx

Extract text, metadata and references (pdf, url, doi, arxiv) from PDF. Optionally download all referenced PDFs.
http://www.metachris.com/pdfx
Apache License 2.0

Recursive URL extraction from PDFs - feature request #54

Open LostAccount opened 2 years ago

LostAccount commented 2 years ago

Hi

I use `pdfx -v path_to_pdf_file` to gather URLs from a PDF. This is great on its own.

I would love to see pdfx gain support for URL extraction across a directory tree: recursing through a directory's subdirectories and skipping any files that are not PDFs along the way.

Right now I use `find /path/to/folder/ -type f -name '*.pdf' -exec pdfx -v {} \; > foo.txt`

This works well (someone more skilled than I am helped me put the command together), but I wonder whether a recursive mode could be integrated directly into pdfx, or whether that would be redundant, since Unix itself provides the tools to accomplish the same thing, as the command above shows.
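For illustration, a recursive mode inside pdfx might look roughly like the minimal Python sketch below. This is only a sketch under assumptions: `find_pdfs` is a hypothetical helper I made up, and `pdfx.PDFx(...).get_references_as_dict()` is my understanding of pdfx's Python API, which may not be exact.

```python
from pathlib import Path

def find_pdfs(root):
    """Yield every PDF file under root (recursively), skipping non-PDFs.

    Matching is case-insensitive on the .pdf suffix, so .PDF files
    are found too.
    """
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix.lower() == ".pdf":
            yield path

def extract_urls(root):
    """Map each PDF path under root to the URLs pdfx extracts from it.

    The pdfx calls below are an assumption about the library's Python
    API and may need adjusting against the real interface.
    """
    import pdfx  # imported lazily; requires pdfx to be installed
    urls = {}
    for pdf_path in find_pdfs(root):
        refs = pdfx.PDFx(str(pdf_path)).get_references_as_dict()
        urls[str(pdf_path)] = refs.get("url", [])
    return urls
```

The directory walk is the part that would replace the `find` invocation; the per-file extraction step stays exactly what `pdfx -v` already does today.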

I really like this tool and am using it for a personal project of mine that I will share freely once it becomes voluminous enough. It's essentially a filetype miner/downloader that pulls specific file types from the Wayback Machine, a digital archaeology tool of sorts. I use old books and magazines from archive.org as sources for URLs, and those URLs are then used to query a Wayback Machine downloader to fetch the files.

Thanks for this really easy to use and powerful tool!