johbar / text-extraction-service

A simple Golang service for extracting textual content from documents
GNU General Public License v3.0

Search ideas #2

Closed gedw99 closed 2 months ago

gedw99 commented 2 months ago

Hey @johbar

This is really great !!

I use NATS and this looks very powerful.

https://github.com/dagu-org/dagu might be a nice way to run it. I'm still playing with this.

johbar commented 2 months ago

Hey @gedw99,

glad to hear you find this useful. I put quite some effort into it. I would love to know what your use case is. Mine is indexing a few hundred thousand documents in Apache Solr as fast as possible. Not sure though if a GitHub issue is the best place for a conversation.

I hadn't heard of Dagu before. Not sure if it is of any use to me.

gedw99 commented 2 months ago

Hey @johbar

My use case is large silos of PDFs and Docs from which you want to extract knowledge.

I work for the EU on Open Science, and scientists have all these PDFs full of material. Just making that searchable is one thing, but you can also do other things with it, like applying some AI.

As for search indexing, there is Zinc, which is pure Golang. I have used it and it's pretty good. There are good Rust ones too, but I try to stay pure Golang as much as possible.

https://github.com/zincsearch/zincsearch
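
Rough sketch of what pushing extracted text into Zinc could look like from Go. The index name, credentials, port and the `/api/{index}/_doc` route are assumptions based on the ZincSearch docs and would need adjusting to a real deployment:

```go
// Hypothetical sketch: indexing extracted text into ZincSearch over its HTTP API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Doc is the JSON body stored in the index; the field names are arbitrary.
type Doc struct {
	Title   string `json:"title"`
	Content string `json:"content"`
}

// indexDoc POSTs one document to ZincSearch. URL, index name and credentials
// are placeholders.
func indexDoc(zincURL, index, user, pass string, d Doc) error {
	body, err := json.Marshal(d)
	if err != nil {
		return err
	}
	url := fmt.Sprintf("%s/api/%s/_doc", zincURL, index)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.SetBasicAuth(user, pass)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("zincsearch returned %s", resp.Status)
	}
	return nil
}

func main() {
	doc := Doc{Title: "example.pdf", Content: "text extracted from the PDF"}
	if err := indexDoc("http://localhost:4080", "documents", "admin", "changeme", doc); err != nil {
		panic(err)
	}
}
```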

johbar commented 2 months ago

Hey @gedw99,

thanks for sharing. TES is intended to be a microservice that's agnostic about what clients want to do with the content it returns. So whether you ingest the text into Solr, OpenSearch or ZincSearch really doesn't matter to TES (just as it doesn't to Tika). For this reason I will close this issue. Nothing to do here. (No intention to stop our conversation though. Feel free to go on commenting!)
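
To illustrate the split, here is a minimal sketch of how a client of TES might look. The `/extract` endpoint, the request shape and the `Indexer` interface are made up for illustration; they are not TES's actual API:

```go
// Hypothetical sketch: TES only turns a document into text, and the client
// decides where that text goes.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Indexer is whatever backend the client chooses: Solr, OpenSearch, ZincSearch, ...
type Indexer interface {
	Index(id string, text string) error
}

// extract sends a document to a TES instance and returns the plain text.
// The route and request shape are assumptions for illustration.
func extract(tesURL string, doc []byte) (string, error) {
	resp, err := http.Post(tesURL+"/extract", "application/pdf", bytes.NewReader(doc))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("TES returned %s", resp.Status)
	}
	text, err := io.ReadAll(resp.Body)
	return string(text), err
}

// ingest reads a file, extracts its text via TES and hands it to the chosen backend.
func ingest(tesURL, path string, idx Indexer) error {
	doc, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text, err := extract(tesURL, doc)
	if err != nil {
		return err
	}
	// TES is done here; which search engine receives the text is the client's call.
	return idx.Index(path, text)
}

// stdoutIndexer is a stand-in backend used only to make the example runnable.
type stdoutIndexer struct{}

func (stdoutIndexer) Index(id, text string) error {
	fmt.Printf("%s: %d characters extracted\n", id, len(text))
	return nil
}

func main() {
	// The URL and file name are placeholders.
	if err := ingest("http://localhost:8080", "sample.pdf", stdoutIndexer{}); err != nil {
		fmt.Println("error:", err)
	}
}
```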

One remark about scientific PDFs: Parsing tables and figures might be something you need in this domain. That really isn't easy. There are some tools in the Python universe that might be useful though. See TabulaPDF, too.

gedw99 commented 2 months ago

thanks @johbar

Yep close it :)

Extracting tabular data from PDFs is a really hard problem that I am not going to attempt.

I am thinking about training the OCR on certain archetypes. For example, certain paper/PDF layouts that turn up often can be trained for. It's pretty similar to web scraping, where you need to tell the scraper where the paths are; in this case the "paths" are the X and Y coordinates that Tesseract gives back. See the sketch below.
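
Roughly what I have in mind, assuming the OCR step (e.g. Tesseract) already returns words with bounding boxes. The archetype, regions and field names are invented for illustration:

```go
// Sketch of mapping OCR bounding boxes to named fields per document archetype.
package main

import (
	"fmt"
	"strings"
)

// Word is one OCR result with its bounding box, as e.g. Tesseract's TSV output provides.
type Word struct {
	Text                string
	X, Y, Width, Height int
}

// Region names a rectangular area of the page for a given archetype,
// much like a selector path in web scraping.
type Region struct {
	Field      string
	X, Y, W, H int
}

// Archetype is the "template" for one frequently seen document layout.
type Archetype struct {
	Name    string
	Regions []Region
}

// inside reports whether the word's top-left corner falls within the region.
func inside(w Word, r Region) bool {
	return w.X >= r.X && w.X < r.X+r.W && w.Y >= r.Y && w.Y < r.Y+r.H
}

// Extract groups OCR words into named fields according to the archetype's regions.
func (a Archetype) Extract(words []Word) map[string]string {
	fields := map[string][]string{}
	for _, w := range words {
		for _, r := range a.Regions {
			if inside(w, r) {
				fields[r.Field] = append(fields[r.Field], w.Text)
			}
		}
	}
	out := map[string]string{}
	for k, v := range fields {
		out[k] = strings.Join(v, " ")
	}
	return out
}

func main() {
	// Hypothetical archetype for an often-seen layout; coordinates are made up.
	invoice := Archetype{
		Name: "invoice-v1",
		Regions: []Region{
			{Field: "title", X: 0, Y: 0, W: 600, H: 80},
			{Field: "total", X: 400, Y: 700, W: 200, H: 60},
		},
	}
	words := []Word{
		{Text: "Invoice", X: 20, Y: 10},
		{Text: "2024-17", X: 120, Y: 10},
		{Text: "42.00", X: 450, Y: 720},
	}
	fmt.Println(invoice.Extract(words))
}
```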

I will close this.