infiniflow / ragflow

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
https://ragflow.io
Apache License 2.0

[Question]: about Integrating LlamaParse #2008

Open tanchangde opened 4 weeks ago

tanchangde commented 4 weeks ago

Describe your problem

Hey Team,

Hope you're all doing great!

I've been looking at our current setup and noticed that parsing on our local deployment is pretty slow on CPU alone. Have you ever considered integrating LlamaParse? I've read that it's supposed to be much faster and could really help speed things up for us.

Would love to hear your thoughts on this.

Thanks a bunch!

KevinHuSh commented 4 weeks ago

I guess that LlamaParse calls remote services, so it's not literally local parsing.

tanchangde commented 4 weeks ago

Indeed, that's the case. My more fundamental need is to be able to leverage third-party computational power for processing, rather than being confined to local hardware resources.
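
For reference, offloading parsing to LlamaParse means sending documents to LlamaCloud's hosted API rather than parsing them locally. Below is a minimal standalone sketch using the `llama-parse` Python client (not wired into RAGFlow); the API key and file name are placeholders:

```python
# Minimal sketch: parse a document remotely via the hosted LlamaParse service.
# Requires: pip install llama-parse, plus a LlamaCloud API key.
from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",       # placeholder LlamaCloud API key
    result_type="markdown",  # return the parsed content as markdown
)

# The file is uploaded and parsed on LlamaCloud's side,
# so local CPU is no longer the bottleneck.
documents = parser.load_data("example.pdf")
print(documents[0].text[:500])
```

The trade-off is exactly what was raised above: parsing is faster because it runs on third-party compute, but the documents leave the local deployment.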

yingfeng commented 3 weeks ago

For local deployment, you could try to increase the number of task executors to accelerate document parsing. By default there's only a single worker, which is slow for parsing tasks.
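
As a rough illustration of running more task executors, the sketch below launches several worker processes of `rag/svr/task_executor.py`. The script path, worker-id argument, and worker count are assumptions to verify against your deployed version; Docker setups typically control the worker count in the container's entrypoint instead.

```python
# Sketch: start several task executor workers instead of the default single one.
# Assumes the RAGFlow source layout with rag/svr/task_executor.py taking a
# worker id as its first argument; check this against your deployment.
import subprocess
import sys

NUM_WORKERS = 4  # hypothetical worker count; size to your CPU cores

procs = [
    subprocess.Popen([sys.executable, "rag/svr/task_executor.py", str(worker_id)])
    for worker_id in range(NUM_WORKERS)
]

# Keep the launcher alive until every worker exits.
for p in procs:
    p.wait()
```

Each extra worker picks up parsing tasks in parallel, so throughput scales roughly with the number of workers until the CPU is saturated.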