Open tanchangde opened 4 weeks ago
I suspect that LlamaParse calls remote services, so it isn't strictly local parsing.
Indeed, that's the case. My more fundamental need is to offload processing to third-party compute rather than being confined to local hardware resources.
For a local deployment, you could try increasing the number of task executors to accelerate document parsing. By default there is only a single worker, which is slow for parsing tasks.
Describe your problem
Hey Team,
Hope you're all doing great!
I've been thinking about our current setup and noticed that parsing on our local deployment is pretty slow on CPU alone. I was wondering if we've ever considered integrating LlamaParse? I've read that it's supposed to be much faster and could really help speed things up for us.
Would love to hear your thoughts on this.
Thanks a bunch!