rwynn / monstache

a go daemon that syncs MongoDB to Elasticsearch in realtime. you know, for search.
https://rwynn.github.io/monstache-site/
MIT License

meta collection alternative #44

Closed · benan789 closed this issue 6 years ago

benan789 commented 6 years ago

Does the meta collection serve any purpose other than storing the routing info? If not, wouldn't it be better to just query Elasticsearch for the routing info?

rwynn commented 6 years ago

When you customize routing, you cannot do a get without the routing info. Do you need deletes in mongo to propagate to ES? If not, then you don't need meta. Inserts and updates are fine because your JS sets the routing. On a delete, all we have from mongo is the _id, which doesn't give us the routing.
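
A minimal sketch of that delete with the olivere/elastic Go client (index, type, and routing values here are assumptions, not monstache's actual code): without the Routing call, Elasticsearch hashes the _id alone and targets the wrong shard.

```go
package example

import (
	"context"

	elastic "gopkg.in/olivere/elastic.v5"
)

// deleteWithRouting sketches why the meta collection exists: a delete
// against a custom-routed index must repeat the routing value used at
// index time, or Elasticsearch targets the wrong shard. The oplog
// delete event only supplies the _id, so the routing has to be looked
// up somewhere. Index and type names are hypothetical.
func deleteWithRouting(ctx context.Context, client *elastic.Client, id, routing string) error {
	_, err := client.Delete().
		Index("myindex").
		Type("doc").
		Id(id).           // all a mongo delete event gives us
		Routing(routing). // must be recovered from the meta collection (or elsewhere)
		Do(ctx)
	return err
}
```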

rwynn commented 6 years ago

See https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html#_making_a_routing_value_required
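
For illustration, the same requirement expressed from Go (a sketch reusing the client type from the snippet above; index and type names are hypothetical): with _routing marked required, ES rejects any index, get, or delete request that omits a routing value instead of silently targeting the wrong shard.

```go
// createIndexWithRequiredRouting sketches the mapping from the linked
// docs: mark _routing as required so requests without it fail loudly.
// Index and type names are hypothetical.
func createIndexWithRequiredRouting(ctx context.Context, client *elastic.Client) error {
	body := `{
	  "mappings": {
	    "doc": {
	      "_routing": { "required": true }
	    }
	  }
	}`
	_, err := client.CreateIndex("myindex").BodyString(body).Do(ctx)
	return err
}
```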

benan789 commented 6 years ago

What about an ids search? It's probably not as fast as a get, but it shouldn't be much slower.
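
For context, that would work: an ids query fans out to every shard, so it needs no routing, and each hit comes back with its _routing in the hit metadata. A hedged sketch with the same assumed client and index names:

```go
// lookupRouting sketches the ids-search alternative: search by _id
// across all shards, then read the routing off the returned hit.
// One search round-trip instead of a routed get.
func lookupRouting(ctx context.Context, client *elastic.Client, id string) (string, error) {
	res, err := client.Search().
		Index("myindex").
		Query(elastic.NewIdsQuery("doc").Ids(id)).
		Size(1).
		Do(ctx)
	if err != nil {
		return "", err
	}
	if res.Hits == nil || len(res.Hits.Hits) == 0 {
		return "", nil // document not found
	}
	return res.Hits.Hits[0].Routing, nil
}
```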

rwynn commented 6 years ago

Why do you think meta is an issue? Do you see errors?

benan789 commented 6 years ago

Are the meta upserts to mongo bulked? Not sure if that's a bottleneck, but syncing is very slow. I have yet to fully sync a db of 10 million docs without it breaking. I think the fix you did last night helped: it was able to sync 4 million, whereas before it could only do 2 million. It also takes a lot of space in the db.

rwynn commented 6 years ago

Got you. The meta upserts could definitely be bulked. But is the indexing count going up slowly?
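
For what it's worth, a bulked version with the mgo driver might look like this sketch (the metaDoc type and field names are hypothetical, not monstache's actual schema):

```go
package example

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// metaDoc is a hypothetical stand-in for the per-document routing
// metadata; the real monstache field names may differ.
type metaDoc struct {
	ID      interface{}
	Index   string
	Routing string
}

// upsertMetaBulk batches the meta upserts with mgo's Bulk API so the
// whole slice costs one round-trip instead of one per document.
func upsertMetaBulk(coll *mgo.Collection, metas []metaDoc) error {
	bulk := coll.Bulk()
	bulk.Unordered() // ordering between documents doesn't matter here
	for _, m := range metas {
		bulk.Upsert(
			bson.M{"_id": m.ID},
			bson.M{"$set": bson.M{"index": m.Index, "routing": m.Routing}},
		)
	}
	_, err := bulk.Run()
	return err
}
```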

benan789 commented 6 years ago

Yeah, I think it gets slower as the count goes higher. It's at 2.7 million right now, and I started syncing about 6 hours ago.

rwynn commented 6 years ago

Can you try with direct-read-limit set really high? Fewer queries. Read up on the direct-* options. Also, I noticed from your comment yesterday that the direct read query errored with a timeout. That query actually sorts the entire collection by _id, seeks to the offset, and then applies the limit. That is why I suggest a really high limit. The default is 5000, I think; for 10 million docs that's still 2000 queries, and each one has to seek past more documents than the last, so it gets slower and slower.
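
In mgo terms, the pattern just described looks roughly like this sketch (hypothetical names, not monstache's actual code); the cost of the server-side seek grows with the offset, which is why later pages time out:

```go
package example

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// readPageBySkip shows the offset pattern described above: every page
// sorts by _id, walks past `offset` documents server-side, then returns
// `limit` of them, so each successive page is slower than the last.
func readPageBySkip(coll *mgo.Collection, offset, limit int) ([]bson.M, error) {
	var docs []bson.M
	err := coll.Find(nil).
		Sort("_id").
		Skip(offset). // the server seeks past `offset` docs on every call
		Limit(limit).
		All(&docs)
	return docs, err
}
```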

rwynn commented 6 years ago

Also, did you increase the bulk thread pool queue on the ES side?
https://rwynn.github.io/monstache-site/start/

    thread_pool:
      bulk:
        queue_size: 200

And consider setting the refresh interval to -1 during the initial sync?
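
A hedged sketch of flipping the refresh interval from Go with the olivere/elastic client (index name is an assumption): pass "-1" before the initial sync, then restore something like "1s" when it completes.

```go
// setRefreshInterval updates the index refresh setting. With "-1",
// Elasticsearch stops periodically refreshing segments during the bulk
// load; remember to restore a normal value afterwards.
func setRefreshInterval(ctx context.Context, client *elastic.Client, interval string) error {
	body := `{"index": {"refresh_interval": "` + interval + `"}}`
	_, err := client.IndexPutSettings("myindex").BodyString(body).Do(ctx)
	return err
}
```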

benan789 commented 6 years ago

Are you using skip?

> The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound.
>
> Consider using range-based pagination for these kinds of tasks. That is, query for a range of objects, using logic within the application to determine the pagination rather than the database itself. This approach features better index utilization, if you do not need to easily jump to a specific page.

A $gt query on _id should fix this.
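
A sketch of that range-based approach with the mgo driver (names are illustrative, not monstache's actual code): keep the _id of the last document returned and use it as the lower bound of the next page, so the server seeks via the _id index instead of walking past skipped documents.

```go
package example

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// readAllByRange shows range pagination: remember the last _id seen and
// resume with $gt instead of Skip. The _id index answers the $gt bound
// directly, so every page costs roughly the same. Because $gt compares
// with the same BSON ordering that Sort("_id") uses, this works for any
// _id type, not just ObjectIds.
func readAllByRange(coll *mgo.Collection, limit int, process func(bson.M)) error {
	var lastID interface{}
	for {
		sel := bson.M{}
		if lastID != nil {
			sel["_id"] = bson.M{"$gt": lastID}
		}
		var docs []bson.M
		if err := coll.Find(sel).Sort("_id").Limit(limit).All(&docs); err != nil {
			return err
		}
		if len(docs) == 0 {
			return nil // collection exhausted
		}
		for _, d := range docs {
			process(d)
		}
		lastID = docs[len(docs)-1]["_id"]
	}
}
```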

rwynn commented 6 years ago

Yes, skip is used. And $gt is a good idea. I wonder if $gt would work if someone used an unusual _id like an embedded document? The query would be something like { _id: { $gt: { x: 1 } } }. I'd have to try it, because it needs to work in the general case. Then again, if we're already sorting by _id, the comparison must work for any value of _id.

rwynn commented 6 years ago

I think using the range selector instead of skip is a huge performance gain! I’ll fix it and publish a new release on Monday. Thanks for your help!

rwynn commented 6 years ago

@benan789 give it another try with the latest release when you get a chance. I'm seeing collections with millions of documents getting synced pretty quickly now.

benan789 commented 6 years ago

Much better! Thank you for fixing it so fast!