Closed: atomical closed this issue 2 years ago
Hi Adam,
Sorry for not getting back to you sooner. I realise you've closed this issue, but if there's anything you still want to discuss, very happy for it to be re-opened.
A few thoughts:
> With a large database, is the best strategy when an index changes to deploy, and then run `rake ts:index` after the deploy finishes? Technically, only the new column should be unusable, correct? I've done some testing locally and that appears to be the case.
This sounds like a decent approach to me - though I feel your verification is more useful than mine, given you've got the large datasets to test with!
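For what it's worth, that flow could be sketched roughly like this. `deploy_app` is a placeholder for whatever your deploy tooling actually runs; `ts:configure` and `ts:index` are Thinking Sphinx's rake tasks:

```shell
# Hedged sketch of the deploy-then-reindex flow. deploy_app is a
# placeholder, not a real command; substitute your own deploy step.
deploy_and_reindex() {
  deploy_app                      # placeholder: cap deploy, etc.
  bundle exec rake ts:configure   # regenerate the Sphinx configuration
  bundle exec rake ts:index       # reprocess indices after the deploy finishes
}
```

Until `ts:index` completes, searches should keep working against the old index data, minus the new column.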
> I've also been experimenting with some code that tracks individual index file changes and then only reindexes the indexes that need to be updated. But in that case we'll never merge the deltas, and we'll suffer from poor performance. It seems pointless. Plus, generating one index will update all the index definitions in the generated Sphinx configuration file.
Yeah, updating single indices is not ideal, if only because the configuration change is applied in its entirety regardless.
> We've often had crashes with the indexer when running `ts:index`. We then have to remove the binlog and restart searchd. What's the best way to troubleshoot this? Are there log files generated anywhere?
I've seen this occasionally with Flying Sphinx customers as well. The Sphinx daemon should have two logs - one for the daemon itself, and one for queries - which are managed in Thinking Sphinx via the per-environment `log` and `query_log` settings respectively within `config/thinking_sphinx.yml`. The daemon log file should provide a touch more context about the crashes… although, now that I'm re-reading your statement - is it the indexer that's crashing, or the daemon? I'm not so sure the daemon log is going to be so useful if it's the indexer that's crashing (and I don't believe there is a separate log file for the indexer). 🤔
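For anyone landing here later, a minimal sketch of those settings; the paths below are just examples, not defaults:

```yaml
# config/thinking_sphinx.yml -- per-environment daemon and query logs.
production:
  log: /var/log/sphinx/searchd.log
  query_log: /var/log/sphinx/searchd.query.log
```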
Hi Pat, thanks for all your work with this library! We really appreciate it.
I feel like I've bugged you about this before, so apologies for the repetition.
With a large database, is the best strategy when an index changes to deploy, and then run `rake ts:index` after the deploy finishes? Technically, only the new column should be unusable, correct? I've done some testing locally and that appears to be the case.

I've also been experimenting with some code that tracks individual index file changes and then only reindexes the indexes that need to be updated. But in that case we'll never merge the deltas, and we'll suffer from poor performance. It seems pointless. Plus, generating one index will update all the index definitions in the generated Sphinx configuration file.
Also, we've often had crashes with the indexer when running `ts:index`. We then have to remove the binlog and restart searchd. What's the best way to troubleshoot this? Are there log files generated anywhere?
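The binlog-removal workaround described above could be sketched as follows. The config path, the `grep`/`awk` extraction, and the `binlog.*` file pattern are all assumptions; check the `binlog_path` setting in your own generated Sphinx configuration before deleting anything:

```shell
# Sketch: recover after a crash leaves a corrupt binlog behind.
recover_from_binlog_crash() {
  bundle exec rake ts:stop        # stop searchd before touching its files
  local path
  # Assumed config location; adjust for your environment.
  path=$(grep binlog_path config/production.sphinx.conf | awk '{print $3}')
  rm -f "$path"/binlog.*          # remove the binlog files (assumed naming)
  bundle exec rake ts:start       # restart searchd cleanly
}
```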