jonallured closed this 7 years ago
Interesting! Indexing jobs are kicked off in an after_save hook, but since we're in a Postgres context here, perhaps we should move that to after_commit to ensure the changes have been committed before the indexer runs.
The warning comment in that document seems to confirm this:
> Both #save and #destroy come wrapped in a transaction that ensures that whatever you do in validations or callbacks will happen under its protected cover. So you can use validations to check for values that the transaction depends on or you can raise exceptions in the callbacks to rollback, including after_* callbacks.
>
> As a consequence changes to the database are not seen outside your connection until the operation is complete. For example, if you try to update the index of a search engine in after_save the indexer won't see the updated record. The after_commit callback is the only one that is triggered once the update is committed.
We've never encountered this issue with after_save in a Mongo context; I'm not entirely sure why right now.
We can update the callback here: https://github.com/cavvia/bearden/blob/master/app/models/concerns/searchable.rb#L15
I can look into updating estella's inline callbacks upstream too, but they are disabled here in any case.
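Roughly what I have in mind for the change, just as a sketch, not the actual contents of searchable.rb (and IndexJob is a stand-in name for whatever job we actually enqueue):

```ruby
# Sketch only: swap after_save for after_commit so the job is only enqueued
# once the row is visible outside our own connection.
module Searchable
  extend ActiveSupport::Concern

  included do
    # after_save fires inside the wrapping transaction, so a worker that picks
    # the job up immediately may not see the record yet. after_commit only
    # fires once the transaction has actually committed.
    after_commit :enqueue_search_index, on: %i[create update]
  end

  private

  def enqueue_search_index
    # IndexJob is illustrative; the real job class lives elsewhere in the app.
    IndexJob.perform_later(self.class.name, id)
  end
end
```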
Excellent, then let's not merge this PR as it is; instead, I'll make the change to use after_commit and update the PR.
OK, this is green and I think it solves the problem better. 👍
looks good! :+1:
The return here only works if we use the find_by form; otherwise, find will just raise if it can't find the record and you'll never even hit the early return on the next line (see the sketch below).

I know this because I was having issues with these jobs while testing #324 locally here on my laptop. This change stopped those jobs from failing, but it might expose a different problem. I have a theory that these search indexing jobs are enqueued and then worked too quickly for Postgres. I think I was getting that exception about the record not being found because it was a race condition with the database insert!
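To make the distinction concrete, this is roughly the shape of the guard I mean (class and method names are made up for illustration, not the real job code):

```ruby
class IndexJob < ApplicationJob
  def perform(model_name, id)
    # find_by returns nil when the record is missing, so the early return
    # actually runs and the job exits quietly.
    record = model_name.constantize.find_by(id: id)
    return if record.nil?

    # With find, a missing record raises ActiveRecord::RecordNotFound right on
    # the lookup, so the early return above would never even be reached.
    record.index_for_search # stand-in for whatever call pushes to Elasticsearch
  end
end
```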
If that race condition theory is true, then this "fix" would actually break our search indexing, and what we really need to do is find a way to introduce some delay between insertion and indexing.
Or maybe I'm missing something, but wanted to get master back into a good state while we tinker. Open to other ideas, of course!!
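If we did end up wanting an explicit delay, one way (assuming these are ActiveJob jobs; the job name and the five-second wait are placeholders, not a recommendation) would be to defer the enqueue:

```ruby
# Placeholder sketch: push the indexing work a few seconds into the future so
# the insert has (hopefully) committed by the time a worker picks it up.
IndexJob.set(wait: 5.seconds).perform_later(self.class.name, id)
```

Though if the after_commit change holds up, that seems like a cleaner fix than guessing at a delay.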