The code was committing the request up front: if that insert failed
(usually because the request was already known in the database), we
could skip parsing the events entirely, freeing the worker sooner to
process another request.
However, it also meant that a request could be inserted and committed,
while an unhandled exception later on would abort the rest of the
transaction and push the request back into Redis.
At that point, trying to replay the request wouldn't help because the
request would already have been stored, and so it would just get
skipped, losing its associated events.
Clearly this is unacceptable; the optimization was a poor trade-off.
Fortunately, this never actually happened in production.
This commit fixes that: the events are always parsed, and the request
is committed only at the end, together with its events.
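The shape of the fix can be sketched as follows. This is a minimal,
hypothetical illustration (table names, the parsing step, and the use
of sqlite3 are all stand-ins, not the project's actual code): the
request row and every event row live in one transaction, so a parse
failure rolls everything back and the request can be safely replayed
from the queue, while a duplicate request still gets skipped cheaply.

```python
import sqlite3


def process_request(conn, request_id, raw_events):
    """Insert a request and its parsed events in ONE transaction.

    Returns True if stored, False if the request was already known.
    Any parse error propagates after rolling back, so nothing is
    half-written and the request can be replayed.
    """
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "INSERT INTO requests (id) VALUES (?)", (request_id,)
            )
            for raw in raw_events:
                event = raw.strip()  # stand-in for real event parsing
                if not event:
                    raise ValueError("unparseable event")
                conn.execute(
                    "INSERT INTO events (request_id, payload)"
                    " VALUES (?, ?)",
                    (request_id, event),
                )
        return True
    except sqlite3.IntegrityError:
        # Duplicate request: the rollback discarded the insert, so
        # nothing was written and the request can simply be skipped.
        return False
```

Compared with the old behavior, a duplicate still costs one failed
insert before any parsing, but an exception mid-parse no longer
strands a committed request with no events.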