Closed m0ppers closed 3 years ago
You are totally right!
And I guess nobody answered yet, because nobody knows about a best practice to handle concurrency in Rails?
I assume we need some kind of semaphore when doing transactions on an account. I would love to see a PR for this.
Thanks for pointing this out.
I'm not entirely sure how to solve this.
Rails knows two kinds of locking:

**Optimistic locking:** each entity gets a new attribute called `lock_version`, which Rails then increments on every save. On saving, it is compared to the known value; if it doesn't match, the operation aborts with an `ActiveRecord::StaleObjectError`.
This is relatively simple to implement and ensures that the operations which do succeed produce correct data.
In your case, this would result in failed requests that you'd have to catch.
Alternatively, we could catch this exception and try again, possibly leading to a timeout.
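To make the retry strategy concrete, here is a plain-Ruby sketch of the optimistic-locking idea (no ActiveRecord involved; `Account`, `StaleObjectError`, and `buy` are hypothetical stand-ins for the real model, `ActiveRecord::StaleObjectError`, and the purchase code):

```ruby
# Each record carries a version; a save only succeeds if the version is
# unchanged since the read, otherwise the caller re-reads and retries.
class StaleObjectError < StandardError; end

class Account
  attr_reader :money, :lock_version

  def initialize(money)
    @money = money
    @lock_version = 0
    @write_lock = Mutex.new # stands in for the database's atomic write
  end

  # Read a consistent snapshot of balance and version.
  def snapshot
    @write_lock.synchronize { [@money, @lock_version] }
  end

  # The save aborts if somebody else saved since our read.
  def save(new_money, expected_version)
    @write_lock.synchronize do
      raise StaleObjectError if @lock_version != expected_version
      @money = new_money
      @lock_version += 1
    end
  end
end

# The "catch the exception and try again" strategy described above.
def buy(account, price)
  money, version = account.snapshot
  account.save(money - price, version)
rescue StaleObjectError
  retry
end

account = Account.new(15.0)
3.times.map { Thread.new { buy(account, 5.0) } }.each(&:join)
puts account.money # every purchase applied exactly once
```

The retry loop makes all requests eventually succeed; dropping the `rescue`/`retry` gives the "failed requests that you'd have to catch" variant instead.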
**Pessimistic locking:** this uses real database-level locks.
It doesn't affect the schema, but we'd need to change all accesses.
I think we'd probably want to lock optimistically, but I'm not sure whether concurrent requests should error out or block until the lock is free.
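For comparison, the pessimistic variant corresponds to ActiveRecord's `with_lock` (which wraps the block in a transaction and issues `SELECT ... FOR UPDATE`). A plain-Ruby sketch of the behaviour, with a per-record mutex standing in for the row lock (`Account` is again a hypothetical stand-in, not mete's code):

```ruby
class Account
  attr_accessor :money

  def initialize(money)
    @money = money
    @row_lock = Mutex.new # models the row lock of SELECT ... FOR UPDATE
  end

  # Rough analogue of ActiveRecord's with_lock: the block runs while
  # holding the lock, so concurrent writers queue up behind each other
  # instead of clobbering each other's writes.
  def with_lock(&block)
    @row_lock.synchronize(&block)
  end
end

account = Account.new(15.0)
threads = 3.times.map do
  Thread.new do
    account.with_lock do
      account.money -= 5.0 # read-modify-write is safe under the lock
    end
  end
end
threads.each(&:join)
puts account.money
```

This is where the "some concurrent requests may take longer" trade-off comes from: nobody fails, but later writers wait.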
Hmmm, I don't really have much of an idea about Rails, but both solutions sound like they are difficult to implement? Isn't it possible to just solve this using SQL?
The only thing that needs to happen is that, instead of what ActiveRecord is doing right now:

```sql
SELECT bla FROM users WHERE id=xxx;
-- [...] time goes by... potentially other "threads" update users.money
UPDATE users SET money = 5.0 WHERE id=xxx; -- money computed from the original SELECT minus the drinks
```

it should be changed to:

```sql
SELECT bla FROM users WHERE id=xxx;
-- [...] time goes by... potentially other "threads" update users.money
UPDATE users SET money = money - 12.0 WHERE id=xxx; -- 12.0 being the sum of all drinks
```

This would use row-level locking from the DB itself and should produce consistent results in the DB? Not sure how to realize this in a Ruby on Rails context :S
Sadly, this doesn't seem to be really applicable to Rails: modifications are not executed directly on the database but on cached in-memory objects, which are then written back on `.save!`. And I'm not really keen on writing raw SQL. (`.increment` or `.decrement` is just a shortcut for `+=` and `-=`.)
I've played a bit with both optimistic and pessimistic locking and I now think that pessimistic locking might be better. :roll_eyes: On Postgres, this solves our concurrency bugs with the slight downside that some concurrent requests may take longer. On SQLite, this also produces correct results, but some concurrent requests may error out after hanging for a bit. I tried modifying the timeouts but this did not seem to have the desired effect.
I have not been able to write a test (for Rails) to verify whether this does work (or not). Any ideas? (I do have a simple client using HTTP though.)
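One framework-agnostic pattern for such a test is to fire the same action from several threads, join them, and assert on the final state. A racy implementation only makes such a test fail intermittently, so it documents the invariant rather than proving the absence of races, but it does catch gross lost updates. A sketch of the shape (the `Account` model and `run_concurrently` helper are hypothetical stand-ins, not mete's code):

```ruby
require 'monitor'

# Run the same action n times in parallel and wait for all of them.
def run_concurrently(n, &action)
  n.times.map { Thread.new(&action) }.each(&:join)
end

# Hypothetical stand-in for the model under test.
class Account
  include MonitorMixin
  attr_reader :money

  def initialize(money)
    super()
    @money = money
  end

  def withdraw(amount)
    synchronize { @money -= amount } # the locking fix under test
  end
end

account = Account.new(100.0)
run_concurrently(10) { account.withdraw(2.5) }
raise 'lost update' unless account.money == 75.0
puts 'balance consistent'
```

In a Rails test the threads would need their own database connections (one connection cannot run overlapping transactions), which is probably why this is awkward to set up with the usual transactional fixtures.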
We are currently working on a new kiosk-only frontend for mete.
This is what it currently looks like: [screenshot]
So we have a cart, which might come in handy when you are fetching several drinks at once for a group of friends.
So far the API we are using is `/api/v1/users/${userId}/buy.json?drink=${drink.id}`, which is completely sufficient for the moment (let's hope nobody buys 100 drinks at the same time :D).

Unfortunately, when buying multiple drinks at once, mete will mess up: the kiosk currently submits the requests in parallel, but mete doesn't handle any concurrency here. I bought drinks worth 15 EUR but my balance was only reduced by 9.50 EUR :money_with_wings:
For the moment I will just serialize the requests in the frontend, but IMHO mete should handle this :D
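The lost update described above can be reproduced deterministically, without Rails or threads, by spelling out the interleaving: both requests read the balance, compute a new absolute value, and the later write silently discards the earlier one. A plain-Ruby illustration (the starting balance of 20.0 is an assumption; the deductions are chosen to match the 15 EUR / 9.50 EUR figures from the report):

```ruby
# The "read, compute, write back an absolute value" pattern that a
# plain ActiveRecord save uses, with two interleaved requests:
balance = 20.0

read_a = balance          # request A reads 20.0 (drinks worth 5.50)
read_b = balance          # request B reads 20.0 (drinks worth 9.50)
balance = read_a - 5.50   # A writes 14.50
balance = read_b - 9.50   # B writes 10.50, discarding A's write
puts balance              # => 10.5: only 9.50 deducted instead of 15.00

# With an atomic delta (UPDATE users SET money = money - x), each
# request subtracts from the current value, so nothing is lost:
balance = 20.0
balance -= 5.50
balance -= 9.50
puts balance              # => 5.0: both purchases accounted for
```

Any of the fixes discussed above (optimistic retry, row locks, or delta updates) removes the first interleaving.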