EndPointCorp / end-point-blog

End Point Dev blog
https://www.endpointdev.com/blog/

Comments for Parallel Inventory Access using PostgreSQL #81

Open phinjensen opened 6 years ago

phinjensen commented 6 years ago

Comments for https://www.endpointdev.com/blog/2008/12/parallel-inventory-access-using/ By Mark Johnson

To enter a comment:

  1. Log in to GitHub
  2. Leave a comment on this issue.
phinjensen commented 6 years ago
original author: Uqbar
date: 2008-12-15T08:52:00-05:00

Maybe I missed the actual point, but in my implementation the inventory is nothing but a list of "inventory updates". To make an update I simply INSERT a new row for each variation; to un-make it I make another INSERT with the sign reversed. To get the inventory level I "SELECT sum(...)". Every so often I consolidate the updates by replacing the rows with a single line, and possibly back the old lines up for history. It mirrors double-entry accounting, because updating inventory really is inventory accounting. On an oldish 1.4 GHz Centrino laptop (4500 RPM disk, 512 MB RAM) I get the stock level in about 500 ms over 2M+ rows; insertions take about 5 ms or less. I came to this implementation exactly to avoid synchronisation problems. And it works ... for me.
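The ledger-style approach described above can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual schema: it uses SQLite via Python's standard library purely for portability (the real setup was PostgreSQL), and the table and column names are hypothetical:

```python
import sqlite3

# In-memory database standing in for the PostgreSQL instance described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inventory_ledger (
        sku  TEXT NOT NULL,
        qty  INTEGER NOT NULL,  -- positive = stock added, negative = stock removed
        note TEXT
    )
""")

# Every inventory change is an INSERT; nothing is ever UPDATEd in place.
conn.execute("INSERT INTO inventory_ledger VALUES ('WIDGET', 100, 'initial receipt')")
conn.execute("INSERT INTO inventory_ledger VALUES ('WIDGET',  -3, 'order #1')")
# Un-making a change is another INSERT with the sign reversed.
conn.execute("INSERT INTO inventory_ledger VALUES ('WIDGET',   3, 'order #1 cancelled')")

def stock_level(conn, sku):
    """Current stock is just the sum of all ledger rows for the SKU."""
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(qty), 0) FROM inventory_ledger WHERE sku = ?", (sku,)
    ).fetchone()
    return total

def consolidate(conn, sku):
    """Periodically replace a SKU's rows with a single summary row."""
    total = stock_level(conn, sku)
    with conn:  # one transaction: delete the detail rows, insert the summary
        conn.execute("DELETE FROM inventory_ledger WHERE sku = ?", (sku,))
        conn.execute(
            "INSERT INTO inventory_ledger VALUES (?, ?, 'consolidated')", (sku, total)
        )

print(stock_level(conn, "WIDGET"))  # → 100
consolidate(conn, "WIDGET")
print(stock_level(conn, "WIDGET"))  # still 100, but now held in a single row
```

The consolidation step is what keeps the `SELECT sum(...)` from scanning an ever-growing table; without it, read latency grows with the number of updates, as the 500 ms / 2M-row figure suggests.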

phinjensen commented 6 years ago
original author: Jon Jensen
date: 2008-12-19T19:08:00-05:00

That's a reasonable way to do it too, though you'll likely need to implement an archive table at some point as your table grows.

You also still would need to "claim" inventory as Mark describes, to deal with simultaneous batch inventory allocations, wouldn't you?
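One way to "claim" inventory on top of a pure insert ledger is to write the claim as a negative row inside a transaction that first checks the current sum, so two concurrent claims cannot both succeed against the same stock. The sketch below is an assumption about how that could look, again using SQLite from Python's standard library for portability; in PostgreSQL one would more likely rely on `SERIALIZABLE` transactions or row-level locking, and all names here are hypothetical:

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can issue
# explicit BEGIN/COMMIT/ROLLBACK statements ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE inventory_ledger (sku TEXT NOT NULL, qty INTEGER NOT NULL, note TEXT)")
conn.execute("INSERT INTO inventory_ledger VALUES ('WIDGET', 5, 'initial receipt')")

def claim(conn, sku, wanted):
    """Atomically claim `wanted` units, or fail if stock would go negative.

    BEGIN IMMEDIATE takes the write lock up front, so two concurrent
    claimants cannot both read the same stock level and both succeed.
    """
    conn.execute("BEGIN IMMEDIATE")
    try:
        (available,) = conn.execute(
            "SELECT COALESCE(SUM(qty), 0) FROM inventory_ledger WHERE sku = ?", (sku,)
        ).fetchone()
        if available < wanted:
            conn.execute("ROLLBACK")
            return False
        # The claim itself is just another ledger row, with a negative sign.
        conn.execute(
            "INSERT INTO inventory_ledger VALUES (?, ?, 'claimed')", (sku, -wanted)
        )
        conn.execute("COMMIT")
        return True
    except Exception:
        conn.execute("ROLLBACK")
        raise

print(claim(conn, "WIDGET", 3))  # True: 5 in stock, 2 remain
print(claim(conn, "WIDGET", 3))  # False: only 2 remain
```

Releasing a claim (e.g. on order cancellation) would be yet another insert with the sign flipped back, keeping the whole scheme append-only.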

One system we work with that is like what Mark referred to has an inventory table with over 10 million rows, and that's without storing the complete change history, which would easily balloon the size when considering updates to move inventory from one location to another, changing status, etc.

On a system like that, getting the stock level needs to take far less than 500 ms, even under heavy multi-user concurrency.

But it's nice that you avoid any need to VACUUM! And if it works for you on old hardware, you can of course just upgrade for the foreseeable future if needed. :)