crazyfactory / php-cf-data

GNU General Public License v3.0

Concurrency Issues #2

Closed dacgray closed 7 years ago

dacgray commented 8 years ago

In reference to erp_rpc.php, lines 896 and 923.

Multiple threads can cause update and insert conflicts on tables with implicit unique composite keys.

Any chance we can account for this in the class?

What's white and fluffy?

A cloud.

What's blue and fluffy?

Blue fluff.

wmathes commented 8 years ago

We have a couple of different scenarios where concurrency issues might occur. We may need to make sure that, even for tables that have no unique key apart from an auto-incrementing one, we can easily generate queries that guarantee no duplicate entries.

On first thought this should only be a real problem for Insert statements, though I can imagine some quirky scenarios in which Update and Remove could cause errors as well.

For inserts we may have to distinguish between a single insert and a bulk insert.

1) Single Insert

INSERT IGNORE and MySQL's REPLACE INTO are only usable when a non-auto-incrementing unique key is present.
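For context, here is a short sketch of the case those statements do cover (the user_role table and its key are hypothetical): INSERT IGNORE only skips the duplicate row because the explicit composite UNIQUE key lets MySQL detect the conflict; with nothing but an auto-incrementing primary key there is no conflict to detect.

    -- Hypothetical table with an explicit composite unique key:
    CREATE TABLE user_role (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        user_id INT NOT NULL,
        role_id INT NOT NULL,
        UNIQUE KEY uniq_user_role (user_id, role_id)
    );

    -- The second statement is silently ignored thanks to the UNIQUE key.
    INSERT IGNORE INTO user_role (user_id, role_id) VALUES (42, 7);
    INSERT IGNORE INTO user_role (user_id, role_id) VALUES (42, 7);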

So for our scenario we could instead do something like this:

    INSERT INTO table (field1, field2, field3 [...])
    SELECT * FROM (SELECT value1, value2, value3 [...]) AS tmp
    WHERE NOT EXISTS (
        SELECT true FROM table
        WHERE checkField1 = value1 AND checkField2 = value2 [...]
    )
    LIMIT 1;

Using a subselect within the atomic operation should solve our concurrency issues here.
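For illustration, the same pattern with concrete values filled in (the user_role table and its columns are hypothetical and merely stand in for the placeholders above):

    -- Insert the pair (42, 7) only if no such row exists yet.
    INSERT INTO user_role (user_id, role_id)
    SELECT * FROM (SELECT 42 AS user_id, 7 AS role_id) AS tmp
    WHERE NOT EXISTS (
        SELECT true FROM user_role
        WHERE user_id = 42 AND role_id = 7
    )
    LIMIT 1;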

2) Bulk Insert

Normally we would create one big statement to insert all items at once. This is preferable, but it ignores the fact that the batch itself may already create duplicate entries, and it only works properly with INSERT IGNORE and REPLACE INTO.

If we bulk insert entries and have to guarantee no duplicate entries via the SQL query, we probably have to loop the insertion using the query style from 1), though I sincerely hope there is a more elegant solution.
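One possible direction, purely as a hedged sketch using the same hypothetical user_role table as above: generalize the query from 1) by feeding the whole batch in as a derived table, filtering out rows that already exist via NOT EXISTS, and using DISTINCT to collapse duplicates within the batch itself. Whether this fits the class's query builder is an open question.

    -- Insert three candidate pairs in one statement, skipping rows that
    -- already exist and collapsing duplicates inside the batch itself.
    INSERT INTO user_role (user_id, role_id)
    SELECT DISTINCT tmp.user_id, tmp.role_id
    FROM (
        SELECT 42 AS user_id, 7 AS role_id
        UNION ALL SELECT 42, 8
        UNION ALL SELECT 43, 7
    ) AS tmp
    WHERE NOT EXISTS (
        SELECT true FROM user_role
        WHERE user_role.user_id = tmp.user_id
          AND user_role.role_id = tmp.role_id
    );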

wmathes commented 7 years ago

We probably won't ever fix this, in favor of a real ORM.