Closed: c0c0n3 closed this issue 3 years ago.
Given we're going to have a work queue implementation soon, we should reconsider this issue in light of that. Also related: #242. Plus, for some backends like Crate we might be able to do something smart when we insert the data, like not losing a whole batch because of just a couple of rows that can't be inserted; see the comments on #481.
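The "something smart" above could look like a per-row fallback: try the batch insert first and, only if it fails, retry row by row so a couple of bad rows don't sink the rest of the batch. A minimal sketch, assuming a generic `insert_fn(rows)` that raises on any bad row (the function names here are hypothetical, not QuantumLeap's actual Crate translator API):

```python
from typing import Callable, List, Sequence

Row = tuple

def insert_with_fallback(rows: Sequence[Row],
                         insert_fn: Callable[[Sequence[Row]], None]
                         ) -> List[Row]:
    """Insert the whole batch in one go; on failure, retry row by row
    so only the genuinely bad rows are lost. Returns the failed rows."""
    try:
        insert_fn(rows)        # happy path: one round trip for the batch
        return []
    except Exception:
        failed = []
        for row in rows:       # fallback: isolate the offending rows
            try:
                insert_fn([row])
            except Exception:
                failed.append(row)
        return failed
```

The trade-off is one extra round trip per row, but only on the (hopefully rare) failure path, so the common case keeps the batch's throughput.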
PR #501 provides a queue mechanism to better handle heavy load.
Describe the bug

While hammering QuantumLeap with an ungodly amount of concurrent `POST`s to the `notify` endpoint, about 4% of the posted entities didn't make it to the DB.

To Reproduce
Edit `notify-load-test.sh` to comment out the `docker-compose down` statement, then run the script. The script will try pumping 10,000 entities into QuantumLeap concurrently and as quickly as it can. When the script is done, count how many rows are in the Crate entity table. If you're unlucky, all 10,000 entities will have made it there, so you might have to run the script multiple times. When you're done, don't forget to clean up after yourself.
Expected behavior
All 10,000 entities should always be (eventually) inserted into the DB backend.
Additional context
This issue cropped up while benchmarking QuantumLeap, details over here: