Microscope tries to prevent duplicate URLs by checking the URL of a newly created post against the URLs already submitted to the database. This technique is not foolproof:
We also need to check modifications of existing URLs (see issue #330). Even if you add this check to some modification paths, it is easy to forget one and still end up with duplicates. This approach is messy.
If two separate users submit the same URL at the same time, a race condition can occur: the uniqueness checks for both posts may succeed before either post is added to the database, allowing two posts with the same URL to enter the database.
It seems to me that the correct approach is to Create a Unique MongoDB Index on the server and handle the exceptions thrown by invalid operations. I haven't tried this myself, though.
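A minimal sketch of that idea in plain JavaScript, using a mocked collection whose insert raises MongoDB's duplicate-key error (code 11000) the way a real unique index would. The names `Posts`, `makeCollection`, and `submitPost` are illustrative assumptions, not part of Microscope; in a real Meteor app the index itself would be created once on the server, e.g. with `Posts._ensureIndex({url: 1}, {unique: true})`.

```javascript
// Simulated collection with a unique index on `url`.
// A real MongoDB unique index rejects the second insert atomically,
// closing the race window that an application-level check leaves open.
function makeCollection() {
  const urls = new Set();
  return {
    insert(post) {
      if (urls.has(post.url)) {
        // MongoDB signals a unique-index violation with error code 11000.
        const err = new Error('E11000 duplicate key error');
        err.code = 11000;
        throw err;
      }
      urls.add(post.url);
      return post;
    },
  };
}

// Insert a post, translating a duplicate-key error into a friendly result
// instead of letting the raw exception reach the user.
function submitPost(collection, post) {
  try {
    collection.insert(post);
    return { ok: true };
  } catch (err) {
    if (err.code === 11000) {
      return { ok: false, reason: 'This URL has already been posted.' };
    }
    throw err; // unrelated errors propagate unchanged
  }
}

const Posts = makeCollection();
console.log(submitPost(Posts, { url: 'http://example.com' }).ok); // true
console.log(submitPost(Posts, { url: 'http://example.com' }).ok); // false
```

The point of the pattern is that the database enforces uniqueness for every write path (creation and modification alike), so the application only has to react to the error rather than anticipate every way a duplicate could arise.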