Closed · philpem closed this 1 week ago
Earlier today, another furry art site was subjected to an attack in which large amounts of child sexual abuse material (CSAM) were uploaded to the site.
@Soatok has made some suggestions to prevent this from happening, some of which may be useful to us: https://gist.github.com/soatok/1eb76ea6e484cde13d50a3e224df732d
To build on the PhotoDNA suggestion, we could extend our existing server-side deduplication (which already hashes all uploaded content): if PhotoDNA returns a positive match for an upload, its hash could be blocked server-side, and the uploading account suspended while the upload is reviewed (to rule out a false positive or a compromised account).
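A minimal sketch of how that flow could hang together, assuming a hypothetical `photodna_match()` wrapper around the PhotoDNA API and placeholder moderation helpers (`suspend_pending_review`, `queue_for_manual_review`); none of this is Weasyl's actual internals:

```python
import hashlib
import logging

logger = logging.getLogger(__name__)


class PhotoDNAError(Exception):
    """The PhotoDNA service was unreachable or returned garbage."""


def photodna_match(image_bytes: bytes) -> bool:
    """Hypothetical wrapper around the PhotoDNA matching API.

    The real call is a network request and can fail; callers must treat
    PhotoDNAError as "unknown", never as "match".
    """
    raise PhotoDNAError("no PhotoDNA client configured in this sketch")


# Placeholders for the real database and moderation tooling.
blocked_hashes: set[str] = set()


def suspend_pending_review(user_id: int, reason: str) -> None: ...


def queue_for_manual_review(user_id: int, sha256: str) -> None: ...


def handle_upload(user_id: int, image_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it was blocked."""
    sha256 = hashlib.sha256(image_bytes).hexdigest()

    # Fast path: the dedup hash was already blocked by an earlier match.
    if sha256 in blocked_hashes:
        suspend_pending_review(user_id, f"re-upload of blocked hash {sha256}")
        return False

    try:
        matched = photodna_match(image_bytes)
    except PhotoDNAError:
        # Fail safe: an outage or a bug in the integration must never
        # flag an account. Let the upload continue through the normal
        # pipeline, but log the failure so it can be investigated and
        # the image re-checked later.
        logger.exception("PhotoDNA check failed for hash %s", sha256)
        return True

    if matched:
        # Block the hash server-side and suspend the uploader while a
        # human verifies it isn't a false positive or a hijacked account.
        blocked_hashes.add(sha256)
        suspend_pending_review(user_id, f"PhotoDNA match on hash {sha256}")
        queue_for_manual_review(user_id, sha256)
        return False

    return True
```

The key design point is that the error branch never touches the account; only a confirmed match does.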
The obvious risk is that a bug could lead to accounts being flagged erroneously; ideally this should fail safe (erring toward letting an upload through rather than suspending anyone on an error path) and log failures for investigation.
Point 1 about Reporting is a good one. In that vein, I think it would be better to improve generic anti-abuse measures (more types of rate limits and quotas, automatic hiding given enough reports, etc.). In the context of Weasyl, CSAM is just one type of shock material, and I don't think it's worth creating specific integrations for it.
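As a sketch of that generic approach, automatic hiding once a submission collects enough distinct reports might look something like the following; the threshold and the in-memory stores are illustrative stand-ins, not Weasyl's schema:

```python
import logging

logger = logging.getLogger(__name__)

# Illustrative threshold; a real deployment would tune this.
AUTO_HIDE_REPORT_THRESHOLD = 5

# Stand-ins for database tables.
reports: dict[int, set[int]] = {}   # submission_id -> distinct reporter ids
hidden_submissions: set[int] = set()


def record_report(submission_id: int, reporter_id: int) -> None:
    """Record a report; hide the submission once enough users report it.

    Hiding is reversible, so a false positive only delays visibility
    until a moderator looks at it, unlike an account suspension.
    """
    reporters = reports.setdefault(submission_id, set())
    reporters.add(reporter_id)  # duplicate reports from one user don't count
    if (len(reporters) >= AUTO_HIDE_REPORT_THRESHOLD
            and submission_id not in hidden_submissions):
        hidden_submissions.add(submission_id)
        logger.info("auto-hid submission %d pending moderator review",
                    submission_id)
```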