4site-interactive-studios / en-wishlist

A wishlist of feature enhancements and bugs with Engaging Networks

SPAM / SCAM submission preventative measures #13

Open bryancasler opened 3 years ago

bryancasler commented 3 years ago

Hey Jenn, one more thing because you don't have enough on your plate. This came up a lot last year, and again this week when my partner, who is a RAN client, was talking about their SPAM/SCAM issues. Anyway, that conversation lit a fire under me again, and now that EN finally has someone in your role I'm making the pitch again for what can be done to improve the situation.

At the beginning of 2019, EN had no SPAM prevention other than suggesting a "honeypot field," a technique that hasn't tripped up bots in decades. After the bots got really aggressive with fraudulent transactions across multiple EN client accounts, EN finally rolled out Captcha. But it was done in a way that I think was poor at the time, and it continues to largely leave the problem unresolved.

Every client I have who accepts donations has been hit by SPAM/SCAM bots on Engaging Networks. The bots typically submit to the same form over and over and over, hundreds to thousands of times, literally testing a credit card starting with a CVV of 001, then 002, stepping through the values until they find the right CVV. The payment processor bears responsibility here, but EN can do a lot to head it off at the pass. And honestly, it was really frustrating last year when EN's initial response was "that's not our problem, talk to your payment processor." Since then it's mostly been "we implemented X solution, isn't that good enough... oh, it's still a problem, talk to your payment processor." For reference, a solution like ThreatMetrix can run $1,500/mo and was by far one of the worst vendor interactions I've ever had. If the client is on Stripe the service is more affordable, but still, I think a lot could be done before even needing to look at payment-processor-based solutions.

On Captcha, the current setup is a hot mess. The implementation uses Captcha v2 and requires the client to set up their configuration on a per-page basis. If it's enabled for all supporters on a page, you'll see a captcha before the submit button. If it's enabled only for supporters in X high-risk countries, the form will not show a captcha at first. Then, on submission, if the submitter's IP address is detected in a restricted country, the page errors out server-side and reloads with a message telling the user they need to fill out the captcha that wasn't there before. But it's there now, and they have to fill it out. And we're seeing lots of SPAM/SCAM submissions from US IP addresses, so being able to select "high risk" countries isn't really helpful and is more of a placebo.

And not all of the bots are submitting through the forms the way we traditionally think of forms (going to a page, filling it out, etc.). Instead, they're posting straight to the form endpoint, similar to an API call, after figuring out which form fields or field values result in a successful submission. Anything that depends on a client-side solution is going to fail.
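To make that concrete, here's a purely hypothetical sketch of what such a submission looks like. The URL and field names are invented; the point is simply that the page is never loaded, so honeypot fields, JS validation, and captcha widgets never run:

```ts
// Hypothetical illustration of the pattern described above: the bot replays a
// POST straight against the form's processing endpoint without ever rendering
// the page, so no client-side check ever executes.
// The URL and field names below are made up for the example.
const body = new URLSearchParams({
  supporter_email: "bot@example.com",
  transaction_amount: "1.00",
});

await fetch("https://example.org/page/12345/process", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: body.toString(),
});
```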

I think there are a few things that could be done to dramatically reduce this issue:

  1. Velocity limits. The idea here is to slow things down for IP addresses, or emails, that keep attempting form submissions. The pattern is always the same IP address or the same email being spammed. Each failed form submission adds a little bit of time before the next submission will be accepted, and the added time scales exponentially with each failed attempt, with the curve tuned so that 99% of typical user submission patterns are unaffected. Cloudflare does something similar to prevent/slow down DDoS attacks: https://imgur.com/a/JzkVlRd (a rough sketch of items 1 and 2 follows this list).

  2. Captcha Firewall. In addition to slowing bots down, we need to limit how much damage they can do: there should be a 100% Captcha check for anyone who crosses an early velocity milestone. So not only does the time between submissions keep scaling, the user is also shown an interstitial page with a Captcha that they must complete to continue.

  3. These logs/preventative measures should apply across all client accounts, not be tallied or limited per individual client account, because these bots walk straight through the page IDs, since they're sequential.

  4. Captcha v3: Really, this should be on by default for all clients, and any submitter identified as having a high probability of being a bot should immediately get sent to the Captcha Firewall. Captcha v3 is verified server-side and returns a score from 0 to 1 for how likely the interaction is human (1.0 is very likely a real user, 0.0 is very likely a bot). It is then up to the service implementing the solution to decide what to do with that info. Even better, also record this score on the donation transaction record (a sketch of the server-side check follows this list).

  5. Small Gift Flagging: The bots are typically trying to make the smallest gift possible so the transaction goes unnoticed by the victim until the card's CVV is figured out or the card is verified as working; then a large real purchase is made. This means small gifts (under $2), which are extremely uncommon for real donors, could immediately be flagged and require the submitter to go through the Captcha Firewall to complete their gift (see the small sketch after this list).
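A rough sketch of how items 1 and 2 could fit together server-side, assuming an Express-style endpoint and an in-memory counter. The endpoint path, field names, and thresholds are all illustrative, not EN's actual implementation:

```ts
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Failed-attempt tracking keyed by IP. A real version would also key on email,
// persist outside a single process, and be shared across client accounts (item 3).
const failedAttempts = new Map<string, { count: number; nextAllowedAt: number }>();

const CAPTCHA_THRESHOLD = 3; // after this many failures, force the Captcha interstitial

app.post("/page/:id/process", (req, res) => {
  const key = req.ip ?? "unknown";
  const entry = failedAttempts.get(key) ?? { count: 0, nextAllowedAt: 0 };

  // Velocity limit: refuse submissions that arrive before the backoff window expires.
  if (Date.now() < entry.nextAllowedAt) {
    return res.status(429).send("Too many attempts - please wait before trying again.");
  }

  // Captcha Firewall: past the early milestone, require a captcha token on every submission.
  if (entry.count >= CAPTCHA_THRESHOLD && !req.body.captchaToken) {
    return res.status(403).send("Please complete the Captcha to continue.");
  }

  const ok = attemptCharge(req.body); // placeholder for the real payment-gateway call

  if (!ok) {
    entry.count += 1;
    // Exponential backoff: 2s, 4s, 8s, ... real donors rarely fail many times in a row,
    // but a card-testing bot slows to a crawl.
    entry.nextAllowedAt = Date.now() + 1000 * 2 ** entry.count;
    failedAttempts.set(key, entry);
    return res.status(402).send("Payment declined.");
  }

  failedAttempts.delete(key);
  return res.send("Thank you!");
});

// Stub so the sketch is self-contained; the real decline comes from the processor.
function attemptCharge(_fields: unknown): boolean {
  return false;
}
```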
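For item 4, assuming the Captcha in question is Google's reCAPTCHA v3, the server-side half could look roughly like this (the helper name and threshold are made up for the example):

```ts
// The page includes the v3 script and sends the resulting token with the form;
// the server asks Google for a score and decides what to do with it.
// RECAPTCHA_SECRET is assumed to be configured per client account.
async function scoreSubmission(token: string, remoteIp: string): Promise<number> {
  const resp = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET ?? "",
      response: token,
      remoteip: remoteIp,
    }).toString(),
  });
  const data = (await resp.json()) as { success: boolean; score?: number };
  // 1.0 = very likely human, 0.0 = very likely a bot.
  return data.success ? data.score ?? 0 : 0;
}

// e.g. const score = await scoreSubmission(req.body.captchaToken, req.ip ?? "");
//      if (score < 0.5) sendToCaptchaFirewall();  // and record the score on the transaction
```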
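And item 5 is little more than a threshold check layered onto the same decision, sketched here with the suggested $2 cutoff and the v3 score from above (the cutoff would presumably be configurable per client):

```ts
// Route tiny gifts, or anything with a low bot score, through the Captcha Firewall
// before the card is ever charged.
const SMALL_GIFT_THRESHOLD = 2.0; // USD, from the suggestion above

function requiresCaptchaFirewall(amount: number, score: number): boolean {
  return amount < SMALL_GIFT_THRESHOLD || score < 0.5;
}
```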

REF: https://mail.google.com/mail/u/0/#search/jenn%40engagingnetworks.net+bot+catpcha/KtbxLrjCKQClkLcqnwmzqzSznJFKKmBpNB

bryancasler commented 3 years ago

Currently, interested parties in this conversation are:

bryancasler commented 3 years ago

Fernando also had some other ideas when posed with the following question:

I need some help understanding how to word a question. On Engaging Networks, a lot of the bot submissions don't seem to come through the browser; they're "a form post directly to the form endpoint"? Does that sound right?

I’m asking because I want to know if there is a way they could make it so a form can only be posted to from a certain domain (e.g. the page itself)

I suspect the vast majority of their clients do not use their API, so having every page endpoint openly accept posts seems like a weak point that could be hardened somehow (I hope).

Fernando responded with:

The suggestion you want to give, if what you have in mind is "making sure page 1 was LOADED before sending a POST to page 2," is implementing a CSRF token. How does that work? It is a totally backend, impossible-to-fake solution: when you (a new session) get to the donation page for the first time, the backend will generate a CSRF token and do two things with it:

  1. Add it to a hidden field on the donation form.
  2. Store the value in the session (readable only from the backend).

When you submit your donation, the script that's going to make the transaction compares the CSRF hidden-field value with the value stored in the session. If they're the same, you could not make that shit up, so that means you're for real and the donation is valid. If they're not the same, reject the request. Also, the CSRF value is removed from the session right away, which means you can't submit the same donation form twice.

Usually, when you're making a request via API, they should ask for a "Bearer" auth token. So you get the best of both worlds: if you have an auth token via the API, you don't need a CSRF token; if you don't, you can use the CSRF token to protect your forms…
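For what it's worth, a minimal sketch of the flow Fernando describes, assuming an Express app with express-session (route paths and names are illustrative):

```ts
import crypto from "node:crypto";
import express from "express";
import session from "express-session";

// Let TypeScript know about the field we add to the session.
declare module "express-session" {
  interface SessionData {
    csrfToken?: string;
  }
}

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: "replace-me", resave: false, saveUninitialized: true }));

// Step 1: render the donation form with a fresh token in a hidden field
// and stash the same value in the server-side session.
app.get("/donate", (req, res) => {
  const csrfToken = crypto.randomBytes(32).toString("hex");
  req.session.csrfToken = csrfToken;
  res.send(`
    <form method="POST" action="/donate">
      <input type="hidden" name="csrfToken" value="${csrfToken}">
      <!-- amount, card fields, etc. -->
      <button type="submit">Donate</button>
    </form>`);
});

// Step 2: only accept the POST if the hidden field matches the session value,
// then discard the token so the same form can't be submitted twice.
app.post("/donate", (req, res) => {
  if (!req.session.csrfToken || req.body.csrfToken !== req.session.csrfToken) {
    return res.status(403).send("Invalid or missing CSRF token.");
  }
  delete req.session.csrfToken;
  // ...hand the validated submission off to the payment processor here...
  return res.send("Thank you!");
});
```

An API caller authenticated with a Bearer token would skip the CSRF check entirely, as Fernando notes.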

bryancasler commented 3 years ago

One more idea is to have "out of the box" support for Cloudflare. Their built-in anti-bot / velocity limiting would be a fantastic solution that hopefully wouldn't create much work for EN, and it might also have the added benefit of better page load times / page performance thanks to their caching and optimization.