Closed: maltfield closed this issue 7 months ago
The difficulty could be configured such that a user must solve a proof-of-work problem that would take an average mobile phone ~60 seconds on average to solve before it can submit the account creation form.
As mentioned elsewhere, this is not all that different from the rate at which some bot accounts are being created as-is.
On my instance I was seeing 2-3 signups per minute from bot accounts, but they kept this up over the span of multiple hours, which let them create a lot of accounts. On an instance like mine, where there were ~5 legit users, this was immediately noticeable when the number of registered users jumped to >669 accounts from one day to the next.
And further, as mentioned, due to the distributed nature of Lemmy, spammers can leverage concurrency across instances to create even more accounts. Even if every individual server receives a relatively low number of sign-ups, the fact that the spammers can sign up on multiple servers at the same time means that they can effectively create far more accounts overall.
However, the idea of PoW as an anti-spam measure is interesting for sure.
I was thinking about how else we might leverage PoW, and I have a suggestion:
Harsh rules that make new users unable to post cross-instance until they have been accepted by the admin of their local instance. (As someone else suggested in another issue.) In this scheme an admin would typically review multiple local posts and comments before allowing the user to post cross-instance.
Manually hand out bans to suspicious accounts for even the tiniest sign of spamminess when their accounts are new.
Create an appeal process for bans that uses PoW, but where the PoW difficulty is measured in hours, not minutes. A legitimate user can afford to spend, say, 6 hours of CPU time to submit the PoW to appeal the ban of their account if they really care about having an account on the instance. For spammers this will significantly reduce the number of usable accounts they can have. (A rough sketch of how difficulty maps to solve time follows below.)
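For context on how the difficulty knob maps to solve time: with a hash-based PoW, finding a digest with d leading zero bits takes about 2^d attempts on average, so the difficulty an admin wants is roughly log2(hash rate × target seconds). A rough sketch (the phone hash rate is an assumed figure for illustration, not a benchmark):

```ts
// Sketch only: estimate the leading-zero-bit difficulty needed to hit a
// target solve time, given an assumed client hash rate.
// Expected attempts for d leading zero bits ≈ 2^d.
function difficultyBitsForTarget(hashesPerSecond: number, targetSeconds: number): number {
  return Math.ceil(Math.log2(hashesPerSecond * targetSeconds));
}

// Assuming a phone manages ~100k hashes/sec in the browser (made-up figure):
difficultyBitsForTarget(100_000, 60);       // ≈ 23 bits for a ~1 minute signup PoW
difficultyBitsForTarget(100_000, 6 * 3600); // ≈ 32 bits for a ~6 hour appeal PoW
```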
PS: Use a PoW algo that is resistant to GPU and ASIC mining. The algorithm used by Monero comes to mind.
It looks like crypto-loot is a service specifically designed to replace Google reCAPTCHA with a PoW using the Monero algorithm.
It looks like their software is open-source and can be self-hosted directly on the Lemmy instance.
Here's another that uses scrypt (a memory-hard key derivation function) inside WASM:
There's also mCaptcha (demo), another project that, like Lemmy, is funded by NLnet.
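For reference, all of these options implement the same client-side loop. Here's a minimal hashcash-style sketch using SHA-256 via the browser's Web Crypto API; any of the memory-hard algorithms mentioned above (scrypt, the Monero algorithm) would slot into the same place, and the server-issued challenge and difficulty are just parameters here:

```ts
// Find a nonce such that SHA-256(challenge + nonce) starts with
// `difficultyBits` zero bits. The challenge would come from the server.
async function solvePow(challenge: string, difficultyBits: number): Promise<number> {
  const encoder = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const digest = new Uint8Array(
      await crypto.subtle.digest("SHA-256", encoder.encode(challenge + nonce))
    );
    if (leadingZeroBits(digest) >= difficultyBits) return nonce;
  }
}

function leadingZeroBits(bytes: Uint8Array): number {
  let bits = 0;
  for (const b of bytes) {
    if (b === 0) { bits += 8; continue; }
    return bits + Math.clz32(b) - 24; // clz32 counts a 32-bit int; a byte only uses the low 8 bits
  }
  return bits;
}
```

The server only needs to recompute one hash to check the submitted nonce, which is where the verify-cheap/solve-expensive asymmetry comes from.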
This is a good idea, and should be relatively easy to implement compared to other potential solutions. mCaptcha looks ideal in our case because it's also written in Rust. PRs to implement that are welcome.
@Raspire52 would you be interested in integrating mCaptcha into Lemmy for new account signups? Do you have rust & TypeScript experience?
Hey, I have actually been following this conversation separately after our previous one and noticed you commented. Good to see the pros and cons argument.
Being honest, I have no experience with Rust, so I wouldn't want people to rely on me if time is of importance. TypeScript / JavaScript I have experience in, so that should be fine.
That said, I have quite a lot of time on my hands at the moment so I can definitely give rust a go and attempt to put this in place.
Python is what I know well.
Hope this helps at least somewhat. I'd be happy to try.
@Raspire52 please give it a try. And please post an update with your progress by Friday.
I think time is the thing that most of us don't have :)
As mentioned elsewhere, this is not all that different from the rate at which some bot accounts are being created as-is.
Does mCaptcha actually have a solution for this?
Quoting from the mCaptcha Readme:
no delay when under moderate load to 2s when under attack; PoW difficulty is variable
2s delay is nothing. I mean, even with a single-threaded bot, a 2s delay means 30 new accounts per minute (30*60*24 = 43,200 new accounts per day). And I guess most bots aren't just running on a single thread (or even on a single machine).
Does mCaptcha actually have a solution for this?
Yes of course. The delay is adjustable. Your quote literally says "PoW difficulty is variable". Please read the OP of this issue.
It'll take a bot roughly 0s to solve a graphical CAPTCHA but mCAPTCHA can't be cheated.
I think it's important to mention that there is a threshold at which some bot operators will give up and stop trying entirely (they'll target another website that's easier instead). Implementing this will certainly eliminate some bot herders entirely.
And there are also some bot operators that will persist no matter what. There is no system that will stop them. Implementing this will slow them down; that's mathematically guaranteed. The rate at which it slows them is entirely configurable by the instance administrator.
I guess most bots aren't just running on a single thread (or even on a single machine).
Memory-hard cryptographic hash functions mitigate the risk of threat actors with access to parallel computing resources.
And this is also why ctsc recommended using the Monero hash algorithm. Fortunately, all these problems have already been solved; we just need to integrate an existing solution into Lemmy.
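To make the "memory-hard" point concrete, here's what a single attempt could look like if scrypt were used as the inner hash. This is only a sketch assuming the scrypt-js npm package (an assumption on my part; mCaptcha and the projects linked above ship their own implementations), and the parameters and salt are illustrative:

```ts
import { scrypt } from "scrypt-js";

// Illustrative only: one PoW attempt using memory-hard scrypt instead of SHA-256.
// With N = 2^15 and r = 8, each attempt needs roughly 128 * N * r bytes ≈ 32 MiB
// of RAM, which is what makes massive GPU/ASIC parallelism expensive.
async function scryptAttempt(challenge: string, nonce: number): Promise<Uint8Array> {
  const enc = new TextEncoder();
  const N = 2 ** 15, r = 8, p = 1, dkLen = 32;
  return scrypt(enc.encode(challenge + nonce), enc.encode("signup-pow-salt"), N, r, p, dkLen);
}
```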
0.18.0 should be delayed IMO until something like this is implemented, given all the bot accounts currently being made.
I agree with @misterslime, given the number of instances being created and the spam accounts as well. Lemmy could restrict interactions by those new accounts for a specified period of time until the mods are sure that they're not spam bots. (source: https://the-federation.info/platform/73)
@Raspire52 were you able to start implementing mCaptcha in Lemmy?
I ask because I don't see any repos on your account yet, and it would be good if we could see your progress. I'm sure there's many here who can help if you're stuck.
I've spent the last couple of days, where I could, looking at Rust, and have just forked the repo whilst I attempt this without pull requests. I'll report back tomorrow on whether this is seriously feasible for me to do; as previously mentioned, I am new to Rust and am currently working out how Lemmy fits together.
Just so I am clear and there are no false pretences on my part: whilst I have time on my hands, because right now I am between jobs, I don't know how long this will take me to do, and as I said in my last comment, people shouldn't rely on me if time is important here.
I really do not want to hold up a release, so if another good Samaritan already knows Rust and can spend time on this issue, then please go ahead. Just mention it here; I'm keeping up with the comments. This is all a learning curve for me personally, but nothing like baptism by fire I suppose.
I'll come back tomorrow evening (it's 9.23pm here in the UK) to confirm what I have done so far. I'll reach out in the meantime if I have any questions about the existing code.
Having issues setting up the mCaptcha containers on the default config - not sure if it's my environment; I tried to debug using Python 3.9.
Raised issue https://github.com/mCaptcha/mCaptcha/issues/89 over on their main repo, waiting on a response
Also I found this in their closed issues, https://github.com/mCaptcha/mCaptcha/issues/37
Tor might be a DoS attack vector, even today. Correct me if I'm wrong.
@Raspire52 - I'm pretty handy with docker so I popped over and saw a few OS level things you might be running into in your other issue. I responded to you in that thread.
I'll be looking to dive into this work as well this weekend. In the meantime, I'm happy to offer my Linux knowledge to get you past the setup process.
At this point I am a bit confused as to the implementation details - this solution appears to be meant as an addition to a docker-compose stack as an app all its own. We could implement this similarly in the prod docker-compose file, but it would require more technical know-how to operate, and likely server access to manage.
Ideally this would be built directly into lemmy and use the same DB (though a different table) and would still require the new inclusion of a Redis server for full functionality.
I wonder if the maintainers would accept a "bolt on" solution in the docker-compose file in the meantime while deeper integration is worked through?
Thanks!
I was also thinking something similar after looking into mCaptcha in more depth, regarding Redis and packaging everything together for Lemmy. I will look at my problem with a fresh pair of eyes later.
I have posted more comments on the issue I opened over on the mCaptcha project, people can follow some further updates from me there.
Once I get it working, it should be feasible to complete this issue as long as the Lemmy maintainers/creators are OK with packaging up Redis and mCaptcha containers with Lemmy.
I do think the Tor attack vector could be an issue; otherwise, what's the point of implementing this if nefarious actors can mount a DoS attack on Lemmy instances via Tor? Here is the link to that particular issue again: https://github.com/mCaptcha/mCaptcha/issues/37
To stop this from occurring we would also need to implement a Tor exit node blocking list and keep it up to date (a rough sketch of that follows below): https://check.torproject.org/torbulkexitlist
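For what it's worth, keeping that list fresh is only a few lines. This is a hypothetical sketch assuming a runtime with fetch available, not a recommendation:

```ts
// Hypothetical sketch: periodically refresh the Tor bulk exit list and check
// client IPs against it.
let torExitNodes = new Set<string>();

async function refreshTorExitList(): Promise<void> {
  const res = await fetch("https://check.torproject.org/torbulkexitlist");
  const body = await res.text();
  torExitNodes = new Set(body.split("\n").map((line) => line.trim()).filter(Boolean));
}

function isTorExit(ip: string): boolean {
  return torExitNodes.has(ip);
}

setInterval(refreshTorExitList, 60 * 60 * 1000); // refresh hourly
```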
I am going to continue looking at why I am having issues setting up the mCaptcha containers on my Oracle Cloud VM (it's pretty neat to learn regardless). I am using a forked Fedora OS created by Oracle, so that may be the issue, but until there has been further discussion on the points I raise here, I am not going to put much extra work in besides that. Happy to discuss further.
@Raspire52 I don't understand what you're saying about DoS attacks over Tor. DDoS attacks can happen over the Tor network or the clearnet. Using mCaptcha doesn't make this easier or harder. It is an attribute of cryptographic hashes used in HashCash that it's computationally extremely cheap to verify the solution is correct (server-side) but computationally very expensive to find the solution (client-side).
This issue is not attempting to provide DoS protection. DoS protection is unrelated to this issue.
We do not want to wholesale block Tor. I use Tor on Lemmy every day. Millions of people use Tor every day; it's an important tool for countless at-risk Internet users for both privacy and security. Tor is not the problem. Signups for new accounts via a simple POST request with no rate-limiting are the problem.
Edit: it wasn't explicitly stated in the OP of this issue, but the expected implementation of this is to merely have an additional form input that includes the hashcash solution in-line with the rest of the signup form, so that the PoW nonce solution is submitted along with the username, password, etc. I do not think we should overcomplicate this by having an additional interstitial page running on some highly-cached, terabit-capable front-end proxy server (i.e. Cloudflare) to provide DDoS protection in front of the signup page. That's an interesting idea, but that's not what we're trying to achieve with this feature.
See this gif from 💥PoW! Captcha for a sketch of the expected UX (note that it's just a form input that calculates the hashcash solution in the background while the user fills-out the form inputs). Verifying the PoW solution server-side is a near-negligible computation that should not add any additional surface area for a DoS attack.
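To make the flow concrete, here's a hypothetical sketch of that wiring: the solver runs in the background as soon as the signup page loads and drops its result into a hidden input, so submitting the form also submits the nonce. The element IDs are invented (nothing from lemmy-ui), and solvePow() is the sketch from earlier in this thread:

```ts
// Hypothetical signup-form wiring; "signup-form", "pow-challenge" and
// "pow-nonce" are made-up IDs used only for illustration.
window.addEventListener("load", async () => {
  const form = document.getElementById("signup-form") as HTMLFormElement;
  const challenge = (document.getElementById("pow-challenge") as HTMLInputElement).value;
  const nonceInput = document.getElementById("pow-nonce") as HTMLInputElement;
  const submit = form.querySelector("button[type=submit]") as HTMLButtonElement;

  submit.disabled = true; // or show a spinner while the PoW is being solved
  nonceInput.value = String(await solvePow(challenge, 23)); // ~1 minute difficulty per the earlier estimate
  submit.disabled = false;
});
```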
@maltfield fair point, and noted. So then a risk to using mCaptcha would be that a DoS attack can be mounted via Tor which is extremely cheap / free for the attacker to execute, not expensive.
"I can run multiple instances of mCaptcha to distribute the load but still doesn't solve the problem: the attacker will be able to burn resources for free"
The underlying spam-user sign-up issue can be solved though, yes, as it doesn't appear mCaptcha can be brute-forced to gain access to a Lemmy instance via the issue I'm bringing attention to. My point is mainly that if an attack were to be mounted, a DoS of the instance could be executed via Tor for free.
Some information I haven't been clear on either: going back to our original conversation on the issue I raised on your repo, the reason I wanted to start some sort of project to verify real users signing up to Lemmy was the increased load those spam users were causing on instances; some were grinding to a halt. I actually don't think I have seen any spam posts from those users yet, but that might be down to good moderation by Lemmy community moderators. And to further add, I do appreciate the pointers here.
Let me get back on point. I'll re-read this whole issue again now too.
I can see where you mention future expansion; I'll work on the login page, and then we can continue the discussion for other parts of Lemmy later.
a risk to using mCaptcha would be that a DoS attack can be mounted via Tor
Sorry, that's not a risk of using mCaptcha. That's a risk of running a website. Using or not using mCaptcha here makes no difference in regard to DoS attacks.
due to the increased load those spam users were causing on instances,
Ah, that's an interesting problem. Certainly if you implement this feature (and mCaptcha is available on all Lemmy installs), it would unlock a future feature that utilizes mCaptcha on some highly-cached fronting interstitial page (that can mitigate some small DoS attacks), but I think we should set that aside for the future.
Sorry, that's not a risk of using mCaptcha. That's a risk of running a website. Using or not using mCaptcha makes no difference in regard to DoS attacks.
Yeah, true, can't argue with that.
Understood, more updates to follow over next few days.
Once I get it working, it should be feasible to complete this issue as long as the Lemmy maintainers / creators, are OK with packaging up Redis and mCaptcha containers with Lemmy.
I wasn't aware of this. I'm getting more sceptical of this, especially with comments indicating that this is essentially a fancy rate limit and not a real captcha. We already have rate limits in place, so I don't see much benefit in adding two more containers to add a different type of rate limit.
And the point of a captcha is that it is (supposed to be) easy to solve for humans, but difficult to solve for computers. In this case it's actually easy for computers; they just need to wait a few seconds for each account (which might still be acceptable to many attackers).
@Nutomic , while server side rate limiting is one thing, I think the asymmetric cost advantage in using approaches such as mCaptcha should be re-emphasized. While idle time could very well be a cheap commodity for a malicious actor, client side CPU time is not.
With server side rate limiting, such actors could easily spin up a number of async threads, or queued jobs, for sending HTTP requests from a shared pool of bot accounts, allowing attacker resources to efficiently schedule or sleep on requests to different instances, and scale up to however many simultaneous client connections they'd prefer to retain in parallel.
With client-side PoW, attackers can no longer scale up throughput as efficiently via temporal parallelization, as the asymmetric cost advantage is no longer in their favor. Attackers would then have to scale up in compute power, incurring greater opportunity costs, if not monetary costs for hardware utilization.
Another thing I like about this approach is the accessibility factor, allowing blind users greater ease in registration. One should note that captchas have been cited among the r/blind community's concerns with Reddit, and as a motivating factor to migrate to Lemmy:
After everything that’s happened, we wanted more control over our own api and interface. What if we just set up a community somewhere else and then they added an inaccessible captcha? This way we can run our instance to suit ourselves, but still interact with communities in other instances. https://rblind.com/comment/37558
We already have rate limits in place, so I dont see much benefit adding two more containers to add a different type of rate limit.
Lemmy's current rate-limiting is IP-based. IP-based rate limiting is ineffective and bad for privacy.
First, it's easy for an attacker to bypass IP-based rate-limiting simply by changing their IP address. As instance admins complaining about the new wave of bot accounts have said, the account creations came from multiple distinct IP addresses. IP-rate limiting would be ineffective at preventing signups in this case.
A Hashcash solution rate-limits the individual's session, not the IP address. This is smarter and works better.
IP-based rate-limiting is harmful to at-risk people who need to use privacy technologies like VPNs and Tor. Lemmy's current rate-limiting is "dumb" and cannot differentiate between different users with the same IP address.
Hashcash adds rate-limiting by session. This is more respectful of users' right to privacy, as it doesn't penalize them for using privacy tools.
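To illustrate what rate-limiting the session (rather than the IP) could look like server-side, here's a hypothetical sketch, not how mCaptcha or Lemmy actually store things: the server hands out a random single-use challenge with the signup form, and verifying the returned nonce costs one hash.

```ts
import { createHash, randomBytes } from "crypto";

// Hypothetical sketch: challenges are tied to a signup attempt, not an IP, and
// are single-use, so users sharing a VPN or Tor exit don't affect each other.
// The in-memory Map is purely for illustration.
const pendingChallenges = new Map<string, number>(); // challenge -> difficulty bits

function issueChallenge(difficultyBits: number): string {
  const challenge = randomBytes(16).toString("hex");
  pendingChallenges.set(challenge, difficultyBits);
  return challenge; // embedded in the signup form served to this session
}

function verifySolution(challenge: string, nonce: string): boolean {
  const difficultyBits = pendingChallenges.get(challenge);
  if (difficultyBits === undefined) return false; // unknown or already used
  pendingChallenges.delete(challenge);            // single-use
  const digest = createHash("sha256").update(challenge + nonce).digest();
  return leadingZeroBits(digest) >= difficultyBits; // verification is one cheap hash
}

// Same helper as in the client-side sketch earlier in the thread.
function leadingZeroBits(bytes: Uint8Array): number {
  let bits = 0;
  for (const b of bytes) {
    if (b === 0) { bits += 8; continue; }
    return bits + Math.clz32(b) - 24;
  }
  return bits;
}
```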
In this case its actually easy for computers
A graphical CAPTCHA is solvable in about 0 seconds. As you've pointed out before, it is easier for computers to solve than for humans. The hard reality is that there is no way to actually stop bots from creating accounts in a way that won't create false positives that harm humans. The name of the game is slowing them down. We have to accept that.
they just need to wait a few seconds for each account
My original recommendation was to target 60 seconds, but this can be set to 5 minutes or 6 hours if an instance admin wants.
A graphical CAPTCHA is designed to slow down human account creation to, what, 10 seconds? But bots can solve graphical CAPTCHAs faster than humans in ~0 seconds. The fact is that there is no cheating on hashcash. And selection of a good (memory-hard) cryptographic hash algorithm prevents powerful computers from being able to find the solution faster by throwing more compute resources at it.
I wasnt aware of this [adding two more containers]
I don't advocate for mCAPTCHA specifically. If there's another hashcash implementation that would integrate better into Lemmy, that's fine. I think we should just pick whatever is easiest to integrate.
I've linked to 6 potential solutions, and you put your weight behind mCaptcha because it's Rust-based. Here are another 8 that are Rust-based.
I had success standing up mCaptcha on my local home-server with 16 GB of memory. My Oracle Cloud VM is purposefully lower spec, but may be too low. I have read up on Redis and how much resource it requires; their official docs mention multiple GBs, but on my home-server it's only a few MB. I will load test this once I've got a working proof-of-concept set up.
Is there a recommended server spec for hosting a Lemmy instance with 10,000 registered users, with ~2000 active users logged in at the same time? What's the server spec of Lemmy.ml or Lemmy.World?
It's important to get mCaptcha working on my Oracle Cloud VM, as this would be indicative of what Lemmy admins would experience in the real world on a similarly specced instance host.
I believe, right now, many Lemmy instances are running on fairly lightweight-specced servers. If any Lemmy instance admins are reading this, it would be good to understand what specifications your instance hosts have, so please reply with this information.
However now that I have it working on my local-homeserver, I can split my work and partially move on.
1) Learn how to integrate this with a login page; at first glance it appears this would be done via mCaptcha's API. 2) Gain an understanding of the DBs for both Lemmy and mCaptcha; I have only skimmed that part up till now.
@Nutomic I saw your comment. I'm happy to get a working proof-of-concept of this together, and before I send in any pull requests to the main project I'll share my forked POC instance with you to see what you think, and we can go from there.
I am mainly performing this work as a bit of fun for myself, and trying to help the community at the same time.
A bit off-topic, and I'm not an instance admin, but from what I gather a lot of the smaller instances run on VMs or Digital Ocean droplets (or Hetzner, OVH, Linode, etc.), and the bigger instances solved load issues by just switching to a dedicated machine. Probably the leading expert on this is the admin of lemmy.world. See @rudd@lemmy.world's post history for more info.
The creator of mCaptcha advises that Redis is only needed for serving multiple websites from a single instance; for this issue it looks like we can run with it embedded instead, so that's interesting.
Using lemmy_server host memory as stateful storage across multiple requests is problematic - it won't work well on instances using horizontal scaling.
Lemmy already has a couple of issues with horizontal scaling, my humble opinion is that we should work on reducing these issues, not adding to them.
By the way, I would like to voice my support for adding a Redis instance to Lemmy deployments - it would also open up the door for some nice caching opportunities in the future (which go beyond just HTTP response caching).
I'm chipping away at this. I now have an understanding of how to embed the PoW captcha into the login page, and I'll attempt it this week on my localhost webserver. I have been spending time reading the mCaptcha manual from their website rather than the GitHub docs.
Initially I'll stick with Redis due to the previous comment, but will probably create a separate proof-of-concept for an embedded version after that. I will also need to load test my POC instances once this is complete. I can get mCaptcha set up without issue on my localhost web server, but I am still having issues within my cloud VMs.
Towards the end of this week I am visiting a friend in London so it will be next week when I can provide an update that is more substantial.
@Raspire52 great to hear you're making progress, thank you!
Please make sure that you are pushing your commits to github daily so that we can follow-along as you learn and build.
You could also just require linking to Reddit/social media OR Keybase, or maybe a variable donation amount to activate the account.
Update: It, umm, looks like Raspire52 (now "ghost") deleted their account and didn't leave us any code with their progress before they left.
So this is now up-for-grabs if anyone with rust/typescript experience wants to take a stab at integrating hashcash (eg mCaptcha or pow-captcha) into lemmy
If someone wants to do this, it'd be best to make a rust captcha-type crate outside of lemmy, that we could then include.
There doesn't seem to be enough interest to implement this, and anyway it's not clear if it would work in practice.
Lemmy user for 1+ years, in favor of hashing as an anti-spam mechanism. This could also be used to mine Bitcoin and/or XMR and fund Lemmy development, maybe with a portion also sent to the instance being registered on as well.
Worth noting that Tor has implemented a similar hash function as an anti-DDoS measure https://blog.torproject.org/introducing-proof-of-work-defense-for-onion-services/
This is a terrible idea. It won't do a single thing to prevent DoS attacks, and it certainly won't stop any spam.
Tor's proof of work works very differently. When the network is congested, such as when a DoS attack is occurring, a PoW system increases the computational difficulty of accessing the network. All clients must then perform the work to gain any access to the network at all, slowing down the attackers and relieving network pressure.
The only way this could possibly prevent a DoS against Lemmy is if the attack involved creating as many user accounts as possible, and there are much easier ways to perform a DoS attack on a web service that a PoW "captcha" can't possibly prevent, which is precisely why no one does a DoS attack that way.
Overall, this is basically just a fancy rate limit where the amount of time users wait depends on the performance of their CPUs. It's less than useless.
Requirements
Is your proposal related to a problem?
The spam-prevention methods available on Lemmy are lacking, and I'm afraid that instance admins may resort to anti-privacy solutions (eg cloudflare anti-bot fronting), which would be horrible for the privacy of users.
It's important that Lemmy provides admins with built-in anti-spam protections that don't prevent at-risk users who need to use privacy tools (eg firefox privacy addons or Tor Browser) from being able to participate in discussions.
Describe the solution you'd like.
This issue is a feature request to allow instance admins to enable hashcash to rate-limit new accounts (as a means to minimize spam).
Hashcash is a proven cryptographic system for rate-limiting using a proof-of-work algorithm. The whitepaper was published in 2002, and there are already some TypeScript implementations available on GitHub.
Describe alternatives you've considered.
Many sites will just auto-ban user accounts if their fingerprint cannot be uniquely identified (eg prolific Akamai "Access Denied" error messages across the Internet). This is bad for marginalized folks who need anonymity.
Graphical CAPTCHAs are an option, but they often harm user privacy (eg Google ReCAPTCHA) and sophisticated actors are still able to bypass them.
There is no way to bypass hashcash other than to solve the proof-of-work problem, which is a cryptographically well-proven system of probability with adjustable difficulty.
Additional context
The suggested implementation would allow Lemmy instance admins to turn on hashcash for users signing up for a new account. The difficulty could be configured such that a user must solve a proof-of-work problem that would take an average mobile phone ~60 seconds on average to solve before it can submit the account creation form.
This feature could also be added for any other writes to Lemmy, including submitting links and comments. The instance admin could require new accounts to solve a proof-of-work with every write-operation if they have less than some amount of positive karma and/or until their account is > X days old.
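To sketch how that admin knob might look (all names and thresholds here are invented for illustration, not an actual Lemmy setting):

```ts
// Hypothetical per-instance PoW policy; nothing here reflects real Lemmy config.
interface PowPolicy {
  signupDifficultyBits: number; // tuned so an average phone takes ~60s
  writeDifficultyBits: number;  // smaller difficulty for posts/comments
  minAccountAgeDays: number;    // accounts older than this are exempt
  minScore: number;             // accounts with at least this much karma are exempt
}

// Require a PoW solution on a write if the account is both new and low-karma
// (the OP suggests "and/or"; this sketch uses "and" for simplicity).
function writeRequiresPow(policy: PowPolicy, accountAgeDays: number, score: number): boolean {
  return accountAgeDays < policy.minAccountAgeDays && score < policy.minScore;
}
```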