laravel / framework

The Laravel Framework.
https://laravel.com
MIT License

Prevent breadth-first attacks in login throttling #28388

Closed: reinink closed this issue 4 years ago

reinink commented 5 years ago

Note, I have already discussed this with Taylor to verify that he was okay with me posting this security related "issue".

I recently contracted a security firm to do a penetration test on a Laravel 5.8 app. They had concerns with how login throttling works in Laravel. Here are their findings:

Observation

One or more password validation mechanisms were designed or configured in a way that did not offer sufficient protection to prevent password guessing attacks.

Implication

An attacker may be able to discover the password of one or more user accounts, which may result in unauthorized access to the authenticated portion of the system, or the ability to perform an account takeover.

Findings

The testing team discovered that, though there were mechanisms in place which prevented target password guessing against a single account, these were insufficient to prevent breadth-first guessing attacks. After approximately 50 guesses against a single account, the server began rate-limiting attempts against the targeted account. This mechanism was not activated when many attempts were made against different accounts, allowing breadth-first attacks.

The observed behavior was presumed to be a result of elapsed time between attempts against individual accounts being long enough to avoid triggering the identified rate-limiting mechanism, though the testing team was unable to determine exactly how this functionality worked. Breadth-first guessing attacks appeared to be possible against as few as eight accounts without causing the application to begin rejecting login attempts.

Figure 6: Over 800 login attempts were made without triggering the application’s rate-limiting mechanism.

Additionally, when the rate-limiting functionality was triggered it did not function as intended. The server responded with an error noting the user was required to wait a given number of seconds before making more login attempts, though subsequent attempts within the time frame were still able to authenticate to the application, effectively only blocking the specific attempt which caused the error to trigger.

Figure 7: A login attempt, highlighted in blue, returned an error noting there had been too many attempts against an account and the account had been locked for 34 seconds, though another attempt made only a few seconds later, highlighted in red, was still able to authenticate using the correct credentials.


Recommendation

Modify the components and/or configuration of the password validation mechanism to help prevent password guessing attacks. For typical scenarios (breadth-first attacks), this requires multiple components working in concert to implement an effective solution. Limiting the rate of submissions and preventing users from choosing passwords easily guessed by attackers – particularly those that meet typical complexity requirements, such as Spring2019 – are the most effective means of preventing an attack.

The login throttling in Laravel is done on a per username (email) basis, not simply based on the user's IP address. The key is set in ThrottlesLogins@throttleKey, and is a combination of the username and IP address. What this means is that an attacker can attempt logins on different accounts without any rate limiting. They call this a "breadth-first attack".
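For reference, the key built in ThrottlesLogins@throttleKey is, roughly, the lowercased username joined with the request IP. A minimal standalone sketch (plain PHP, no framework classes; the real method reads both values from the Request object):

```php
<?php
// Sketch of how Laravel 5.8's ThrottlesLogins@throttleKey builds its
// rate-limit key: the lowercased username joined with the client IP.
// The real method reads both values from the Request; plain args here.
function throttleKey(string $username, string $ip): string
{
    return strtolower($username) . '|' . $ip;
}

// Two attempts against *different* usernames from the same IP yield
// different keys, so no single counter ever accumulates -- exactly the
// gap that enables a breadth-first attack.
echo throttleKey('Alice@example.com', '203.0.113.7'), "\n"; // alice@example.com|203.0.113.7
echo throttleKey('bob@example.com', '203.0.113.7'), "\n";   // bob@example.com|203.0.113.7
```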

What's maybe more concerning is that even when rate limiting has been triggered (you get a 422 with "Too many login attempts. Please try again in {x} seconds."), there is nothing stopping you from trying again with a different email address and logging in immediately if that authenticates. Even if Laravel does continue to track login attempts using a username and IP address combination, I think it would make sense to prevent subsequent login attempts until the rate limit period has been hit, regardless of what username you're using.

My immediate thought was that a better approach might be to simply use the IP address as the key. However, from some research I don't know if that's reliable or safe either, since it's possible to spoof an IP address. Further, if you're behind a load balancer or proxy, you have to rely on X-Forwarded-For, which is even less reliable.

Further, even if the IP address was reliable, it can be problematic for large organizations that all share the same IP address. If one person at the organization caused rate limiting, all other people there would also be forced to wait.

I don't have a solution for this yet, but wanted to post this to generate some discussion.

rmccullagh commented 5 years ago

Is there an alternative rate limiting library for PHP?

browner12 commented 5 years ago

Could we integrate a session-based key? It could help for browser users, but doesn't help in stateless situations.

Sladewill commented 5 years ago

I believe Sentinel handles this differently; it already has the IP restrictions as well as global throttling. So if attacked via DDoS it would throttle everyone connecting, thus handling that. The framework itself probably needs the same handling.

reinink commented 5 years ago

Hmm, global throttling, that's interesting. The relevant Sentinel docs: https://cartalyst.com/manual/sentinel/2.0#throttle

laurencei commented 5 years ago

My immediate thought was that a better approach might be to simply use the IP address as the key. However, from some research I don't know if that's reliable or safe either, since it's possible to spoof an IP address. Further, if you're behind a load balancer or proxy, you have to rely on X-Forwarded-For, which is even less reliable.

Well, we already use IP + username, so using just the IP as an "overarching" rate limiter doesn't seem to be any less reliable?

Further, even if the IP address was reliable, it can be problematic for large organizations that all share the same IP address. If one person at the organization caused rate limiting, all other people there would also be forced to wait.

Having a higher "IP" rate limiter than the local "IP + username" might solve this.

i.e. 5 attempts for "IP + username", but 25 attempts (configurable) for IP only would probably stop the breadth-first attack.

And then if a single user at large organisation forgets their password, they get locked out after 5 attempts, but the rest of the organisation (on the same IP) does not (because global limiter is not reached).

davedevelopment commented 5 years ago

Stacking the throttles would be my approach, using separate keys based on the usernames and IP addresses.

Further, even if the IP address was reliable, it can be problematic for large organizations that all share the same IP address. If one person at the organization caused rate limiting, all other people there would also be forced to wait.

There's a chance of disruption for genuine users here, but the chances are fairly small, I think. You only have to throttle enough to stop someone from brute-forcing passwords, which is less likely to affect someone typing their password incorrectly a few times. That said, if you have that many users on one IP, it might be worth whitelisting it somehow.

Tiering the throttles based on the same unique key gives you a "burst" effect. The system can handle a burst of authentication requests (e.g. when everyone logs on at 9am), but the "wider" throttles take care of any prolonged attacks that work around the "narrower" throttles.

Numbers are just examples:

  • <ip>|1: 200 attempts per 5 minutes
  • <ip>|2: 1000 attempts per hour
  • <ip>|3: 5000 attempts per 24 hours
  • <username>|1: 10 attempts per 5 minutes
  • <username>|2: 20 attempts per hour
  • <username>|3: 50 attempts per 24 hours
  • <username>|4: 150 attempts per month
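A minimal sketch of the tiered "burst" idea above, using a plain in-memory counter store instead of Laravel's cache. The class and method names are hypothetical, and the numbers are the illustrative ones from the comment:

```php
<?php
// Tiered fixed-window throttle: each tier is [suffix, max, window in
// seconds], and a request is allowed only if every tier has room.
class TieredThrottle
{
    /** @var array<string, array{int, int}> key => [count, windowStart] */
    private array $hits = [];

    /** @param array<int, array{string, int, int}> $tiers [suffix, max, window] */
    public function __construct(private array $tiers) {}

    public function attempt(string $key, int $now): bool
    {
        // First pass: every tier must still have room before we consume any.
        foreach ($this->tiers as [$suffix, $max, $window]) {
            [$count] = $this->bucket($key . '|' . $suffix, $window, $now);
            if ($count >= $max) {
                return false; // some tier is exhausted -> reject the attempt
            }
        }
        // Second pass: record the hit in all tiers.
        foreach ($this->tiers as [$suffix, $max, $window]) {
            $k = $key . '|' . $suffix;
            [$count, $start] = $this->bucket($k, $window, $now);
            $this->hits[$k] = [$count + 1, $start];
        }
        return true;
    }

    // Fixed-window counter: reset once the window has fully elapsed.
    private function bucket(string $k, int $window, int $now): array
    {
        $b = $this->hits[$k] ?? [0, $now];
        if ($now - $b[1] >= $window) {
            $b = [0, $now];
        }
        return $b;
    }
}

// Per-IP tiers, echoing the example numbers above.
$byIp = new TieredThrottle([
    ['1', 200, 300],    // 200 attempts per 5 minutes
    ['2', 1000, 3600],  // 1000 attempts per hour
    ['3', 5000, 86400], // 5000 attempts per 24 hours
]);

for ($i = 0; $i < 200; $i++) {
    $byIp->attempt('203.0.113.7', 0);
}
var_dump($byIp->attempt('203.0.113.7', 0));   // bool(false): 5-minute tier is full
var_dump($byIp->attempt('203.0.113.7', 301)); // bool(true): that window rolled over
```

The wider tiers keep counting across window resets of the narrower ones, which is what gives the "burst is fine, prolonged attack is not" behaviour described above.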

On a side note, it would be pretty awesome for the "default" settings of a framework to be battle-hardened against a pen test like this :+1:

Caveat emptor: I am not a security researcher

reinink commented 5 years ago

@davedevelopment Dig it. Would you add a global tier, in case of DDoS attacks or IP spoofing?

davedevelopment commented 5 years ago

@davedevelopment Dig it. Would you add a global tier, in case of DDoS attacks or IP spoofing?

I'm not sure really; having read the Sentry docs, my first reaction would be that you're enabling a DDoS. Fire up enough bots running enough failed login attempts to hit the throttle and then nobody can log in. My gut feeling is that if you think there is a risk there, it should be handled elsewhere in your infrastructure.

reinink commented 5 years ago

Yeah, I see what you mean. This probably depends on the app. I'd rather have no users be able to log in than have an attacker able to guess a username/password combo (as unlikely as that realistically is).

laurencei commented 5 years ago

<username>|1: 10 attempts per 5 minutes

Would/should this be <username+ip> though? Because otherwise anyone can lock someone's account?

davedevelopment commented 5 years ago

Would/should this be <username+ip> though? Because otherwise anyone can lock someone's account?

Personally, I'd say no, but I guess it always depends. If someone is making a targeted effort to guess a particular person's password, I'd rather the actual person gets locked out of their account than increase the number of chances the attacker gets.

Sladewill commented 5 years ago

This should probably all be configurable to allow everyone different levels of protection based on their personal preferences; otherwise everyone's going to argue they want it a different way.

laurencei commented 5 years ago

If someone is making a targeted effort to guess a particular person's password, I'd rather the actual person gets locked out of their account than increase the number of chances the attacker gets.

But this makes the site subject to a different attack, where you can lock people out of their accounts just by spamming the login page with known logins. The current throttle system avoids this exact issue (by using username + IP), so we wouldn't want to move away from that.

trevorgehman commented 5 years ago

Probably the simplest solution is to just add the built-in ThrottleRequests middleware to the login route to handle breadth-first attacks.

By default, for an unauthenticated request, it uses <domain>|<ip>.

For a large DDOS attack, I would imagine you'd need a more robust firewall solution in any case.

imanghafoori1 commented 5 years ago

@laurencei is it possible to keep a list of trusted IPs from the user's login history, to alleviate the problem? The server says: "You have used this IP before and you are using it again, so no problem... you are logged in."

GertjanRoke commented 5 years ago

But what if we added a second throttle that checks only the IP and has a higher limit, and made both configurable inside the auth config file? Maybe fire an event when the IP limit is reached, so that people can hook into it for notifications as they like.

We are not going to find a solution for DDoS attacks, but at least with this approach you can block the "breadth-first attack" in a way.

allenjd3 commented 5 years ago

What are the downsides of a session-based key? Would that use too many server resources?

donnysim commented 5 years ago

So does this mean that throttling for login is useless at the moment? Even if you're locked out, you'll still be logged in if you send the correct password? Is getting the too-many-attempts error just another "Incorrect password" error?

Shkeats commented 5 years ago

@reinink It's not a complete solution, but have you considered triggering a good captcha after x attempts per IP, per user, or both, as part of the throttling mechanism? This effectively throttles an automated attack, and it's the kind of check you can apply on a per-user, per-IP, or global basis with only minimal inconvenience for genuine users. Any attackers who wanted to proceed once the captcha was in place would face a significantly higher cost per guess, depending on the quality of the mechanism used (think Google reCAPTCHA, which even as a human I frequently fail).

If you're in a stateless scenario, you should probably be using a longer API key that would be much harder to brute-force anyway.

Tarasovych commented 5 years ago

What if a database were used for rate limiting (as an out-of-the-box option)?

|id|user_id|attempts|created_at|updated_at|

The workflow could look like this:

Sladewill commented 5 years ago

Say you get a DDoS and you insert into your database? That seems like a really bad idea in that sense, as you are now pushing the attack at your database. As some other people have suggested, it may be better to handle this at a hardware/firewall layer before it hits the application.

barryvdh commented 5 years ago

@donnysim

So does this mean that throttling for login is useless at the moment? Even if you're locked out, you'll still be logged in if you send the correct password? Is getting the too-many-attempts error just another "Incorrect password" error?

That's what I read at first too, but as far as I can tell you can still only try different accounts. You cannot retry the same username (/email) within the locked period.

DarkGhostHunter commented 5 years ago

I would vote to escalate to a pure IP block after more unsuccessful login attempts, but it seems that adding reCAPTCHA to the IP's requests, rather than throttling, is the best solution to stop these kinds of attacks.

The block could kick in at a higher number of failed attempts, and the time to release the lock should be higher too. If a whole network gets throttled because someone decided to brute-force their way into the application, nothing is lost, since it is a problem for that user, not the application.

HDVinnie commented 4 years ago

Any news on this? It seems like it should be a priority. It seems odd that it's only locking out an IP for a specific username/email and not the IP itself. Botnets, for instance, can hit a username/email until it's locked and then just move on to a new username/email, since the IP is still good for use with a different username/email. For sites also using something like a failed-login system that emails the user and logs failed login attempts in the DB, your DB is now also getting hit a lot, and emails are sent too.

reCAPTCHA is not a good solution IMO when it comes to botnets making breadth-first attacks on a login page.

DarkGhostHunter commented 4 years ago

reCAPTCHA is not a good solution IMO when it comes to BotNets trying to breadth-first attacks a login page.

v2 or v3?

HDVinnie commented 4 years ago

Both have failed for me.

DarkGhostHunter commented 4 years ago

Both work wonders for me. Check your configs.

HenkPoley commented 4 years ago

Additionally, from an IPv6 source address, you can freely pick from 2^64 temporary addresses. Meaning a particular IP-address block, as currently implemented, is easy to circumvent.

PHPGuus commented 4 years ago

I came across the same situation, although my pen testers did not complain about breadth-first... regardless, I needed a solution, so I ended up doing:

1. New ThrottleRequests middleware that acts upon response status code 401
2. Takes (separate) request signature and username hashes
3. Stores the hashes in a DB table using a sliding window, so there are always at most the maximum number of hits in the table per hash key
4. Checks against config whether there were too many hits; if so, calculates the time in seconds between the oldest and the newest
5. If there are more hits than allowed, and the period is shorter than or equal to the allowed (configured) period, the IP address on the request is banned
6. Banning results in a Log::critical call and an email being sent (if mail and a mailable for this solution are configured)

This is probably "quick and dirty" as generic approaches go, and might need more config options (DB / cache / Redis / memory / other drivers). It needs performance testing, and verification as to whether it can be applied to any route rather than just the login route(s).
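The sliding-window check in steps 3-5 above could be sketched like this (plain PHP, with an in-memory array standing in for the DB table; the class and method names are hypothetical):

```php
<?php
// Sliding-window ban check: per key we keep at most $maxHits failure
// timestamps. When the list is full and the span between the oldest
// and newest hit fits inside $period seconds, the key gets banned.
class SlidingWindowBan
{
    /** @var array<string, int[]> key => failure timestamps */
    private array $log = [];

    public function __construct(private int $maxHits, private int $period) {}

    /** Record a failed attempt; return true if the key should be banned. */
    public function recordFailure(string $key, int $now): bool
    {
        $hits = $this->log[$key] ?? [];
        $hits[] = $now;
        // Sliding window: keep only the newest $maxHits entries.
        if (count($hits) > $this->maxHits) {
            $hits = array_slice($hits, -$this->maxHits);
        }
        $this->log[$key] = $hits;
        return count($hits) === $this->maxHits
            && ($hits[count($hits) - 1] - $hits[0]) <= $this->period;
    }
}

$ban = new SlidingWindowBan(maxHits: 5, period: 60);
$banned = false;
foreach ([0, 10, 20, 30, 40] as $t) {          // 5 failures within 40 seconds
    $banned = $ban->recordFailure('203.0.113.7', $t);
}
var_dump($banned); // bool(true): 5 hits spanning 40s fit in the 60s window
```

Unlike a fixed window, this never allows a burst to straddle a window boundary, which matches the "oldest vs newest" comparison described above.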

If anybody wants to discuss this further, please contact me on twitter.

garygreen commented 4 years ago

I think the way Laravel does it at the moment does make sense, that is, throttling by IP + username. However, in addition to this, to prevent "breadth-first" attacks maybe there should be a global throttle limit on the authentication route. It would be good for Laravel to set a sane default out of the box.

In LoginController.php just add a global throttle limit in the construct method:

    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('guest')->except('logout');
        $this->middleware('throttle:20,60')->except('logout'); // global login limiter 20 login attempts every hr
    }
reinink commented 4 years ago

I agree @garygreen, a global throttle would be a simple way to mostly mitigate this risk. The rules would have to be pretty lax though, since you'd essentially be locking out all users. For example, instead of locking out all users for one hour after 20 failed login attempts, I'd be more inclined to lock out all users for 1 minute after 50 failed login attempts.

Going off what Cartalyst does, there could be three layers to this:

  1. Global
  2. IP
  3. User

It's helpful to actually think of these in the opposite order:

  1. User throttling: Most aggressive, since it only affects one account (email).
  2. IP throttling: A little less aggressive, since it only affects the users on a single IP address.
  3. Global throttling: Pretty lax, since it affects all accounts. This is really a catch-all, just in case someone is spoofing IP addresses or a DDoS attack is occurring.

This is basically the same staggered approach @davedevelopment is suggesting, with the addition of a global throttle.
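A toy sketch of evaluating the three layers in that order, with fixed-window counters and illustrative limits. Window expiry is omitted for brevity, all names are hypothetical, and `$counters` stands in for a shared cache:

```php
<?php
// Three-layer login throttle: user -> IP -> global, each a counter
// with its own ceiling (tightest first, laxest last).
function allowLogin(array &$counters, string $username, string $ip): bool
{
    $layers = [
        ['user|' . strtolower($username), 5], // most aggressive: one account
        ['ip|' . $ip, 25],                    // less aggressive: one address
        ['global', 250],                      // catch-all: very lax
    ];
    // Reject if any layer has hit its ceiling.
    foreach ($layers as [$key, $max]) {
        if (($counters[$key] ?? 0) >= $max) {
            return false;
        }
    }
    // Otherwise record the attempt in every layer.
    foreach ($layers as [$key, $max]) {
        $counters[$key] = ($counters[$key] ?? 0) + 1;
    }
    return true;
}

$counters = [];
// Five failures against one account exhaust only that account's layer...
for ($i = 0; $i < 5; $i++) {
    allowLogin($counters, 'alice@example.com', '203.0.113.7');
}
var_dump(allowLogin($counters, 'alice@example.com', '203.0.113.7')); // bool(false)
// ...while other accounts on the same IP are still served.
var_dump(allowLogin($counters, 'bob@example.com', '203.0.113.7'));   // bool(true)
```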

garygreen commented 4 years ago

Those options sound sensible @reinink. I did send a PR to Laravel some time ago changing the throttler and suggesting ways to make it more customisable in terms of how it throttles. Essentially you just need to control the throttle signature; by default, Laravel restricts by user id OR IP.

You may find some of the ideas in the PR interesting: most relevant comment

lynn-stephenson commented 4 years ago

This issue is still open, so I figured I'd throw in an idea.

Rate Limiting Authentication Attempts

The whole idea behind rate limiting is denying service (Denial of Service) once too many requests have been made within a certain amount of time. Unfortunately, we have conflicting interests: we want to prevent malicious third parties from gaining access to accounts, but we also don't want to deny service to legitimate users.

Source:

It is difficult to act against adversary who tries a few “most common passwords” from many different sources against every account on the system, but we can focus especially on the “multiple source against single account” and “single source” threats.

By IP

Example: 24 invalid authentication attempts within a day are permitted to each IP.

Another example: 4 invalid authentication attempts within 4 hours are permitted to each IP.

  1. There are a lot of proxies available to everyone, meaning distributed attacks are possible.
  2. Because of the previous point, authentic users could also be attempting to authenticate through proxies.
  3. Additionally this isn't sufficient for mitigating targeted attacks.
  4. Because of the first, and previous point: rate limiting by IP alone isn't sufficient.

Once the IP has been locked, the IP can't be used to attempt to authenticate any accounts.

For a Specific Account

Example: Each account is allowed 24 invalid authentication attempts within a day.

Another example: Each account is allowed 4 invalid authentication attempts within 4 hours.

If the account is locked, it is protected against both targeted, and distributed attacks.

Globally

idea 1: round_to_discrete(invalid authentication attempts permitted per account * factor * number of accounts) is the total allowed number of invalid AND valid authentication attempts per day. If the total has been reached, all accounts are locked. The factor can be adjusted; I'd say .125 (an 8th) is good (probably best not to put it above .5). The lower the factor, the sooner all of the accounts are locked.

Example: 24 invalid authentication attempts permitted per account, 256,783 accounts registered, and a factor of .125 = 770,349 total permitted authentication attempts. This works out to 3 attempts per account.

idea 2: We could also declare a stricter rule: about a 4th of the number of accounts is the total number of invalid AND valid authentication attempts per day.

Example: 256,783 accounts, and a factor of .25 = 64,196 total authentication attempts.
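The arithmetic in both examples checks out; a quick sketch (plain PHP):

```php
<?php
// Checking the arithmetic of "idea 1" and "idea 2" above.
$perAccount = 24;
$accounts   = 256783;

// Idea 1: factor of .125 (an 8th).
$total1 = (int) round($perAccount * 0.125 * $accounts);
echo $total1, "\n";                    // 770349, matching the example
echo intdiv($total1, $accounts), "\n"; // 3 attempts per account

// Idea 2: total attempts = about a 4th of the number of accounts.
$total2 = (int) round(0.25 * $accounts);
echo $total2, "\n";                    // 64196, matching the example
```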

Preventing Denial of Service

When an IP or account is locked, we (the application) have denied our authentication service. We do not want to deny service to our authentic users. If the account or the IP is locked, we can send a temporary (an hour, or a few, is good) and unpredictable (generated with a CSPRNG) authentication page to the user's email. NOT a link that magically authenticates them.

So when clients try to log in to a locked account, or from a locked IP, don't attempt authentication. Simply ask for confirmation that they'd like a temporary login link sent to their email. These links are specific to an account, so users shouldn't need to provide an identifier (email, username, etc.) or go through multiple page loads if there are multiple factors of authentication enabled on their account. They should only be able to request a new one every 15 or 30 minutes. Previous links should be deleted/invalidated.

If accounts have been locked globally, make sure to tell users all accounts have been locked because too many authentication attempts have been made that day. And that it's in place to mitigate adversaries from trying to guess passwords.

The link MUST be:

The link SHOULD be:

Although the link is confidential, the account is not compromised if adversaries obtain it.

Public link:

https://site.com/login

Ephemeral, unpredictable, and confidential login form that bypasses locked account/IP link:

https://site.com/login/1159e55cb3093925d4716bac2f01b2c4dd2c88a0e56c484ec48c0767693c36c1
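Generating a token of that shape with a CSPRNG is a one-liner in PHP. A hedged sketch; the storage, expiry, and route around it are up to the application and are assumptions here:

```php
<?php
// Sketch: an ephemeral login-link token of the shape shown above --
// 32 bytes from a CSPRNG, hex-encoded to 64 characters. random_bytes()
// throws if no cryptographically secure source is available.
$token = bin2hex(random_bytes(32));

echo strlen($token), "\n";                         // 64
echo preg_match('/^[0-9a-f]{64}$/', $token), "\n"; // 1 (lowercase hex only)

// The application would store a hash of the token with an expiry (an
// hour or a few, as suggested above), build the URL as
// 'https://site.com/login/' . $token, and later compare submitted
// tokens with hash_equals() to avoid timing leaks. These surrounding
// details are assumptions, not part of the proposal above.
```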
DarkGhostHunter commented 4 years ago

@lynn-stephenson So basically, if the IP is throttled, we send a login link with a CSPRNG-generated token that bypasses the throttle and allows the user to log in.


lynn-stephenson commented 4 years ago

@lynn-stephenson So basically, if the IP is throttled, we send a login link with a CSPRNG that bypasses the throttle and allows to log in.


If the IP or account is locked (throttled), then yes. A link is sent to the email. The user then clicks the link and provides their credentials.

Because there is a way to bypass throttling, I'd suggest making it strict.

DarkGhostHunter commented 4 years ago

If the IP or account is locked (throttled), then yes. A link is sent to the email. The user then clicks the link and provides their credentials.

Because there is a way to bypass throttling, I'd suggest making it strict.

Okay, but now the implementation becomes less nice. We would need to catch the User once it has been retrieved (but not validated) and check whether the request is throttled. Then it would need to stop the authentication and redirect/show "Check your email to log in", or continue if the CSPRNG token is present and correct; the stopping can be done easily by throwing an HttpResponseException.

Should the controller be responsible for throttling the request, or the guard itself? If it's the controller, then the Session Guard should also have access to throttle information from the Request itself. How would it know when to send the email?

In any case, for that to work, we would need to add a "Retrieved" event to the Session Guard that would fire when the User is successfully retrieved from the UserProvider. With that information we can check whether the user exists, since we only need to send them an email (if they don't exist, we fake it).

I don't know if @taylorotwell will be too keen to have another event in the Session Guard, but from my perspective, having that event could enable more advanced tinkering with the Guard itself.

lynn-stephenson commented 4 years ago

Should the controller be responsible for throttling the request, or the guard itself? If it's the controller, then the Session Guard should also have access to throttle information from the Request itself. How would it know when to send the email?

I don't feel like this belongs in the Session Guard, as it doesn't have anything to do with sessions and shouldn't be tied to them. This is strictly for authentication rate limiting purposes. The rate limiting is per-IP and per-user, not per-session. So I believe this should be a controller-level approach.

Adversaries could simply delete the session cookie and get a new session. Session-based rate limiting is easy to bypass; we don't want that.


Denial of Service Resistant Authentication Rate Limiting

Analogy: The login page is the front door. When the doorman decides to deny you service because you attempted to enter too many times, or because that list name has already been attempted too many times, he can send mail to the list name's house with a key that grants the doorman's service for that specific list name, regardless of who you are (though you probably are the list name), as long as you have the key. Show the doorman the key and he'll serve you. Once you've gone in through the front door successfully, then you've got a session.

This is essentially how it works, to prevent denial of service for authentic users.

DarkGhostHunter commented 4 years ago

We could tackle this with a throttler Middleware on the POST login route.

This middleware should register a listener at run time for the Failed auth event, to check whether the authentication fails. The data would be used to throttle the account or the IP, depending on the case.

If the IP has only one throttled account, only that account will receive a login email. If the IP has more than one throttled account, then all accounts will receive a login email. The throttler listener will check whether the user exists in the user provider (we would need an auth event called "Retrieved" too), but would interrupt the attempt and throw a response saying an email has been dispatched. If the user doesn't exist, the same response would be sent, but bogus.

dillingham commented 4 years ago

Maybe disable login by default unless a cookie is present containing a global attempt count.

Maybe the attempt count is stored in the database on the users table?

Maybe require email verification after 3 failed attempts?

Not sure, just spitballing

trevorgehman commented 4 years ago

I feel like you could simplify this by setting a cache key for the number of login attempts made globally within X minutes. If more than X login attempts are made (regardless of credentials or IP), display and require a CAPTCHA for every login attempt.

That way:

  • You don't lock out legitimate users
  • You don't require CAPTCHA for every login attempt all the time
  • You don't need to worry if a DDOS attempt uses thousands of distinct IPs

DarkGhostHunter commented 4 years ago

I feel like you could simplify this by setting a cache key for the number of login attempts made globally within X minutes. If more than X login attempts are made (regardless of credentials or IP), display and require a CAPTCHA for every login attempt.

That way:

  • You don't lock out legitimate users
  • You don't require CAPTCHA for every login attempt all the time
  • You don't need to worry if a DDOS attempt uses thousands of distinct IPs

That means tackling this with a package, which you can always do. The idea of this thread is to do something relatively simple and out-of-the-box inside Laravel.

jrking4 commented 4 years ago

I have used this package with success in replacing the native Auth Throttle trait: https://github.com/GrahamCampbell/Laravel-Throttle

taylorotwell commented 4 years ago

Closing as more flexible rate limiting is coming with: https://github.com/laravel/framework/pull/32726

reinink commented 4 years ago

Thanks so much @taylorotwell! Awesome work! 👏