bviviano opened 3 years ago
I've not set up LDAP etc in many years, but shouldn't this second factor be entirely on the LDAP server side?
Yes, what you describe should work, but has some drawbacks. How would it handle replay prevention, brute force prevention, backup codes, the HOTP counter, etc.?
I've not set up LDAP etc in many years, but shouldn't this second factor be entirely on the LDAP server side?
That would be site dependent. In our case, we're only interested in using 2FA for su / sudo from PAM, not SSH login access, web pages, etc. Putting the second factor entirely on the LDAP server side would mean that every login would require 2FA (escalated privileges or not). Some sites might have that level of security mandate; ours does not. Having the ability to insert a PAM module where needed that allows 2FA is what we're looking for.
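As a concrete illustration of that kind of per-service layering (a hypothetical stack; module names, paths, and ordering vary by distribution and site policy), an /etc/pam.d/sudo fragment might look like:

```
# /etc/pam.d/sudo -- sketch: require 2FA only for privilege escalation
auth required pam_unix.so
auth required pam_google_authenticator.so
```

SSH, web, and other services would keep their own PAM stacks without the second line, so only su/sudo pays the 2FA cost.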
replay prevention
I'm not aware of any network-based PAM authentication module that provides replay prevention. If you're using Google Auth with local secret storage just to ensure replay prevention, nothing I am proposing would stop you from using it with local storage, if that is your security mandate.
What I am proposing just increases the flexibility of where the secret data comes from and gives the individual site more control. Maybe a site wants to store all $HOME/.google_authenticator files encrypted and keep the private key owned by root. Having the ability to arbitrarily read the .google_authenticator data from a command pipe gives that ability (for example). There are a number of situations where being able to process the secret file via a piped command could be useful.
brute force prevention, backup codes, HOTP counter, etc
Good point. You'd either have to accept the limitations, as if "allow_readonly" was set on the PAM module, or include a method to update the network database. If the "secretcmd" option either includes a command line argument or sets an environment variable indicating whether this is a "read", "write" or failed event, then the "secretcmd" program could act accordingly.
auth required pam_google_authenticator.so secretcmd=/sbin/authenticator_ldap
execv("/sbin/authenticator_ldap", "read") - reads the secret from the external database and prints it to stdout in the expected .google_authenticator format, as if the data had been read from $HOME/.google_authenticator directly.
execv("/sbin/authenticator_ldap", "write") - reads the new .google_authenticator data from stdin and writes it into the network database.
execv("/sbin/authenticator_ldap", "failed") - means the last 6-digit code failed; update the network database failure count, lock the network account, etc., based on your security protocols.
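A hypothetical secretcmd helper following that read/write/failed convention could be sketched as below. Everything here is an assumption from the proposal above, not existing module behavior: the event names, the helper path, and the backend interface (an object with read()/write()/failed() methods wrapping LDAP, MySQL, etc.) are all illustrative.

```python
#!/usr/bin/env python3
"""Sketch of a hypothetical secretcmd helper (e.g. /sbin/authenticator_ldap).

The real backend (LDAP, MySQL, ...) is site-specific; here it is any
object exposing read()/write()/failed().
"""
import sys


def dispatch(event, backend, stdin=sys.stdin, stdout=sys.stdout):
    if event == "read":
        # Print the stored state in .google_authenticator format,
        # as if it had been read from $HOME/.google_authenticator.
        stdout.write(backend.read())
    elif event == "write":
        # New .google_authenticator data arrives on stdin; persist it.
        backend.write(stdin.read())
    elif event == "failed":
        # Last 6-digit code failed: bump failure counters, maybe
        # lock the account, per site policy.
        backend.failed()
    else:
        raise SystemExit("unknown event: %s" % event)
```

The PAM module would exec the helper with the event as argv[1] and connect the pipe to stdin or stdout as appropriate.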
If your 2FA is in the LDAP/whatever server and doesn't support replay protection, then it's broken.
But hmm… you're saying there's no attribute that can be set by pam_ldap that can be used by the server to only conditionally use 2FA?
I still don't think this sounds like a good idea. If you have 1000 servers with the same credentials, and default OpenSSH settings of allowing 10 logins per second, then without brute force protection someone can get in in less than a minute, on average.
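The arithmetic behind that claim, assuming a 6-digit TOTP code, a single accepted code per time step, and an attacker who already holds the password:

```python
codes = 10 ** 6      # possible 6-digit codes
servers = 1000       # servers sharing the same credentials
rate = 10            # login attempts per second per server
guesses_per_second = servers * rate                 # 10,000 guesses/s network-wide
expected_seconds = codes / guesses_per_second / 2   # expect a hit at half the space
print(expected_seconds)  # → 50.0 seconds: under a minute, on average
```

A larger accepted-code window only shortens this further, since more codes are valid at any instant.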
Replay doesn't sound like a good thing to ignore either.
Anyway, I think the next step should probably be to design exactly what it is that you want, and how it actually defends against various attacks. And then write code. There could be a good implementation here somewhere, but I don't see anyone else writing it for you unless they also have a need.
If your 2FA is in the LDAP/whatever server and doesn't support replay protection, then it's broken.
Perhaps. I haven't investigated the slapd-totp module in much detail because of the issue that it's either 2FA for all access or 2FA for none. That limitation makes it less appealing than being able to layer 2FA using this PAM module where needed.
What the slapd-totp code does (like most 2FA solutions layered onto older protocols) is the "PASSWORDXXXXXX" trick: you type your password and the 6-digit code on the same password line, and on the server side the slapd-totp module intercepts the clear-text password and knows how to split it into the password and the 6-digit code to verify both.
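The splitting step itself is trivial; a sketch (the function name, and the assumption that the code is exactly the trailing six digits, are mine, not slapd-totp's actual implementation):

```python
def split_password_and_code(combined, code_len=6):
    """Split a 'PASSWORDXXXXXX' style value into (password, otp_code).

    Returns (combined, None) when no trailing numeric code is present.
    """
    if len(combined) <= code_len or not combined[-code_len:].isdigit():
        return combined, None
    return combined[:-code_len], combined[-code_len:]
```

Note the inherent ambiguity: a password that itself ends in six digits splits the same way, which is why the server has to verify both halves together rather than trust the split.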
you're saying there no attribute that can be set by pam_ldap that can be used by the server to only conditionally use 2FA?
LDAP (Lightweight Directory Access Protocol) has no concept of 2FA in its protocol. Additionally, LDAP has no idea whether you're authenticating because you're using sudo, because you're hitting a web page, or because you're just doing an "ldapsearch" for a user's email address. While an LDAP server and host can be configured to use SASL, and OTP can be configured with SASL (like the slapd-totp module for OpenLDAP), it's an all-or-nothing setup: every connection uses OTP or none do.
Additionally, a site could be using pam_mysql.so or pam_postgres.so or pam_pickyourpoison.so as their primary auth provider. A given auth solution may or may not offer 2FA built in, but a dedicated 2FA PAM module like this one will work with any of them, as part of a correctly configured PAM stack. That's why it's so useful.
If you have 1000 servers with the same credentials, and default OpenSSH settings of allowing 10 logins per second, then without brute force protection someone can get in in less than a minute, on average.
Sure, we have brute force protection on the password through the central LDAP server. Six incorrect passwords in a row, from any of the servers, will lock the account, requiring human intervention from an admin to unlock it. I thought what you were talking about was brute force on the 2FA code. How do you protect against a setup where the username/password is known but the 2FA isn't, and you can ping 1000 servers in parallel in under 90 seconds (assuming WINDOW_SIZE 3)?
Anyway, I think the next step should probably be to design exactly what it is that you want
Is there something specific in my last comment, in regards to passing "read", "write" or "failed" to the pipe command, that you think wouldn't work?
There could be a good implementation here somewhere, but I don't see anyone else writing it for you unless they also have a need.
I'm not looking for anyone else to write the potential pipe command that gets called to connect to LDAP/MySQL/etc, I'm just looking for an extension to the PAM Google Authenticator module that would allow it to read/write the .google_authenticator data through a pipe, instead of directly to/from a file. Once that's in there and there is a defined API for passing data to/from a pipe command, I can design a schema extension for LDAP and write the code that talks to an LDAP server (for my needs).
It's not even feasible to run the LDAP server on two ports, with PAM configured to talk to one for sudo, and one for logins?
Sure, we have brute force protection on the password through the central LDAP server. Six incorrect passwords in a row, from any of the servers, will lock the account, requiring human intervention from an admin to unlock it.
What I mean is that if you move the actual authentication of that to the PAM side, then from the LDAP server's point of view it all looks like good passwords, good logins. It's only the individual login server that knows that, with the secret downloaded from LDAP, the OTP actually did not match.
I think the OTP read/write has a race condition in it for OTP reuse.
I'm not looking for anyone else to write the potential pipe command that gets called to connect to LDAP/MySQL/etc, I'm just looking for an extension to the PAM Google Authenticator module that would allow it to read/write the .google_authenticator data through a pipe, instead of directly to/from a file.
Sure, but are you saying you'll implement that in PAM GA?
I'm actually wary of accepting such a PR. Maybe I can be convinced. But if it were to introduce yet another foot-gun that opens a subtle attack vector, then I'm not so sure.
Currently GA takes great care to not allow a simultaneous-login attack. The interface it would need is not a mere read/write, but something like "read" and "compare and swap", and it has to return an error if the values are not equal.
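The semantics being asked for here are classic compare-and-swap: the update must fail if another login consumed a code in between. A minimal in-process sketch of those semantics (the class and method names are illustrative, not the module's actual API; a real backend would do this atomically in LDAP/SQL):

```python
import threading


class SecretStore:
    """Illustrative compare-and-swap over stored authenticator state."""

    def __init__(self, state):
        self._state = state
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._state

    def compare_and_swap(self, expected, new):
        # Fail if another login already updated the state (e.g. consumed
        # the same OTP); the caller must then deny the authentication.
        with self._lock:
            if self._state != expected:
                return False
            self._state = new
            return True
```

Two simultaneous logins that read the same state can then both attempt the swap, but only the first succeeds; the second sees False and rejects the replayed code.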
Also, if I understand your proposed solution, it would make it more complex to reason about the integrity of the authentication system. I.e. if secrets leave the LDAP server, that means a single login on a compromised server will compromise the 2FA for the whole network. If I understand you correctly, it would even be possible for a compromised host to dump ALL 2FA secrets from the LDAP server.
Security-wise it sounds like, even with read/compare-and-swap, the security model would be similar to all servers mounting an NFS share with the secrets on it.
You could even write a fuse FS to not drag NFS into it.
How do you protect against a setup where the username/password is known but the 2FA isn't, and you can ping 1000 servers in parallel in under 90 seconds (assuming WINDOW_SIZE 3)?
Centrally in the LDAP server.
I'd like to implement the Google Authenticator PAM module for sudo in my infrastructure, but the lack of a simple method to store the secrets for users on a central location (LDAP, Galera, etc) makes it a non-starter when you have 1000's of systems.
I've been looking over the various issues and see that the ability to read a secret for a given user from LDAP, MySQL, etc. has been requested a few times. The general answer from the developers has been that it's too complicated to implement such a feature and that they don't want to get tied into the specific details of LDAP, MySQL, etc. Which I understand.
Therefore, I am requesting a more generic ability: to read the "secret" from a command pipe. I think that is a better way of doing it. This is exactly what sshd does for reading public keys, using the AuthorizedKeysCommand directive:
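For reference, the sshd_config directives in question look like this (the helper path is illustrative; %u expands to the user being authenticated):

```
# sshd_config: fetch authorized keys from an external command instead of a file
AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys %u
AuthorizedKeysCommandUser nobody
```

sshd runs the command as the dedicated AuthorizedKeysCommandUser and reads the keys from its stdout, exactly the pattern being proposed for the secret here.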
If the Authenticator PAM module supported a similar capability (i.e. secretcmd=) to execute a command to return the secret key (instead of just reading it from a file), then any number of helper programs could be written by third parties to allow it to pull the secret from LDAP, MySQL, Postgres, HTTPS, etc., based entirely on the specific environment it's needed in.
Anyway, I think this would be an excellent enhancement to the PAM authenticator module and solve many of the issues related to deploying at scale.