yakworks / docker-sftp

sftp with fail2ban and kubernetes support
https://hub.docker.com/r/yakworks/sftp/
MIT License

SFTP

Forked from atmoz/sftp to make it easier to set up on Kubernetes and share a volume with a group of people. Adds fail2ban from this PR, and merges in a number of PRs to fix outstanding issues.


<img src="docs/openssh.png" title="OpenSSH" height="80" /> <img src="docs/docker-logo-png-transparent.png" title="Docker" height="80" /> <img src="docs/wordpress-kubernetes.png" title="Kubernetes" height="80" />


Securely share your files

Easy-to-use SFTP (SSH File Transfer Protocol) server with OpenSSH. This is an automated build linked with the Debian repositories.

Quickstart Example

To run the opinionated example in this project, run ./examples/docker/docker-run.sh. See the README there for more info.

Summary

Simplest docker run example

docker run -p 2222:22 -d yakworks/sftp foo:pass

The OpenSSH server runs by default on port 22, and in this example, we are forwarding the container's port 22 to the host's port 2222. To log in with the OpenSSH client, run: sftp -P 2222 foo@127.0.0.1

User "foo" with password "pass" can login with sftp and upload files to the default folder called "data". No mounted directories or custom UID/GID. Later you can inspect the files and use --volumes-from to mount them somewhere else (or see next example).

NOTE: in this example fail2ban will probably fail, as it needs the NET_ADMIN capability.

Volume DATA_MOUNT env

Opinionated defaults. This sets up users and groups in a number of ways that attempt to make sharing files cleaner across multiple users.

Set the DATA_MOUNT environment variable to the dir that was mounted as a volume.

Data Volume Examples

Let's mount a directory and create a staff owner and a regular user, with UIDs as well.

mkdir -p target/sftp-vol/private
mkdir -p target/sftp-vol/public

docker run --cap-add=NET_ADMIN --cap-add=SYS_ADMIN \
  -e DATA_MOUNT=/sftp-data \
  -v $(pwd)/target/sftp-vol:/sftp-data \
  -p 2222:22 -d yakworks/sftp \
  owner1:pass::staff user1:pass::users:/sftp-data/public

This will create a home dir for each user: both users will see a home dir in their root when they log in, pointing to the target/sftp-vol/home/:user created for them. In this example, when the owner1 user sftps in, they will see an sftp-data dir that is essentially mapped to the target/sftp-vol dir via the DATA_MOUNT share, so they can see everything.

user1 will also end up with a home dir, and because of the user config will additionally see sftp-data/public, which is mapped to target/sftp-vol/public.
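
Based on the description above, the host volume should end up looking roughly like this (illustrative sketch):

target/sftp-vol/
  home/
    owner1/      # home dirs created for each user
    user1/
  private/       # visible to owner1 via the DATA_MOUNT share
  public/        # visible to user1 as sftp-data/public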

Also, go ahead and try out fail2ban: enter 5 bad logins and see what happens.
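
You can watch fail2ban work from inside the container; a sketch (the jail name sshd is an assumption, check fail2ban-client status for the actual jail list):

docker exec <container-id> fail2ban-client status                         # list jails
docker exec <container-id> fail2ban-client status sshd                    # show banned IPs
docker exec <container-id> fail2ban-client set sshd unbanip <client-ip>   # lift a ban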

User Credentials

users.conf

echo "
owner:123:1001:staff
bar:abc:1006:100
baz:xyz:1098:users
" >> target/users.conf

docker run --cap-add=NET_ADMIN --cap-add=SYS_ADMIN  \
    -v $(pwd)/target/users.conf:/etc/sftp/users.conf:ro \
    -v $(pwd)/store/onebox-sftp:/data \
    -p 2222:22 -d yakworks/sftp

Note: 100 is the GID of the users group, so either the ID or the name will work.

In this example it will create the users owner, bar, and baz, with the UIDs and groups given in users.conf.
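
Each line uses the same user spec as the command line; a sketch of the fields (per the upstream atmoz/sftp format):

# name:password[:e][:uid[:gid[:dir1[,dir2]...]]]
# e.g. bar:abc:1006:100  ->  user bar, password abc, UID 1006, group 100 (users)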

Encrypted passwords

Add :e behind the password to mark it as encrypted. Use single quotes in the terminal so the shell doesn't expand the $ characters.

docker run --cap-add=NET_ADMIN --cap-add=SYS_ADMIN \
    -v /host/share:/home/foo/share \
    -p 2222:22 -d yakworks/sftp \
    'foo:$1$0G2g0GSt$ewU0t6GXG15.0hWoOX8X9.:e:1001'

Tip: you can use atmoz/makepasswd to generate encrypted passwords:
echo -n "your-password" | docker run -i --rm atmoz/makepasswd --crypt-md5 --clearfrom=-

User SSH pub keys

Option 1 - volume mapping each user key

Mount public keys in the user's .ssh/keys/ directory. All keys are automatically appended to .ssh/authorized_keys (you can't mount this file directly, because OpenSSH requires limited file permissions). In this example, we do not provide any password, so the user foo can only log in with their SSH key.

docker run --cap-add=NET_ADMIN --cap-add=SYS_ADMIN \
    -v /host/id_rsa.pub:/home/foo/.ssh/keys/id_rsa.pub:ro \
    -v /host/id_other.pub:/home/foo/.ssh/keys/id_other.pub:ro \
    -v /host/share:/home/foo/share \
    -p 2222:22 -d yakworks/sftp \
    foo::1001

Option 2 - mapping volume with all keys

This is the best option for Kubernetes, as you can easily create a secret or configmap with each user's public key and map that secret as a volume mounted at /etc/sftp/authorized_keys.d. On start, the container will spin through all pub files in that directory and automatically append them to /home/:user/.ssh/authorized_keys. The file name format should be :user_rsa.pub. For example, if you have a directory called secure/ssh-user-keys with the key files ziggy_rsa.pub and bob_rsa.pub:

docker run --cap-add=NET_ADMIN --cap-add=SYS_ADMIN \
    -v $(pwd)/secure/ssh-user-keys:/etc/sftp/authorized_keys.d \
    -p 2222:22 -d yakworks/sftp \
    ziggy::1001 bob::1002

The examples/docker directory shows this in action.

Providing server SSH host keys (recommended)

For a consistent server fingerprint, mount /etc/sftp/host_keys.d with your ssh_host_ed25519_key and ssh_host_rsa_key host keys. OpenSSH requires limited file permissions on these as well, so on init the container will copy the files from host_keys.d to /etc/ssh.

If you don't, this container will generate new SSH host keys at first run. To avoid your users getting a MITM warning when you recreate the container (and the host keys change), mount your own host keys.
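
A sketch of mounting pre-generated keys (assumes they sit in a local keys/ dir):

docker run --cap-add=NET_ADMIN --cap-add=SYS_ADMIN \
    -v $(pwd)/keys:/etc/sftp/host_keys.d:ro \
    -p 2222:22 -d yakworks/sftp \
    foo:pass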

Tip: you can generate your keys with these commands:

ssh-keygen -t ed25519 -f ssh_host_ed25519_key < /dev/null
ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null

or generate them using this docker image itself:

mkdir keys

# runs the image and copies the keys out to use then exits
docker run -it --rm -v $(pwd):/workdir yakworks/sftp \
cp /etc/ssh/ssh_host_ed25519_key* /etc/ssh/ssh_host_rsa_key* /workdir/keys

Execute custom scripts or applications

Put your programs in /etc/sftp.d/ and they will automatically run when the container starts. See the next section for an example.

Bindmount dirs from another location

If you are using --volumes-from or just want to make a custom directory available in a user's home directory, you can add a script to /etc/sftp.d/ that bind mounts it after the container starts.

#!/bin/bash
# File mounted as: /etc/sftp.d/bindmount.sh
# Just an example (make your own)

# usage: bindmount <source-dir> <target-dir> [extra mount options]
function bindmount() {
    if [ -d "$1" ]; then
        mkdir -p "$2"
    fi
    mount --bind $3 "$1" "$2"
}

# Remember permissions, you may have to fix them:
# chown -R :users /data/common

bindmount /data/admin-tools /home/admin/tools
bindmount /data/common /home/dave/common
bindmount /data/common /home/peter/common
bindmount /data/docs /home/peter/docs --read-only

NOTE: Using mount requires that your container runs with the CAP_SYS_ADMIN capability turned on. See this answer for more information.

Kubernetes

See the example in the kubernetes dir: examples/kubernetes/README.md

One way to keep this straightforward is to map secrets and their keys as volume files. Then users and their SSH keys can be mapped per the examples above.
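
For example, a sketch of creating those secrets from local files with kubectl (the secret names are assumptions; mount them at /etc/sftp/authorized_keys.d and /etc/sftp/users.conf as in the docker examples):

# one secret holding every user's public key
kubectl create secret generic sftp-user-keys \
    --from-file=ziggy_rsa.pub --from-file=bob_rsa.pub

# one secret holding the users.conf
kubectl create secret generic sftp-users --from-file=users.conf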

NFS

We set up a share with NFS and map a volume to it.

DNS and Label Node for nodeSelector

We label a node for the sftp pod so fail2ban works. It doesn't really matter which one, so just pick one and use that node's IP.

  1. kubectl label nodes node123 sftp=someName

  2. Add a DNS entry pointing ninebox.9ci.dev (or whatever name we use) to the node's external IP. We do this so traffic goes straight through and fail2ban can pick up the client IPs and do its thing.

From Scratch

If you haven't set one up yet:

  1. Run ./keygen.sh. This runs the docker image to generate the keys and copies secret-host-keys.yml and secret-user-conf.yml into the keys dir. The keys dir should be in .gitignore and should not be checked in to GitHub.

  2. Edit keys/secret-user-conf.yml to add or update users and set initial secure passwords.

    NOTE: UPDATE THE pAsWoRd to something secure, DO NOT RUN IT AS IS! This is just an example checked in to GitHub.

  3. Edit keys/secret-host-keys.yml to copy the generated key files ssh_host_ed25519_key, ssh_host_ed25519_key.pub, ssh_host_rsa_key, and ssh_host_rsa_key.pub into the stringData section of the yml (or generate the secret with kubectl, as sketched after this list).

  4. Create keys/secret-user-keys.yml with our SSH keys so they are already there and we don't need to log in with a password.

  5. Edit the sftp-deploy so nodeSelector targets a specific node, as we will create a DNS entry for its IP. Again, if we let traffic straight in, we can take advantage of the fail2ban in the sftp image, which prevents the onslaught of login attempts.
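
As an alternative to hand-editing the stringData, a sketch of generating the host-keys secret yml with kubectl (the secret name sftp-host-keys is an assumption):

kubectl create secret generic sftp-host-keys \
    --from-file=ssh_host_ed25519_key --from-file=ssh_host_ed25519_key.pub \
    --from-file=ssh_host_rsa_key --from-file=ssh_host_rsa_key.pub \
    --dry-run=client -o yaml > keys/secret-host-keys.yml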

Existing

  1. Run make secrets.decrypt.sftp to decrypt secret-conf-keys.
  2. Modify sftp-deploy.yml and change nodeName: lke31075-46747-60e7dce17c1b to whichever node in the cluster you attached the label to.
  3. Make sure you assigned the label to the node (see above) and created the DNS entry.

Adding a user to the sftp server

  1. See above for decrypting secret-conf-keys.
  2. Modify secret-conf-keys.yml and change the sftp-user-keys section with the current information.
  3. Test the key before committing (see below).
  4. Run make secrets.encrypt.sftp to encrypt the secret files.
  5. Commit, push and open a pull request.

To test the key before committing:

  1. Locate the pod (dev-barn->storage->resources->secrets->sftp-user-keys)
  2. Edit the sftp-user-keys (3 dots, edit)
  3. Add or remove keys as necessary
  4. Save
  5. Go to resources->workloads
  6. Redeploy the sftp server
  7. When the new pod becomes available, look at its public IP address
  8. sftp -P 9922 foo@122.34.56.78
  9. If you get in without a password, it worked.

Fail2ban

Fail2ban is built into this image. For it to work, it has to know the IP of the user. If we let traffic go through a general load balancer, the balancer proxies the connection and fail2ban can't see the IP to ban. So we assign a specific node with nodeSelector and route the DNS entry straight to it, letting fail2ban do its thing and block the cockroaches. If traffic goes through a router/balancer, the client IP is masked and the purpose of fail2ban is defeated; see the readme in sftp.
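
Once DNS routes clients straight to the labeled node, you can verify bans from inside the pod; a sketch (pod and jail names are assumptions):

kubectl exec -it <sftp-pod> -- fail2ban-client status sshd
kubectl exec -it <sftp-pod> -- fail2ban-client set sshd unbanip <client-ip>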