bastillion-io / Bastillion

Bastillion is a web-based SSH console that centrally manages administrative access to systems. Web-based administration is combined with management and distribution of users' public SSH keys.
https://www.bastillion.io

Lean 240MB docker image available #103

Closed garywiz closed 8 years ago

garywiz commented 9 years ago

We have a few enterprise clients that want to use KeyBox (thank you!). So, we put together an enterprise-ready Docker container that has quite a few features and advantages, and hopefully does justice to this great project. Here are some of the features:

  1. Very lean build. The image is 240MB, compared to other builds that exceed 700MB.
  2. Robust startup environment. Creates self-signed SSL keys and checks for service availability before reporting the container as started.
  3. Completely configurable using environment variables, including all current KeyBox options.
  4. Automatically maintains and rebuilds the Jetty keystore, so it's easy for people to add their own SSL certs.
  5. Works both in self-contained mode and with Docker --volumes-from or -v, so that persistent data (including the database and keystore) is stored outside the container.

The image and full documentation are here: https://github.com/garywiz/docker-keybox
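As a rough illustration of features 3 and 5, the container might be started like this (a sketch only; the CONFIG_EXT_SSL_HOSTNAME variable name is taken from the compose example later in this thread, and the host path is made up — see the docker-keybox documentation for the authoritative options):

```shell
# Persistent data (H2 database, Jetty keystore) lives under /apps/var,
# mapped to a host directory so it survives container rebuilds.
docker run -d \
  -p 8443:8443 \
  -e CONFIG_EXT_SSL_HOSTNAME=keybox.example.com \
  -v /srv/keybox-storage:/apps/var \
  garywiz/docker-keybox
```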

I confess I'm no Java expert, and one thing that was a bit daunting in building this image was making sure that the entire KeyBox hierarchy stays read-only. Java (or Jetty?) seems to have a propensity to write directly into the source tree, so I had to create some workarounds to ensure that all persistent data is stored separately.

I hope this is a good contribution.

skavanagh commented 9 years ago

This is great! Thanks @garywiz

garywiz commented 9 years ago

I just updated the Docker Hub version of docker-keybox as well as the docker-keybox Git page and documentation to reflect your new release.

I added environment configuration variables for CONFIG_AUDIT_LOG_APPENDER and CONFIG_SERVER_ALIVE_SECS to support the new features, and ensured that the container is running Oracle Java 8u60 (the latest).

Your documentation indicates that the log4j.xml file may need to be tweaked for log configuration. Aside from the "warn" level, are there other common customisations that I should export so the Docker user can modify them without having to dig inside the container?

We use this a lot, and it's been going very well, thank you.

skavanagh commented 9 years ago

Basically you can send the audit logs for the terminal sessions to a logging utility instead of the H2 DB.

https://github.com/skavanagh/KeyBox/blob/master/src/main/java/com/keybox/manage/util/SessionOutputUtil.java#L141-L143

You would set up an appender in the log4j.xml file and then set this property to the logger name:

https://github.com/skavanagh/KeyBox/blob/master/src/main/resources/KeyBoxConfig.properties#L22

There should be an example commented out in the log4j.xml file.
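To give a rough idea of the shape, an appender wired to a named logger might look like the following (a hypothetical sketch in classic log4j 1.x XML syntax; the appender name, logger name, and file path are all illustrative — the commented-out example shipped in KeyBox's own log4j.xml is the authoritative reference):

```xml
<!-- Hypothetical sketch: a rolling-file appender for terminal-session audit logs. -->
<appender name="AuditAppender" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="/var/log/keybox/audit.log"/>
  <param name="MaxFileSize" value="10MB"/>
  <param name="MaxBackupIndex" value="5"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ISO8601} %m%n"/>
  </layout>
</appender>

<!-- The logger whose name would then go in the KeyBoxConfig.properties setting. -->
<logger name="audit-logger" additivity="false">
  <level value="info"/>
  <appender-ref ref="AuditAppender"/>
</logger>
```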

foobarto commented 8 years ago

Hey @garywiz great work, thanks!

One request though - could you make your image an automated build on Docker Hub? Call me paranoid if you wish, but I avoid images that are not automated builds (can't really trust them :) )

garywiz commented 8 years ago

We used to do automated builds and stopped a few months ago for reasons I'll explain. I would be curious what else we can do to make our images "trustworthy", such as uploading using signed certificates, etc.

The problems we had were as follows:

  1. We could not verify that the final build was indeed correct before it would go live. We had several images that failed for various reasons that were not detected by the automated build process. For example, Oracle's site temporarily returned an incorrect download, and we ended up building the image with the wrong version of Java, and it got released to clients. Testing inside the build scripts did not really work either, as it wasn't possible to confirm that the ENTRYPOINT and CMD setup actually functioned properly (for examples of this, google "Docker automated build failures" or take a look at threads like this which recommend doing manual testing, which pretty much invalidates the whole concept).
  2. Once we received notice that a build was incorrect for any reason, it was difficult to roll back the build on Docker Hub without actually applying a revert to the git repo, which completely disrupted our development process.

Our images are quite sophisticated and handle a lot of situations. We test them thoroughly, and we don't like to ship a version unless it is the exact version people will be using, not one that is "nearly" the exact version. So we have a QA step: we use our own servers to build the final image, run it through its paces using various input configurations on CentOS, Ubuntu, and CoreOS hosts, and once the image passes QA, we upload the exact image we ran QA on.

Personally, I don't know how any Enterprise could possibly use a deployment vehicle which publishes a built image without having a testing step, and we don't exactly feel like tailoring our entire development methodology around the limitations of Docker Hub. For example, we could set up Docker Hub triggers which test the image once it has already been published, and automatically roll it back if there are problems. However, it seems like a terrible hack to actually release a malfunctioning image to the public even for the time needed to have Jenkins test it and roll it back.

Very interested in your perspective, however. As our images are probably tested much more thoroughly than most we have used ourselves, I want to do everything possible to assure they are trusted, and if we have to produce automated builds for the public to get that trust, we will, and simply no longer allow our Enterprise users access to the public ones.

Note that Docker themselves use a much more sophisticated build process using bashbrew, and the official images are not auto-built. This is telling.

foobarto commented 8 years ago

I understand and respect what you are doing @garywiz. I see your point and I completely agree with it when it comes to enterprise deployment. In my experience, an enterprise setup would include a private Docker registry used internally for all images; trusting an image in that scenario is much easier than it is on the Internet.

You are using Docker Hub to store images containing open source software, which is a great way of contributing back to the community. We choose to trust the companies, organizations, and individuals that build binary images for us when we download an OS and packages from them, but all we really have is their word that what is shown in the source code is really what's in the binary. It's all about the reputation of whoever builds the image. I choose to trust the build automation of Docker Hub and the official images; when someone uploads only a binary image, they are asking everyone to trust them.

There are ways to use automated builds on Docker Hub without sacrificing the testing of these images. One solution would be to treat the "latest" tag the same way as the "master" branch in git: a work in progress that still needs to be proven stable. I know there are two main schools of thought here - one treats master/trunk as the stable code, the other treats it as live code that needs to be tested and approved. Once the "latest" image is tested, you could tag it as "stable" or "tested", or with a specific version number or date. It's the same image built by Docker Hub automation, but tested and approved by your test workflow.
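The promotion workflow described above might look something like this (a sketch only; the image name comes from this thread, and the QA script is a hypothetical stand-in for whatever test harness is actually used):

```shell
# Pull the auto-built work-in-progress image, run QA against it,
# then promote the *exact same* image by re-tagging and pushing.
docker pull garywiz/docker-keybox:latest
./run-qa-suite.sh garywiz/docker-keybox:latest   # hypothetical test harness
docker tag garywiz/docker-keybox:latest garywiz/docker-keybox:stable
docker push garywiz/docker-keybox:stable
```

The key point is that `docker tag` creates a new name for the same image ID, so the bits that were tested are byte-for-byte the bits that get released.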

garywiz commented 8 years ago

Thanks @foobarto. Good reply. All our enterprise clients do in fact use private repo storage, but until now our build processes have been the same for both private and public builds. Our public builds aren't really used that much, but we'll consider how to make them automatable (right now, the build process doesn't work as an automated build because of the way we dynamically construct the build environment). I'll keep this issue open and post an update within a week or so. I suspect we can at least make sure the KeyBox image can be an auto build.

garywiz commented 8 years ago

@foobarto (and others)... The Docker Hub docker-keybox image is now an automated build based on the 'latest' tag in the docker-keybox GitHub repo. I'm curious to see if this increases overall trust in the image by those who need it.

Note: it was also updated to the most recent release.

--- rant follows, you can ignore ---

This is a bit of an experiment for us. First, I had to contact Docker because of a bug in the way they parse tags: versioned tag builds were not being recognized, and I had to change the tag format from "v2.84.01" to "2.84.01" because the leading 'v' was causing problems. Then, since there is no way to rename hub repositories, only delete them (incredible to me), I had to delete the old repo, meaning everybody who had starred it lost their stars. Then, in triggering builds, the time for a build varied from 5 minutes to almost 45 minutes, with no feedback whatsoever from the interface as to what was going on or why. And during that time, the repo was unavailable for pull!

If this increases trust, great, but we have been doing Enterprise-level development for 20 years, and if somebody asked me "Is Docker ready for prime time", I would yell "No" to the highest hills. :-)

(Sorry, just had to say this. I'm dying to be proven wrong.)

SamMorrowDrums commented 8 years ago

Just for reference, building with docker-compose; replace <user>, <uid>, and <host> with their actual values:

keybox:
  image: garywiz/docker-keybox
  command: --create-user <user>:/apps/var:<uid>
  environment:
    - CONFIG_LOGGING=stdout
    - CONFIG_AUTHKEYS_REFRESH=120
    - CONFIG_ENABLE_KEY_MANAGEMENT=true
    - CONFIG_OTP=optional
    - CONFIG_ENABLE_INTERNAL_AUDIT=false
    - CONFIG_DELETE_AUDIT_AFTER=90
    - CONFIG_AUDIT_LOG_APPENDER=""
    - CONFIG_FORCE_KEY_GENERATION=false
    - CONFIG_SERVER_ALIVE_SECS=60
    - CONFIG_EXT_SSL_HOSTNAME=<host>
  ports:
    - "8443:8443"
  volumes:
    - /home/<user>/docker-keybox-storage:/apps/var