docker-mailserver / docker-mailserver-admin

A sidecar container for management tasks of docker-mailserver
MIT License

App Skeleton #3

Open NorseGaud opened 3 years ago

NorseGaud commented 3 years ago

Related: https://github.com/docker-mailserver/docker-mailserver-admin/issues/1


Goal: Generate the app skeleton which will eventually run the API

NorseGaud commented 3 years ago

Hey @LukeMarlin , what is the language/tool you're choosing for the app and API?

LukeMarlin commented 3 years ago

Python with FastAPI, as discussed in other issues. Most people leaned toward python so I'm pretty sure it's the best choice (possibly more contributors afterwards!).

When it comes to the framework itself, I'm more experienced with Flask, but FastAPI has a lot of nice things (like type checking, automatic "console" to explore the API) so I wanted to use it. I don't think we're going to have very specific development needs anyway, so I'm pretty sure any (light) framework will do just fine.
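
For illustration, this is roughly what FastAPI gives you almost for free; the route and model below are made-up examples, not the actual project layout:

    # Minimal FastAPI sketch: the request body is validated against the typed
    # model, and an interactive "console" is auto-generated at /docs.
    # Names here are illustrative only, not the real project's routes.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="docker-mailserver-admin")

    class Account(BaseModel):
        email: str
        password: str

    @app.post("/accounts")
    def create_account(account: Account) -> dict:
        # FastAPI has already parsed and type-checked the payload at this point.
        return {"created": account.email}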

When it comes to project/packaging management, I'm going to use Poetry, and for testing management I'd go with Tox. These are tools that I'm using quite a lot at work and they've proved very nice so far.

As said before, if anyone has strong feelings against a tool, let's discuss this in this issue!

NorseGaud commented 3 years ago

Most people leaned toward python so I'm pretty sure it's the best choice (possibly more contributors afterwards!).

Two of the most active developers either don't like Python or don't have much experience with it (myself making it three). I'm curious how active @DerOetzi and @simonwiles will ultimately be to help (please chime in).

What would objections be to something like golang?

Lastly, I will say that FastAPI looks sweet and I'd be willing to learn more Python to get a feel for it. I just worry about maintainability for the folks who are consistently active in the repos.

DerOetzi commented 3 years ago

I have more experience with Flask myself, but I'm trying to switch to FastAPI in another project. For this one I have time to give some discussion input and review PRs, but I don't have time to develop the service myself.

LukeMarlin commented 3 years ago

@NorseGaud Valid points. I was indeed referring to people from the thread; since it is a sidecar, I hope your team won't be bothered too much by it.

I don't think golang is unfit to do this, plenty of libs exist already for APIs, but what I like about python for such simple jobs is how the code will be understandable to anyone, even non-python devs. It also means that when it comes to small bugfixes, almost anyone could do it.

Obviously the choice is up to you guys; however, I don't intend to dive deeper into golang than I already have (which isn't much), as I'm already trying to learn Rust on the side!

If you wish, you can even decide once you've seen the skeleton + first route: if it looks too complex to maintain in case external people bail out, feel free to choose another way, it would be understandable ;)

NorseGaud commented 3 years ago

For sure, please proceed. As long as it works, I'm ok with it :)

thehunt100 commented 3 years ago

I'm very interested in this. Did someone already start building? And is there anyone that is taking the lead in this project? I saw that there was still no decision made between Flask and FastAPI. I would be interested in contributing to a FastAPI application.

LukeMarlin commented 3 years ago

@thehunt100, since there's no clear voice against it, I started with FastAPI. Going to push the draft skeleton soon.

thehunt100 commented 3 years ago

OK, great, is there anything you need help with?

LukeMarlin commented 3 years ago

Opened #4, which shows how I'd organize the app; feel free to comment about Python stuff there! There's still much to do, but the most important part is to first figure out how to call the bash scripts. We can't just use setup.sh, because it is meant to be run on the host machine, not inside a container.

My current idea, which I'll try tomorrow, would be to build this Dockerfile on top of docker-mailserver's so that all scripts are loaded and the necessary packages installed. Then provide a docker-compose that gives access to postfix-accounts.cf, enabling us to call updateuseremail... One issue being that the image will be way bigger than a simple python one... Any other ideas/pointers welcome here :)
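
If that works, the API side could stay very thin and just shell out to the scripts shipped in the image; a rough sketch (the script name and arguments here are assumptions for illustration, whatever the image actually provides would be used instead):

    # Rough sketch: an API helper that shells out to one of the inner scripts
    # provided by the docker-mailserver base image. The script name and
    # arguments below are assumptions, not the confirmed interface.
    import subprocess

    def update_password(email: str, password: str) -> str:
        result = subprocess.run(
            ["updatemailuser", email, password],
            capture_output=True, text=True, check=True,
        )
        return result.stdout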

NorseGaud commented 3 years ago

My current idea, which I'll try tomorrow, would be to build this Dockerfile on top of docker-mailserver's so that all scripts are loaded and the necessary packages installed. Then provide a docker-compose that gives access to postfix-accounts.cf, enabling us to call updateuseremail... One issue being that the image will be way bigger than a simple python one... Any other ideas/pointers welcome here :)

I think this is a great idea. What we can do is PR in docker-mailserver to split the dockerfile into multiple layers so that you only build from what you need to execute the various inner-scripts.

thehunt100 commented 3 years ago

@LukeMarlin I struggled with accessing the config files and scripts in my current setup, and I came up with some unconventional solutions.

In my situation, it was important to separate the admin part from the server part since I don't maintain the docker mailserver project and didn't want update conflicts.

I didn't want to let the admin run with mailserver editing permissions and be directly accessible from the internet in case of a security problem.

So the solution I came up with was to create a /config/run directory inside the docker volume.

The admin now creates the commands to execute on the mailserver and saves them as a file in the /config/run dir so a cronjob or inotifywait powered script with the right permissions can execute them.

This setup has a few added benefits.

It is straightforward to create an audit log to see who executed which commands and when.

The admin can now run in its container with a low permission level, since it only needs access to /config/run, which has security and maintenance benefits.

Also, it is now easy to test the admin code since you only need to check the outputted commands.

It allows for extra validation on the server side, so the script that executes the commands can catch bugs or security problems.

I realize that this strongly decoupled setup also has some downsides. For example, there is no direct feedback to give to the user. I do think this is possible when you run the command executor at a high frequency, or on the command creation event with something like inotifywait. Also, to get the server's current state, the server needs to provide it in the run dir, which is possible by copying the config files after they have changed. For my use case, this was not a big problem.
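
To give an idea, the admin side of my setup boils down to something like this (the path and file naming are just how I happened to do it, not anything official):

    # Sketch of the admin-side half: the admin container never touches the
    # mailserver directly, it only drops a command file into /config/run.
    # The privileged watcher on the mailserver side picks it up and runs it.
    import time
    from pathlib import Path

    RUN_DIR = Path("/config/run")

    def queue_command(command: str, *args: str) -> Path:
        RUN_DIR.mkdir(parents=True, exist_ok=True)
        job = RUN_DIR / f"{int(time.time())}-{command}.cmd"
        job.write_text(" ".join([command, *args]) + "\n")
        return job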

LukeMarlin commented 3 years ago

I think this is a great idea. What we can do is PR in docker-mailserver to split the dockerfile into multiple layers so that you only build from what you need to execute the various inner-scripts.

Still, the script to change passwords relies on doveadm pw, which, as far as I know, needs the complete dovecot package. It would actually be trivial to implement that in python, and the container would just need access to the postfix-accounts.cf file... But that might not be a good long-term solution.
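
For example, something along these lines should work in pure Python, assuming we keep the current {SHA512-CRYPT} scheme and the user|hash line format of postfix-accounts.cf:

    # Sketch: generate a doveadm-compatible SHA512-CRYPT hash without the
    # dovecot package, assuming postfix-accounts.cf keeps its current
    # "user@domain|{SHA512-CRYPT}$6$..." line format.
    import crypt

    def hash_password(password: str) -> str:
        return "{SHA512-CRYPT}" + crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512))

    def account_line(email: str, password: str) -> str:
        return f"{email}|{hash_password(password)}"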

The admin now creates the commands to execute on the mailserver and saves them as a file in the /config/run dir so a cronjob or inotifywait powered script with the right permissions can execute them.

So, you actually created .sh files that contained setup.sh commands, and configured the host machine to run them, correct? While I think this is a clever way of solving the access issue, I wouldn't say it has any impact on security, as a compromised admin could create any script in there anyway. Also, it does require some configuration on the host machine, which might not be what the team had in mind when mentioning a sidecar container.

thehunt100 commented 3 years ago

I save the commands as normal text files. Then I let the executor script parse them and validate the commands. Only predefined (whitelisted) scripts and commands will be allowed. You can run the executor script inside the container; in that case, you could copy the script into the container, but you could also do it on the host system like setup.sh does now. I realize that this is an unconventional setup. If you control the whole project, a tighter coupling might be the preferred way to go.
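
Roughly, the executor side looks like this (the allow-list contents and file handling are illustrative, and the script names are assumptions):

    # Sketch of the executor: parse pending command files and only run the
    # ones whose command is on the allow-list; everything else is dropped.
    # The script names in ALLOWED are assumptions for illustration.
    import shlex
    import subprocess
    from pathlib import Path

    RUN_DIR = Path("/config/run")
    ALLOWED = {"addmailuser", "delmailuser", "updatemailuser"}

    def process_pending() -> None:
        for job in sorted(RUN_DIR.glob("*.cmd")):
            argv = shlex.split(job.read_text())
            if argv and argv[0] in ALLOWED:
                subprocess.run(argv, check=False)  # audit logging could hook in here
            job.unlink()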

NorseGaud commented 3 years ago

@LukeMarlin, I had envisioned this as running alongside a fully configured docker-mailserver container. That's why I suggested splitting the layers up a bit more in the mailserver repo, so the docker tag layers are already on the host and can be re-used for the admin container without a ton of extra disk space.

How about the API runs on the same host as the mailserver and the UI can be placed anywhere the administrator wants?

plittlefield commented 3 years ago

That gets my vote!

LukeMarlin commented 3 years ago

How about the API runs on the same host as the mailserver and the UI can be placed anywhere the administrator wants?

If you mean running in the same container, I think that this might be the most efficient option. That way it's easy to use the scripts, and the API could still be optional (i.e. docker-mailserver:core & docker-mailserver:apified). However, from what I recall of the original thread, this wasn't popular among maintainers, right?

If you mean on the same host in another container, sure, having more layers will help when it comes to disk usage. But the API will still probably require much more RAM than it should. On the other hand, would it be a viable solution for exposing other scripts through the API in the future? For account-related stuff it's just about a file that is easily shared, but I don't know all the possibilities of setup.sh; maybe some functions need to run inside the target container?

NorseGaud commented 3 years ago

Options we have available to us (not exhaustive):

  1. Separate container
    i. Something docker-in-docker? (not likely a real solution, just referencing it here for those that are smarter and more experienced than I am)
    ii. runs on the same host as docker-mailserver (core)
      a. docker-mailserver has a script that will load/start/manage any scripts in a certain folder (mounted). This is actually neat to think about
      b. built off of and uses the same docker tag as core (minimizing disk usage for any deps specific to docker-mailserver-admin)
        • mounts config files so that it can execute ./setup.sh or even the various scripts directly
      c. runs on a light-weight container tag
        • mounts in the postfix-accounts file that, when updated, triggers the docker-mailserver check-for-changes to do its thing.
      d. runs on a light-weight container tag
        • Both the docker-mailserver and docker-mailserver-admin mount a file on the host
        • A script runs inside of the docker-mailserver to watch this file (it can be included inside of docker-mailserver-config-chksum), and if changed, will execute the command given (like executing updatemailuser)
  2. Same Container
    i. Add to docker-mailserver (this was ruled out, just want it here for reference) similar to how ELK was: https://github.com/docker-mailserver/docker-mailserver/pull/1614/files
    ii. update core repo's Dockerfile to run a cron (?) that looks for any mounted addons directories (requires further discussion) and then loads them if found (loading the service)

Am I missing anything?

NorseGaud commented 3 years ago

Hey @LukeMarlin , I just updated the list of options available. Let me know if anything is missing.

LukeMarlin commented 3 years ago

Yes, as hinted above, we could make this image derive from core; not meant as a sidecar, but as a replacement. So basically the project would provide two images: docker-mailserver:core-xxx and docker-mailserver:apified-xxx, the latter including the Python stack and API code. This way there is no overhead on core, nothing needs to be added, and it's very simple for people to swap from one to the other as they see fit. I might be wrong there though; maybe it wouldn't be that practical to do, or to develop.

Now, my opinion on the list:

  • 1.i. yeah, I'm not so sure about this; sounds scary and brittle (might be wrong, not a sysadmin or docker guru)
  • 1.ii.a. while this sounds interesting in general for core, I'm not sure how this applies here. It's not just a bash or perl script; we need an entire language installed, then some libraries, to be able to run. Maybe I'm not getting this right though :)
  • 1.ii.b. yes, that would work, but compared to completely replacing core, we're requiring one more container, so it sounds less ideal
  • 1.ii.c. that covers only the commands related to accounts. While nothing else has been asked for so far, maybe we can keep other doors open?
  • 1.ii.d. this does work, as pointed out by @thehunt100. Regardless of potential security issues, it does change the commands from sync to async, and I'm also not sure it would be possible to get command results/output
  • 2.i. :)
  • 2.ii. Looks like 1.ii.a; maybe I'm really not getting what it would look like, because I don't have thoughts on it!

NorseGaud commented 3 years ago

Roger that, thanks for the reply to the points! I personally much prefer running this as a separate container, but we can always hash that out later once it's functional. I don't have any immediate problems with just creating and using an "apified" tag to run. I worry more about scaling the API/UI for a production setup.

Updated:

  1. Separate container
    i. Something docker-in-docker? (not likely a real solution, just referencing it here for those that are smarter and more experienced than I am)
    ii. runs on the same host as docker-mailserver (core)
      a. docker-mailserver has a script that will load/start/manage any scripts in a certain folder (mounted). This is actually neat to think about
      b. built off of and uses the same docker tag as core (minimizing disk usage for any deps specific to docker-mailserver-admin)
        • mounts config files so that it can execute ./setup.sh or even the various scripts directly
      c. runs on a light-weight container tag
        • mounts in the postfix-accounts file that, when updated, triggers the docker-mailserver check-for-changes to do its thing.
      d. runs on a light-weight container tag
        • Both the docker-mailserver and docker-mailserver-admin mount a file on the host
        • A script runs inside of the docker-mailserver to watch this file (it can be included inside of docker-mailserver-config-chksum), and if changed, will execute the command given (like executing updatemailuser)
  2. Inside docker-mailserver repo and container
    i. Add to docker-mailserver (this was ruled out, just want it here for reference) similar to how ELK was: https://github.com/docker-mailserver/docker-mailserver/pull/1614/files
    ii. Inject program for API into supervisor conf
  3. Use FROM mailserver/docker-mailserver:latest and inject the API into the supervisor conf
    FROM mailserver/docker-mailserver:latest
    COPY ...
    # Append our API's "program entry" to target/supervisor/conf.d/supervisor-app.conf
    RUN ...
    WORKDIR /
    # Expose the usual mail ports plus the API port
    EXPOSE 25 587 143 465 993 110 995 4190
    ENTRYPOINT ["/usr/bin/dumb-init", "--"]
    CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]

Side note -- The python version in the container is:

root@mailserver:/# python --version
Python 2.7.16

Are you planning on adding a requirements.txt for us to pip install with?

LukeMarlin commented 3 years ago

I'm using poetry for requirements. The version of python used in the draft is 3.9, but we should be able to use whatever python3 version is on the docker image (it's probably 3.7+; try python3 --version).

NorseGaud commented 3 years ago

root@mailserver:/# python3 --version
Python 3.7.3

polarathene commented 3 years ago

Just briefly chiming in as I saw this referenced from an issue I was removing a stale tag from today.

I had a response typed out some weeks ago on the original discussion thread that I never got around to finishing. I shared mostly the same opinion as other maintainers that such a feature should be a separate project/container, but I wasn't against the main project's docker builds including a minimal API binary (e.g. rust compiled via CI on a separate repo).

That allows anyone to build a separate container, such as on alpine, and add in their own public API or admin UI where any auth is implemented and TLS out of the server is taken care of. The main docker image would just provide a minimal API offering the same functionality the bash scripts do, but one that is imo more reasonable to interact with.

I suppose layering the API on top of the main image as a base works too. I recall a discussion about manipulating files directly instead of using the bash script functionality, and having a concern about changes such as passwords (which the original discussion was focused on). The API discussion was suggesting pre-hashing passwords, which IIRC would not work well if we changed from SHA512-crypt to a different hash at some point (new releases of distributions are shifting to yescrypt, I think).


It seemed better to have the internal API service handle any state changes, which would stay in sync with the bash scripts if it called those. A separate container proxies the API and can handle any rate limiting, TLS, auth, etc., since preferences may differ: e.g. for auth some may want OAuth while others may be happy with mTLS, and likewise for region locking, how logs/metrics are handled, etc.

All of that is a separate concern from the minimal API required for docker-mailserver to expose such functionality without requiring each consumer service to execute scripts or directly manipulate files (maintenance concern).

LukeMarlin commented 3 years ago

Update: I'm currently playing with a Dockerfile that inherits from docker-mailserver and trying to set up a working supervisord conf to run the program (barebones for now); then I'll add a basic nginx. The goal here would be to provide a drop-in replacement for docker-mailserver to try it out. It will be a testing build, not to be used on a prod server!

Anyway, I'm not often available at the moment; it should be better by mid-August, but I hope to have a build before then :)

polarathene commented 3 years ago

then I'll add a basic nginx.

Might I suggest Caddy?

Although personally I still advocate for a minimal internal API service that we can ship on the official image (if the dependencies and size required are minimal, like Rust would enable), with a separate image for a public API and anything like nginx/caddy.

Caddy has a very simple config and can handle features you may want such as automatic TLS provisioning with LetsEncrypt, one liner reverse proxy of a service, smart file type defaults for gzip, easy mTLS, etc.

NorseGaud commented 3 years ago

then I'll add a basic nginx.

Might I suggest Caddy?

Although personally I still advocate for a minimal internal API service that we can ship on the official image (if the dependencies and size required are minimal, like Rust would enable), with a separate image for a public API and anything like nginx/caddy.

Caddy has a very simple config and can handle features you may want such as automatic TLS provisioning with LetsEncrypt, one liner reverse proxy of a service, smart file type defaults for gzip, easy mTLS, etc.

I get the feeling that we'll have to open a large discussion about that once we prove this works building on top. It seems like a lot of the team is split on this right now (maybe just because they haven't seen it yet). 🤞🏼

LukeMarlin commented 3 years ago

Might I suggest Caddy?

Will check. I said nginx, but in reality I wanted to first check whether Apache was already present in the image, to avoid installing stuff. In any case, swapping the proxy could be done at any time anyway!

polarathene commented 3 years ago

If you'd like to give Caddy a go, let me know if you'd like any help with its config.

When using it within Docker, I believe you'd want to listen on 0.0.0.0 to properly respond to any external requests. And if we're not using an internal API with a separate public API container, then to support LetsEncrypt or similar with a public hostname (e.g. dms-api.example.com), you'd probably want to use an ENV var for that too, unless the user is expected to modify the config.

Since Caddy handles the TLS provisioning, it'd need access to perform an HTTP port 80 challenge or a wildcard DNS challenge (which requires custom Caddy builds with DNS plugins, IIRC). It is possible to use an existing TLS cert too, but it likewise needs to be assigned the SAN that the API responds to. I imagine you might run into similar concerns with nginx or apache as well, but I'm still adamant that Caddy is nicer to work with config-wise.

Again, if we had a separate internal API, most of those concerns can be delegated to a sidecar container, which proxies the internal API and perhaps provides a frontend web client for admin or whatever else you like. I assume some users would prefer to have nginx-proxy or traefik handle the frontend + API domains and TLS certs, which can be another use case to keep in mind.

Or perhaps I've misunderstood the approach being taken?

LukeMarlin commented 3 years ago

I've misunderstood the approach being taken?

So far my intention was to have only one API. It should be a very simple one, since it will mostly call setup.sh. I don't see a need for splitting that into two APIs, especially with a separate docker just for that! I'm experimenting with making a secondary docker image (core vs with-api), and that's it. Now, whether or not it ends up that way, it wouldn't change the code itself, and most of what's done could be reused, so I guess I'm going to continue what I started unless a clear consensus appears.

polarathene commented 3 years ago

So far my intention was to have only one API. It should be a very simple one, since it will mostly call setup.sh. I don't see a need for splitting that into two APIs

I recall a discussion about the API being reachable from a frontend web client, presumably a REST API.

There was talk of using an API key in the request header for auth, rate limiting, and handling HTTPS. None of these things are required for the main API, and imo they're better delegated to a separate service that proxies the API to the public web if desired.

Is that no longer the case? How is the API being exposed or interacted with? Is the frontend web admin separate from the API project?

LukeMarlin commented 3 years ago

Maybe I don't understand your idea for an internal API? No HTTPS, no security, no rate limiting, so I suppose it's exposed only locally? What's its purpose?

Is the frontend web admin separate

As far as I'm concerned, yes. It could still be an option in the forked Dockerfile, but for sure I don't think it should be embedded and mandatory alongside the API.

polarathene commented 3 years ago

What's its purpose?

An API where none of the other features are relevant to its functionality?

It's quite common for services with Docker to only publish port 80 and defer HTTPS to a reverse proxy where a lot of those concerns are handled. Especially since the requirements and stack can vary for an environment.

I don't think it should be embedded and mandatory along the API

Ok great :+1:


I'm happy to just wait and see once it's ready; my only concern was how flexible this would be for different setups.

We seem to be on the same page; I was just considering the API and security as separate boundaries (security is important to have, but ideally it can be delegated to existing infrastructure that focuses on that).

LukeMarlin commented 3 years ago

  • Is the intention to have the API receive requests from the web client directly?

The current draft API is intended to be used directly, regardless of client. It could be a small CLI, curl, a web panel, or, as suggested by someone else, a plugin in some webmail (there's a small client sketch after these answers).

  • Would that be on different subdomain?

Didn't look into that. Ideally this should be configurable; for the PoC it could be a chosen subdomain.

  • How were you envisioning deployment/configuration of that with HTTPS and related features for the API?

Haven't thought much about it yet either; it seems that Caddy can take care of HTTPS, which is nice. Other than that, I'd expect that a couple of env values (token, domain) and a different docker image would suffice. Might be wrong though; I'll know soon enough when I reach that point. Technically I'm somewhat ready to test on my own setup.
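
To make the "regardless of client" point above concrete, a caller could be as plain as this (the hostname, route and auth header are all hypothetical):

    # Hypothetical client call: any HTTP client works; the route, hostname
    # and token header here are made up for illustration only.
    import requests

    resp = requests.post(
        "https://dms-api.example.com/accounts",
        json={"email": "user@example.com", "password": "changeme"},
        headers={"X-API-Token": "..."},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())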

As I said above, I'm busy and will be able to provide more after mid-august hopefully!

andrewlow commented 3 years ago

Could someone summarize - or point at - the decision that's been made so far for building out the container that will run this?

I would like to add to the discussion the idea of setting up the API to run on HTTPS by default. Three modes we should consider:

  1. You already have a web facing webserver with letsencrypt and will proxy to this over HTTP
  2. You need to use whatever webserver the project provides and will integrate with letsencrypt
  3. You will run it with a self signed certificate

We could simplify and ignore 1/2 by just providing instructions on how to pull down and configure the linuxserver.io swag image (https://docs.linuxserver.io/general/swag), which will get you nginx with easy Let's Encrypt integration.

For more details on 3, I suggest you look at the latest OpenWrt, which defaults to HTTPS but uses a self-signed certificate; they even have details on 'trusting' that cert: https://openwrt.org/docs/guide-user/luci/getting_rid_of_luci_https_certificate_warnings

As we are passing tokens / passwords around in the API we should make it secure by default.

LukeMarlin commented 3 years ago

Option 2 is the current one: https://caddyserver.com/. Caddy sets up HTTPS with Let's Encrypt by itself and defaults to it. This is the simplest solution, albeit not the most flexible. There should be no issue enabling users to provide their own frontend and certificates later on; however, it's additional effort :)

andrewlow commented 3 years ago

I personally won't be able to use it without additional effort - but I'm up for that. I guess it's just a matter of waiting for some code to get shared before I can dive in.