Nathan13888 opened 1 year ago
Interesting. I know that I would prefer to have the option of running MongoDB externally. I haven't seen much in the way of references to the settings that are available through the properties file, but the ones they are using, mongo.external and eap.mongod.uri, gave me some hints of what to search for. If I support an external MongoDB, I want to really support an external MongoDB with username and password, for those who either can't or don't want to put their controller on an isolated network segment, since the default controller has no authentication. I am one of those people, as I run my controller with macvlan, and while I could add another Docker network that's a private bridge, it just adds complication that isn't really necessary.
I did a little bit of digging this morning into the code and after doing some decompiling, I've found a few properties that may be interesting/related to MongoDB for this use case:
mongo.connections_per_host
mongo.threads_multiplier
mongo.external
mongo.external.host
mongo.external.port
mongo.external.username
mongo.external.password
mongo.external.ssl
mongo.external.ssl.invalid_hostname_allowed
mongo.external.ssl.custom
mongo.external.ssl.key_store
mongo.external.ssl.key_store_pass
eap.mongod.args
eap.mongod.uri
eap.mongod.port
eap.mongod.db
eap.mongod.repair.command
linux.mongod.nojournal
eap.mongod.log.size.limit
eap.mongod.log.rolling.size
eap.mongod.pid.path
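For reference, a properties file using these keys might plausibly look like the following. This is a hypothetical sketch: the property names come from the decompiled classes above, but the values shown and whether the controller actually honors each of them are unverified assumptions.

```properties
# Hypothetical omada.properties fragment -- key names taken from decompiled
# classes; values and actual behavior are unverified assumptions
mongo.external=true
mongo.external.host=192.168.0.150
mongo.external.port=27017
mongo.external.username=omada
mongo.external.password=0m4d4
mongo.external.ssl=false
```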
And for a more official approach, I did submit a request on the TP-Link forums to see if they could provide documentation around the properties to setup an external MongoDB.
I will see if I can get anything working with the mongo.external.* properties.
I have it working without auth right now in a POC-type setup. I need to figure out why it's not working when passing a URI with a username and password in it. Not sure if it is an issue on the MongoDB container side or on the controller side.
Just an update on testing user/password-based auth: it partially works, in that it will seed the collection with documents, but for some reason the controller process dies and restarts itself.
Ah, user error on my part. I didn't create & grant access to the omada_data collection. It does work with a username and password as part of the eap.mongod.uri property.
I was just running a really simple MongoDB container for testing:
docker run -d \
--name mongodb \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME="omada" \
-e MONGO_INITDB_ROOT_PASSWORD="0m4d4" \
-e MONGO_INITDB_DATABASE="omada" \
-v omada-mongo:/data/db \
-v "${HOME}/temp/mongo:/docker-entrypoint-initdb.d" \
mongo:7
I am not sure if there is a way to create a user with the necessary permissions to basically be a full admin over the entire MongoDB instance, so I don't have to worry about creating collections with individual permissions. I am also not really sure if the controller is set up to grant itself permissions, if needed, even if there were a way to do that.
But this is what works for initializing the MongoDB:
$ cat omada.js
db.createUser({
  user: "omada",
  pwd: "0m4d4",
  roles: [
    { role: "readWrite", db: "omada" },
    { role: "readWrite", db: "omada_data" }
  ]
});
So then I could run the controller with the two new env vars set to:
MONGO_EXTERNAL="true"
EAP_MONGOD_URI="mongodb://omada:0m4d4@192.168.0.150:27017/omada"
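One caveat when embedding credentials in the URI: if the password contains URI-reserved characters (@, :, /, %), it must be percent-encoded or the connection string will parse incorrectly. A minimal shell sketch of that, assuming only those four characters need handling (the encode_mongo_pass helper is my own illustration, not part of the image):

```shell
#!/bin/sh
# Percent-encode the handful of characters MongoDB URIs reserve in the
# userinfo section. Minimal sketch: only handles % @ : / -- a full encoder
# would cover all RFC 3986 reserved characters.
encode_mongo_pass() {
  # % must be encoded first so the escapes added below aren't re-encoded
  printf '%s' "$1" | sed -e 's/%/%25/g' -e 's/@/%40/g' -e 's/:/%3A/g' -e 's|/|%2F|g'
}

MONGO_USER="omada"
MONGO_PASS='p@ss:word'   # example password containing reserved characters
ENCODED_PASS=$(encode_mongo_pass "$MONGO_PASS")
EAP_MONGOD_URI="mongodb://${MONGO_USER}:${ENCODED_PASS}@192.168.0.150:27017/omada"
echo "$EAP_MONGOD_URI"
```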
Full command:
docker run -d \
--name omada-controller \
--restart unless-stopped \
--ulimit nofile=4096:8192 \
-p 8088:8088 \
-p 8043:8043 \
-p 8843:8843 \
-p 27001:27001/udp \
-p 29810:29810/udp \
-p 29811-29816:29811-29816 \
-e MANAGE_HTTP_PORT=8088 \
-e MANAGE_HTTPS_PORT=8043 \
-e PGID="508" \
-e PORTAL_HTTP_PORT=8088 \
-e PORTAL_HTTPS_PORT=8843 \
-e PORT_ADOPT_V1=29812 \
-e PORT_APP_DISCOVERY=27001 \
-e PORT_DISCOVERY=29810 \
-e PORT_MANAGER_V1=29811 \
-e PORT_MANAGER_V2=29814 \
-e PORT_TRANSFER_V2=29815 \
-e PORT_RTTY=29816 \
-e PORT_UPGRADE_V1=29813 \
-e PUID="508" \
-e SHOW_SERVER_LOGS=true \
-e SHOW_MONGODB_LOGS=false \
-e SSL_CERT_NAME="tls.crt" \
-e SSL_KEY_NAME="tls.key" \
-e TZ=Etc/UTC \
-v omada-data:/opt/tplink/EAPController/data \
-v omada-logs:/opt/tplink/EAPController/logs \
-e MONGO_EXTERNAL="true" \
-e EAP_MONGOD_URI="mongodb://omada:0m4d4@192.168.0.150:27017/omada" \
mbentley/omada-controller:5.12-test
I am going to push a branch with what I have so far here in a bit - will add a link to that here.
Thanks a lot for sharing all your insights @mbentley! You have a lot of valid points. I suppose one day if such a feature is well and truly supported, users could migrate it in any way they please.
Also, for the record, I've tried this image on Kubernetes previously and haven't had an issue. I don't recall seeing any errors once MongoDB was running (using the MongoDB Community Operator).
For the image I shared, they also have a Helm chart (which is what I used). https://github.com/damoun/helm-charts/tree/main/charts/omada-controller
RE: MongoDB Admin. I've previously configured this with the MongoDB Community Operator (https://github.com/mongodb/helm-charts/tree/main/charts/community-operator):
users:
  - name: cc
    db: admin
    passwordSecretRef: # a reference to the secret that will be used to generate the user's password
      name: cc-secret
    roles:
      - name: clusterAdmin
        db: admin
      - name: userAdminAnyDatabase
        db: admin
    scramCredentialsSecretName: scram-secret
  - name: omada
    db: omada
    passwordSecretRef:
      name: omada-secret
    scramCredentialsSecretName: scram-secret-omada
    roles:
      - name: dbOwner # includes readWrite, dbAdmin, userAdmin
        db: omada
      - name: readWrite
        db: omada
      - name: dbOwner
        db: omada_data
      - name: readWrite
        db: omada_data
Not sure if this is helpful in replicating something similar in Docker, but my config was pretty much entirely defined as part of the operator's replica-set config.
That does help a bit, thanks. Just glancing at it quickly, I should be able to translate that to an init js script.
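As a rough sketch of that translation, reusing the user/database names from earlier in the thread and MongoDB's built-in dbOwner role (which includes readWrite, dbAdmin, and userAdmin), an init script could be generated like this; the file name is arbitrary:

```shell
#!/bin/sh
# Write a MongoDB init script mirroring the operator config above:
# a single "omada" user with dbOwner on both databases.
cat > ./omada-init.js <<'EOF'
db.createUser({
  user: "omada",
  pwd: "0m4d4",
  roles: [
    { role: "dbOwner", db: "omada" },
    { role: "dbOwner", db: "omada_data" }
  ]
});
EOF
```

The resulting file could then be mounted into /docker-entrypoint-initdb.d just like the earlier omada.js example.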
I put my WIP in this branch: external-mongo.
There are some notes about how to try it out.
Some things that need to be done/investigated to make this better:
*Edit: Moved the list of possible to-do items to the first post for visibility
Thanks a lot for all the investigations @mbentley! I'll try helping with some investigation later on. However I'm a bit busy in the coming two weeks.
So if anybody else sees this 👀 , they should definitely consider helping out too >:)
I built amd64 and arm64 images for testing and updated the notes with instructions that should be a bit easier to follow.
I just merged in https://github.com/mbentley/docker-omada-controller/pull/364 as "experimental". I haven't updated the image build pipelines to build the non-MongoDB images, but the ones built by the normal CI will work fine; they'll just still have the MongoDB binaries in them. There is still further work to do, but I didn't want to hold off on merging since the branch was starting to get a bit out of date and the changes didn't impact any functionality in the standard setup.
Seems good! I'll give this "experimental" image a shot later next week.
For building this image, would I just build master (which has the merged changes) as usual using these instructions? https://github.com/mbentley/docker-omada-controller#building-images
Yup, that works. Here are the latest notes I have: https://github.com/mbentley/docker-omada-controller/tree/master/external_mongodb#common-steps
Sorry, the links I had were broken previously since I deleted the old branch.
@mbentley Thank you for your work on this feature and this project in general! I'll be trying out the process this weekend with a view to move to MongoDB 7. Also happy to help with any specific testing or other work to ease your load. I am on kubernetes but can spin up a test host with docker for any docker specific work.
I've spent most of the day setting this up and testing. It works well, thank you! My 'production' instance is now running with your latest stable image and MongoDB 7.0.3.
It was mostly uneventful other than a few hiccups with the controller export/import migration method I used. I was unable to import any history; the import would fail completely if any 'Retained Data Backup' was selected on export. I also found that nothing would adopt if 'Retain User Info' was not selected on export. As far as I can tell, only client names do not survive the migration, but they are easy enough to re-apply over the API once the clients are known to the controller again.
A little security enhancement :)
https://github.com/mbentley/docker-omada-controller/pull/415
Hello again @mbentley :)
I am using this amazing project in my Kubernetes cluster. At the moment I have succeeded in getting it running. As a reference for others, some links:
But I have identified two things/potential improvements:
1. In the internal db dir, I can see a lot of newly created MongoDB files? I have to check this deeper, but I'm posting it here just in case it is a known issue on your side.
2. It could be useful to publish images without MongoDB, tagged latest-nomongo and 5.13-nomongo or similar. WDYT? 😊
I'd have to test an image where the mongod binary is present to see if the behavior you're describing happens. I just built new images for 5.13 without MongoDB, which you can see on Docker Hub, but they're tagged differently. The multi-arch tag is 5.13-external-mongo-test, and there are specific architecture tags: 5.13-external-mongo-test-amd64, 5.13-external-mongo-test-arm64, and 5.13-external-mongo-test-armv7l. I have done no testing on these at this point, so hopefully they work.
I would like to know: if the external Mongo is not reachable (auth failure or whatever), is the local one started? I assume it shouldn't be, but I'm asking because I cannot see the command that launches Mongo in the entrypoint or anywhere else :)
I had a list of things that I wanted to have done before saying the feature is no longer experimental, but it was buried in the comments, so I moved it to the first post. I just don't want to say that the feature is ready to use and then realize that I need some sort of breaking change, or that I already have tech debt to work around. Things like what you mention, where it seems the internal MongoDB may have been started at some point when using the image with MongoDB built in, need to be validated and addressed. If that means preventing the image with MongoDB installed from starting when it's configured to use an external MongoDB, then that's something I might need to do.
It's just tricky because I only have the built application artifacts, so I can only see what options TP-Link exposes, or try to decipher what's going on in the decompiled classes, which is not exactly ideal. And of course there is nothing to say they won't break the ability to use an external MongoDB, since it's not a deployment method they document, so I can't exactly expect that they'll never break it on their end.
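For what it's worth, one shape such a safeguard could take in the entrypoint is sketched below. This is purely illustrative: MONGO_EXTERNAL matches the env var already used in this thread, but the function and the startup call are placeholders, not the actual entrypoint code.

```shell
#!/bin/sh
# Hypothetical entrypoint guard: never start the bundled mongod when
# MONGO_EXTERNAL is "true". start_internal_mongod is a placeholder, not a
# real function from the image's entrypoint.
maybe_start_mongod() {
  if [ "${MONGO_EXTERNAL:-false}" = "true" ]; then
    echo "MONGO_EXTERNAL=true; not starting bundled mongod"
    return 0
  fi
  echo "starting bundled mongod"
  # start_internal_mongod   # placeholder for the real startup logic
}

MONGO_EXTERNAL="true"
maybe_start_mongod
```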
Hey, I can debug this by deploying everything again in another cluster and help dig deeper :)
Regarding potential technical debt, I think it's enough for me to verify that the internal Mongo is not being launched when the external one is used. I will start with this, right? The difference is only disk space, with no extra CPU/memory usage, so I think this can be enough for almost everyone.
WDYT?
Yes, it would be a great help to see if you can figure out if/when it's still starting mongod when using one of the normal images. Long term, it wouldn't be that big of a deal to just have a single image, as it is only about disk space at that point:
I just updated the external MongoDB readme with additional build instructions and also included an armv7l image for the versions without MongoDB, since I realized that would be one way to actually run the controller on an armv7l machine without requiring an ancient Ubuntu version, provided you have another machine that can run MongoDB. Then again, if you can run MongoDB on another machine, I'm not sure why you would want to run the controller on a really poorly performing device like an older Raspberry Pi, but whatever.
So I just started up a controller where I am using the all in one image and I don't see where the MongoDB starts up in the AIO container, at least not after doing a basic initial setup:
# create network (must exist before containers reference it)
docker network create -d bridge omada
# start mongodb
docker run -d \
--name mongodb \
--network omada \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME="admin" \
-e MONGO_INITDB_ROOT_PASSWORD="password" \
-e MONGO_INITDB_DATABASE="omada" \
--mount type=volume,source=omada-mongo-config,destination=/data/configdb \
--mount type=volume,source=omada-mongo-data,destination=/data/db \
--mount type=bind,source="${PWD}/external_mongodb",destination=/docker-entrypoint-initdb.d \
mongo:4
# run omada controller image w/mongod
docker run -d \
--name omada-controller \
--ulimit nofile=4096:8192 \
--network omada \
-p 8088:8088 \
-p 8043:8043 \
-p 8843:8843 \
-p 27001:27001/udp \
-p 29810:29810/udp \
-p 29811-29816:29811-29816 \
--mount type=volume,source=omada-data,destination=/opt/tplink/EAPController/data \
--mount type=volume,source=omada-logs,destination=/opt/tplink/EAPController/logs \
-e MONGO_EXTERNAL="true" \
-e EAP_MONGOD_URI="mongodb://omada:0m4d4@mongodb.omada:27017/omada" \
mbentley/omada-controller:5.13 &&\
docker logs -f omada-controller
# check the db directory
$ docker exec -it omada-controller ls -la /opt/tplink/EAPController/data/db
total 8
drwxr-xr-x 2 omada omada 4096 Mar 19 11:02 .
drwxr-xr-x 6 omada omada 4096 Mar 20 09:56 ..
I should add that I also stopped the controller container and restarted it; still no MongoDB running. I then stopped both the controller and MongoDB, started the controller without MongoDB running, and it still doesn't start MongoDB. So if you can try to reproduce this on your end @achetronic, that would be really helpful. I'm just running it directly in Docker, as you can tell from the docker run commands.
Hey :) I will test this in the coming days to give proper feedback.
Regarding documenting how to build images without Mongo, I also think this can be useful for others, as I am running this with the latest version of Mongo without any issue. Right now I have x64 VMs on my hypervisor, but I had some problems in the past with arm64 machines.
Hey @mbentley,
I have been waiting for some days to be completely sure. I can confirm that all the files inside the internal MongoDB directory are not updated and only those in the external MongoDB are new, which makes a lot of sense, but I needed to check 😊
Moreover, if you launch the Omada Controller with a bad MongoDB connection string, it seems it does not launch the internal Mongo, so I assume everything was there due to my previous tests.
Would you like me to open a PR with some extra docs in the future on how to deploy this successfully on Kubernetes, showing some manifests? Or maybe you prefer to keep it orchestration-agnostic and give examples only for Docker? I can't do it just now, but maybe in a few weeks.
If you have the time to do a PR for how you're deploying it on k8s, that would be great. It'd be good to put any artifacts under /external_mongodb, assuming they would just be for the external MongoDB, but if you have anything for the all-in-one deployment, you can add that to the root of the project, similar to the compose file. If there are a lot of manifests, keeping them together in another subdirectory would be ideal, unless you're putting them all in a single file.
I have a few other similar images, like this time machine image, where someone contributed a k3s manifest, so as long as people find it helpful for reference, that's great. The main thing I try to do is have the examples leave everything as default as possible unless something is required specifically for it to function.
What problem are you looking to solve?
An externally-hosted MongoDB instance.
For the sake of convenience, an external MongoDB setup would enable greater deployment options and optimizations.
Describe the solution that you have in mind
This docker image appears to use an external MongoDB instance (https://github.com/damoun/docker-omada-controller) https://github.com/damoun/docker-omada-controller/blob/main/docker-compose.yaml
An environment variable could be used to determine whether a local MongoDB instance should be used, or something external. Perhaps this variable should only affect the first startup.
Additional Context
No response
To Do
(Added by @mbentley)
Some things that need to be done/investigated to make this better:
- Images without MongoDB: remove it from install.sh as done in https://github.com/mbentley/docker-omada-controller/pull/364/commits/bc9caa6afbe40fde7197e506a772ae0fb1942103#diff-043df5bdbf6639d7a77e1d44c5226fd7371e5259a1e4df3a0dd5d64c30dca44f. Images without MongoDB in them are not yet being built automatically.
- When running a normal image that still contains mongod but telling the app to use an external connection, will the controller still start MongoDB? (see https://github.com/mbentley/docker-omada-controller/issues/356#issuecomment-2002386534)
I am sure there are other things that could also be useful that will come up.