LearningLocker / learninglocker

Learning Locker - The Open Source Learning Record Store. Started in 2014.
https://learningpool.com/solutions/learning-record-store-learning-locker/learning-locker-community-overview/
GNU General Public License v3.0

Add docker file(s) to containerize the other services #1281

Open caperneoignis opened 5 years ago

caperneoignis commented 5 years ago

This is more of a feature request. I see that the xapi-service is containerized, which allows xAPI calls to be handled without the UI, worker, or API also being installed. But containerizing the other three services as well, so they can be scaled up or down individually in a container cluster, would allow for much greater flexibility. We could add a Dockerfile to each service, build them with a build script during CI jobs, and push the resulting images to Docker Hub much the same way the xapi-service does. Individual users could then write their own docker-compose files or Kubernetes Helm charts to deploy the services as they see fit. This would also help users who already have Mongo and Redis servers ready to accept connections and don't want to install the unneeded services that the deploy script sets up.
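For illustration, a minimal sketch of what a per-service Dockerfile could look like, assuming a Node 8 base image and yarn; the build command and the entry point path are assumptions, not the project's actual layout:

# Hypothetical Dockerfile for a single Learning Locker service (the API, say);
# the UI and worker would get their own images built the same way.
FROM node:8
WORKDIR /usr/local/learninglocker
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build                      # build command is an assumption
CMD ["node", "api/dist/server.js"]  # entry point is an assumption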

Just wondering what your thoughts are on this, and whether you'd be willing to accept a pull request that did the build and push to Docker?

caperneoignis commented 5 years ago

The one repo I saw with docker-compose was running the deploy script inside the container, resulting in all the services being deployed in a single container, which is less than ideal. If the three services have to be built and deployed together, then put those three services in one container and let users write a docker-compose file for nginx, xapi-service, and that three-service container. That would still scratch the itch without creating containers full of unneeded services, and it keeps everything container-cluster ready.
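A rough compose sketch of that layout, assuming hypothetical image names and that Mongo and Redis are already running elsewhere:

version: '3'
services:
  app:                               # combined ui/api/worker image
    image: myorg/learninglocker-app  # hypothetical image name
    env_file: .env
  xapi:
    image: myorg/xapi-service        # hypothetical image name
    env_file: .env
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    depends_on:
      - app
      - xapi

Users could then point the nginx config at app and xapi by service name and plug their own Mongo and Redis connection strings in through the .env file.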

ryasmi commented 5 years ago

Hi @caperneoignis, this is something we'd really like to do and would certainly accept a PR for it.

caperneoignis commented 5 years ago

@ryansmith94 I'll try and give it a shot, but it may not be until after the new year.

caperneoignis commented 5 years ago

https://github.com/caperneoignis/learninglocker/tree/docker-file-addition

Working on the containers right now. They seem to be stable; I just need to get past the initial install piece, which I'm trying to wrap my head around. Any plans to add a UI piece for first-time installs when Mongo is missing the collection? Call the variable initial_install or something similar and have the UI redirect to the install page. Since the .env will have the proper connection details on container startup, the initial collections could be installed based on it; once setup is done, the install page would simply redirect anyone who tries to hit it because the collection already exists.

Otherwise, I have to figure out a scripted way of doing this, but only running it if the collection does not already exist, so we don't overwrite credentials every time the containers are reinitialized. We could run a Mongo migration every time the containers start up, unless that would cause irregularities in the DB.
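One possible scripted approach, sketched below: check whether the siteSettings collection already exists and only run the first-time setup when it does not. The environment variable names and the setup command are placeholders rather than the actual Learning Locker CLI:

#!/bin/sh
# Hypothetical entrypoint fragment: skip first-time setup on reinitialized containers.
EXISTS=$(mongo "$MONGODB_PATH" --quiet --eval \
  'db.getCollectionNames().indexOf("siteSettings") >= 0')
if [ "$EXISTS" != "true" ]; then
  node cli/dist/server createSiteAdmin "$ADMIN_EMAIL" "$ADMIN_ORG" "$ADMIN_PASS"  # placeholder setup command
fi
exec "$@"  # hand off to the normal service start command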

ryasmi commented 5 years ago

Thanks @caperneoignis, I've asked @ht2 and @Ian247 for some input on this.

caperneoignis commented 5 years ago

ui_1     | 2018-12-18 21:51:07:300 - error:  
ui_1     | { message: 'E11000 duplicate key error collection: learninglocker_v2.siteSettings index: _id_ dup key: { : ObjectId(\'111111111111111111111111\') }',
ui_1     |   stack: 'MongoError: E11000 duplicate key error collection: learninglocker_v2.siteSettings index: _id_ dup key: { : ObjectId(\'111111111111111111111111\') }\n    at Function.create (/usr/local/learninglocker/current/node_modules/mongodb-core/lib/error.js:43:12)\n    at toError (/usr/local/learninglocker/current/node_modules/mongoose/node_modules/mongodb/lib/utils.js:149:22)\n    at coll.s.topology.insert (/usr/local/learninglocker/current/node_modules/mongoose/node_modules/mongodb/lib/operations/collection_ops.js:827:39)\n    at /usr/local/learninglocker/current/node_modules/mongodb-core/lib/connection/pool.js:532:18\n    at _combinedTickCallback (internal/process/next_tick.js:132:7)\n    at process._tickDomainCallback (internal/process/next_tick.js:219:9)',
ui_1     |   driver: true,
ui_1     |   name: 'MongoError',
ui_1     |   index: 0,
ui_1     |   code: 11000,
ui_1     |   errmsg: 'E11000 duplicate key error collection: learninglocker_v2.siteSettings index: _id_ dup key: { : ObjectId(\'111111111111111111111111\') }' }
ui_1     | 2018-12-18 21:51:07:304 - info: Looking for ht2testadmin@ht2labs.com
db_1     | 2018-12-18T21:51:07.310+0000 I NETWORK  [thread1] connection accepted from 172.23.0.4:43082 #39 (18 connections now open)
db_1     | 2018-12-18T21:51:07.314+0000 I NETWORK  [thread1] connection accepted from 172.23.0.4:43084 #40 (19 connections now open)
ui_1     | 2018-12-18 21:51:07:317 - info: User not found, creating...
db_1     | 2018-12-18T21:51:07.321+0000 I NETWORK  [thread1] connection accepted from 172.23.0.4:43086 #41 (20 connections now open)
ui_1     | (node:34) DeprecationWarning: collection.count is deprecated, and will be removed in a future version. Use collection.countDocuments or collection.estimatedDocumentCount instead
ui_1     | 2018-12-18 21:51:07:366 - error: Error sending email 
ui_1     | { message: 'connect ECONNREFUSED 127.0.0.1:25',
ui_1     |   stack: 'Error: connect ECONNREFUSED 127.0.0.1:25\n    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)',
ui_1     |   errno: 'ECONNREFUSED',
ui_1     |   code: 'ECONNECTION',
ui_1     |   syscall: 'connect',
ui_1     |   address: '127.0.0.1',
ui_1     |   port: 25,
ui_1     |   command: 'CONN' }
ui_1     | 2018-12-18 21:51:07:368 - info: Organisation not found, creating...
ui_1     | (node:34) DeprecationWarning: collection.update is deprecated. Use updateOne, updateMany, or bulkWrite instead.
ui_1     | 2018-12-18 21:51:07:397 - info: Adding user to organisation

I get this when running docker-compose and adding the variable to do the account creation for the first time. Not sure what the duplicate key means; this should be a fresh database.

On that note, the setup is for the most part complete once I figure out the account creation piece. The site is working and I can hit it locally in my browser.

https://github.com/caperneoignis/learninglocker/blob/docker-file-addition/docker-compose.yml

There is the docker-compose file if anyone wants to look over my work. The Dockerfile and the needed configuration files are also in the docker directory, along with a README. I'll update it with cluster items later, once I get the services set up in a more microservice-oriented format.

caperneoignis commented 5 years ago

Correction: I figured it out. The issue was that I was running the migration before the account creation. I flipped the two around and everything works; I can log in using the docker-compose architecture. The issue now is preventing race conditions in a clustered environment like Kubernetes or Docker Swarm, which I will attempt soon. I am still getting the error below during migration.

2018-12-18 22:06:09:857 - error: Error migrating up 20180411160000_site_settings, Reverting 20180411160000_site_settings
ui_1     | 2018-12-18 22:06:09:857 - info: Starting down migration of 20180411160000_site_settings
db_1     | 2018-12-18T22:06:09.858+0000 I COMMAND  [conn28] CMD: drop learninglocker_v2.siteSettings
ui_1     | (node:35) DeprecationWarning: collection.remove is deprecated. Use deleteOne, deleteMany, or bulkWrite instead.
ui_1     | 2018-12-18 22:06:09:861 - info: Finished down migration of 20180411160000_site_settings
ui_1     | { MongoError: E11000 duplicate key error collection: learninglocker_v2.siteSettings index: _id_ dup key: { : ObjectId('111111111111111111111111') }
ui_1     |     at Function.create (/usr/local/learninglocker/current/node_modules/mongodb-core/lib/error.js:43:12)
ui_1     |     at toError (/usr/local/learninglocker/current/node_modules/mongoose/node_modules/mongodb/lib/utils.js:149:22)
ui_1     |     at coll.s.topology.insert (/usr/local/learninglocker/current/node_modules/mongoose/node_modules/mongodb/lib/operations/collection_ops.js:827:39)
ui_1     |     at /usr/local/learninglocker/current/node_modules/mongodb-core/lib/connection/pool.js:532:18
ui_1     |     at _combinedTickCallback (internal/process/next_tick.js:132:7)
ui_1     |     at process._tickDomainCallback (internal/process/next_tick.js:219:9)
ui_1     |   driver: true,
ui_1     |   name: 'MongoError',
ui_1     |   index: 0,
ui_1     |   code: 11000,
ui_1     |   errmsg: 'E11000 duplicate key error collection: learninglocker_v2.siteSettings index: _id_ dup key: { : ObjectId(\'111111111111111111111111\') }',
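
One common way to avoid concurrent migrations in Kubernetes, sketched below, is to run the migration once as a Job (or as an init container on a single replica) rather than in every container's entrypoint; the image name, command, and secret are assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: learninglocker-migrate
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myorg/learninglocker:latest   # hypothetical image
          command: ["yarn", "migrate"]         # placeholder migration command
          envFrom:
            - secretRef:
                name: learninglocker-env       # hypothetical secret holding the .env values

Docker Swarm had no Job equivalent at the time, so there the usual workaround is a one-shot container run before scaling the services up.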
caperneoignis commented 5 years ago

Good news: when using a persistent database, user creation does not throw an error if the user already exists, and the system keeps running even with the create-user command in place. However, I'm still having the issue with the site_settings piece; not sure what's going on there.

lunika commented 5 years ago

Hi,

If you are still interested, we created images dedicated to Learning Locker and xapi-service. The main reason we created a new xapi-service image is to avoid running it as the root user, and we used a multi-stage build so the image contains only the production dependencies.

You can find the GitHub repo containing all the Dockerfiles here: https://github.com/openfun/learninglocker-docker/ This repo also contains the documentation and an example docker-compose project.

The images are in our Docker Hub organization.

We can add more tags if needed; for now we only tag Learning Locker from v2.6.2 and xapi-service from v2.2.15 onwards.
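
As an illustration of the approach described above, a minimal multi-stage, non-root Dockerfile sketch; this is not the actual openfun Dockerfile, and the build command, output path, and entry point are assumptions:

# Build stage: install all dependencies and compile the service
FROM node:8 AS build
WORKDIR /app
COPY . .
RUN yarn install --frozen-lockfile && yarn build   # build command is an assumption

# Runtime stage: production dependencies only, running as the non-root node user
FROM node:8-slim
WORKDIR /app
COPY --from=build /app/package.json /app/yarn.lock ./
RUN yarn install --production --frozen-lockfile
COPY --from=build /app/dist ./dist                 # output path is an assumption
USER node
CMD ["node", "dist/server.js"]                     # placeholder entry point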

caperneoignis commented 5 years ago

@lunika why do you have nginx if you are exposing the ports for the api and ui? You can set up nginx to accept requests on port 80 and then proxy them to the containers in the cluster using the container names. Otherwise, not bad. I'll take a longer look to see if it does what I need, but I'm still adding the Kubernetes and Docker Swarm configuration to mine, and I'm at work right now so I can't check whether it scratches everything I need.

lunika commented 5 years ago

why do you have nginx if you are exposing the ports for api and ui?

expose makes ports available without publishing them to the host machine; they are only accessible to linked services, and only the internal port can be specified. So it is necessary to expose them. Only Nginx receives requests; we never hit the api or ui containers directly.

You can setup nginx to accept request through port 80 and then proxy the request to the containers in the cluster using the name of the container.

This is exactly what we do. If you look carefully at the Nginx configuration, you will see that we proxy each request to the specific container depending on the request URL.
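
For readers following along, the proxy-by-container-name pattern looks roughly like this; it is a sketch rather than the actual openfun configuration, and the upstream names, ports, and paths are assumptions:

# Hypothetical nginx fragment: route by URL prefix to containers by service name
server {
    listen 80;

    location /data/xAPI {
        proxy_pass http://xapi:8081;   # xapi-service container (port is an assumption)
    }
    location /api {
        proxy_pass http://api:8080;    # API container (port is an assumption)
    }
    location / {
        proxy_pass http://ui:3000;     # UI container (port is an assumption)
    }
}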

still am adding the kubernetes configuration and docker swarm to mine, and I'm at work right now so can't see if it scratches everything I need..

These images are used in our OpenShift cluster, and that configuration is public too. You can see it in Arnold, our tool dedicated to publishing complete applications to OKD using Ansible: https://github.com/openfun/arnold/tree/master/apps/learninglocker There is no big difference between OKD and K8s objects: replace Routes with Ingresses and DeploymentConfigs with Deployments and it's almost done.
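
For anyone doing that conversion, an OKD Route maps onto a Kubernetes Ingress along these lines; the host and backend service name are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: learninglocker
spec:
  rules:
    - host: lrs.example.com                    # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: learninglocker-nginx     # placeholder service name
                port:
                  number: 80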

caperneoignis commented 5 years ago

@lunika you are correct, sorry about that: if you use expose rather than mapping port:port under ports, the port is only exposed on the container network and not on the host. I would say adding separate backend and frontend networks to the compose file would be helpful to those who run Docker Swarm and are trying to separate frontend services from backend services.
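
A sketch of that frontend/backend split in compose terms; the service and image names are illustrative:

# Hypothetical docker-compose fragment: nginx bridges both networks,
# while the app containers and databases stay on the backend network only.
networks:
  frontend:
  backend:

services:
  nginx:
    image: nginx:stable
    networks: [frontend, backend]
    ports:
      - "80:80"
  ui:
    image: myorg/learninglocker-ui   # hypothetical image
    networks: [backend]
  mongo:
    image: mongo:3.6
    networks: [backend]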

Any chance you can integrate your work into this repo? I ask because the main reasoning behind this request was for Learning Locker to build containers as part of its pipeline and update them with the newest version on every push. Alternatively, if you could add a step to your CircleCI config that reads their version file and runs a scheduled build to keep track of version changes, that would be most helpful. But since it is your repo, it is your decision. I'm just trying to keep the Docker images as up to date as possible for our own containers and the future cluster we are setting up for our project.
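
The kind of scheduled build step described above could be as simple as the following sketch; the version lookup and image name are assumptions and would need to match the real repository layout and Docker Hub organisation:

# Hypothetical scheduled CI step: rebuild and push when the upstream version changes
VERSION=$(node -p "require('./package.json').version")   # assumes the version lives in package.json
docker build -t myorg/learninglocker:"$VERSION" .
docker push myorg/learninglocker:"$VERSION"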

But nice work on your setup, and thank you for the information; it will help me at a minimum and hopefully many others.