Summary
Run the database as a Docker container that can be quickly spun up or destroyed locally, allowing us to develop in isolation.
Problem / Current Functionality
Currently we have a hybrid approach: some pieces are containerized in Docker, while others (like the database) are shared among all developers in the cloud. Everything connects to one centralized instance (currently an MLab URL in our env file). As a result, whatever developer A does in development affects developer B's development environment. That can be counterproductive, and it removes the flexibility to modify anything and everything at will.
For a true Docker experience, every piece of the development stack should live in, and be isolated to, a local container. Currently this is only the case for our frontend and backend.
Solution
LAH Docker Compose file can be a reference point for this.
Let's create a Docker container that holds the Mongo database. This will most likely be defined as a service in the docker-compose.yml file. We'll start from one of the existing MongoDB images. When a container is created from this Mongo image, it automatically starts listening on a port (e.g. 27017). We want to publish that port so the database is reachable from our host machine.
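A minimal sketch of what that service definition might look like, assuming the service is named db and we use the official mongo image (the exact image tag, and whether a top-level version: key is needed, depend on our Compose setup):

    services:
      db:
        image: mongo              # one of the existing official MongoDB images
        ports:
          - "27017:27017"         # publish Mongo's default port so the host can reach it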
When this step is set up correctly, one should be able to run docker-compose up and go into an application like MongoDB Compass and successfully connect to mongodb://localhost:27017/.
Next, we'll need to connect this to the application. To do this, we'll make sure our backend depends_on the database container, so that the database is launched before the backend starts up and tries to connect (otherwise the backend could start before the database and fail to connect). We can reference the database by its service name, e.g. mongodb://db:27017/; since both containers are on the same network, Docker will resolve the name to the container's IP automatically.
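Roughly, the backend's service entry would gain something like the following; the service name backend and the MONGO_URI variable are placeholders for whatever our backend actually reads its connection string from:

    services:
      backend:
        depends_on:
          - db                              # make Compose start the db service first
        environment:
          - MONGO_URI=mongodb://db:27017/   # "db" resolves to the database container on the Compose network

Note that depends_on only controls start order, not readiness, so the backend may still want simple retry logic on its first connection attempt.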
Lastly, we want to mount a volume into the container so that the data is stored on our host and persists across sessions. This way, when you run docker-compose down, the database data stays safely on the host even though the container is gone. The next time docker-compose up is run, that data directory is mounted into the new container, which then reads and writes to the same data.
We will be able to destroy the volume using docker-compose down -v.
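A named volume is one way to wire that up; mongo-data is just an illustrative name, while /data/db is the directory where the official Mongo image stores its data:

    services:
      db:
        image: mongo
        ports:
          - "27017:27017"
        volumes:
          - mongo-data:/data/db   # persist database files outside the container
    volumes:
      mongo-data:                 # survives docker-compose down; removed by docker-compose down -v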
Acceptance Criteria / User Stories
[x] As a platform developer, I should be able to run docker-compose up and have it spin up a database which is used by the platform for all Mongo requests.
[ ] As a platform developer, there's clear documentation on how to spin this database up & down (or an explanation of how it works for someone not familiar with Docker).