I recently inherited an existing project that hadn’t been actively worked on for quite some time. When I looked at the instructions for how to set up the project on my Mac, it struck me how much dependent software I’d have to install in my desktop environment, and how much of that is software I don’t normally use or want. For example, I have no use for Postgres when not working on this project, and some of the code relies on Node.js 4.2, and other code relies on Node.js 6+, while I have my own software I use daily that relies on Node.js 7+.
Docker and docker-compose to the rescue!
What is Docker?
Docker is a clever twist on virtualization that lets you build “containers” of software: neat packages that bundle all the libraries, programs, and other dependencies your application needs. During development on a Mac, the containers run in a single virtual machine (a Linux guest) on your host, and you can run multiple containers concurrently within that virtual machine.
The containers share the virtual machine’s operating system but are isolated from one another, so from inside a container it appears to be a dedicated virtual host environment. Since the containers do not need to include the host operating system (Linux) in their images, they can be quite efficient.
You are not limited to running a single instance of a container. In production, you will probably want to replicate your Node.js application across multiple running containers to take advantage of multiple cores or processors on the host, or simply to scale horizontally and support a large number of simultaneous users.
Networking
Docker provides facilities for simplifying communications between containers without having to set up DNS or manually manage /etc/hosts files for each container. Docker containers in development will use a range of private IPs that allow the host and containers to communicate with one another.
You can tell Docker to expose a port for any or all of the containers so they can be reached from your LAN or even the Internet. This should be used with care, as exposing ports on any host accessible from the Internet requires security considerations. You almost certainly do not want to expose your Postgres server’s port to the Internet, or surely you will be hacked! Docker can automatically manage the /etc/hosts files in your containers so you can reach them all by name from any container.
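As a quick sketch of how this looks with the plain docker command (the network, container, and image names below are illustrative), you can create a user-defined network and attach containers to it; containers on the same network can then reach each other by name:

$ docker network create dev-net
$ docker run -d --name db --network dev-net -e POSTGRES_PASSWORD=example postgres
$ docker run -d --name web --network dev-net -p 3000:3000 my-node-app
# inside the "web" container, the database is now reachable simply as "db"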
Volumes
Docker also has a concept of volumes, which are file system mounts within the container. If you start up a Postgres container and store some data in the database, that data exists in the file system within that container. When you shut down or rebuild the container, the data goes away, which is often not what you want. If you mount a directory on the host within the container where Postgres stores its data, you can reuse that mount/directory each time you launch your container, even after rebuilding it. Volumes can also be named volumes: you simply give the volume a name and refer to it by that name, and Docker will create, persist, and maintain the underlying storage for you.
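Both flavors can be expressed with docker’s -v option; a minimal sketch, assuming the official postgres image, an illustrative host path, and an example password:

# bind-mount a directory from the host into the container's Postgres data directory
$ docker run -d -e POSTGRES_PASSWORD=example -v "$PWD/pgdata":/var/lib/postgresql/data postgres

# or use a named volume called "pgdata" that Docker creates and manages for you
$ docker run -d -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres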
I would map my source code into the container’s app home directory. This way I can edit code on the host and the changes appear immediately within the container. A program like nodemon will see the changes and restart your target server app within its container as you edit.
Ports and Port Mapping
During development of a Node.js application, I would expose port 5858 for the Node.js debugger, and port 3000 (or whatever port you choose) for the ExpressJS server.
Exposing port 5858 allows you to attach a debugger to the app running within the container.
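With the plain docker command this is done with the -p (publish) option; a minimal sketch, with an illustrative image name:

# map the Express port and the debugger port from the container to the host
$ docker run -d -p 3000:3000 -p 5858:5858 my-node-app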
Stock (Prebuilt) Containers
For development of the aforementioned project, I can have a container dedicated to Postgres, another container dedicated to running the Node.js 4.2 code, and another container dedicated to running the Node.js 6+ application. I can wire up the networking in a way that is only visible within these containers and they can communicate with one another. I can persist directories within the containers.
I don’t have to install anything other than Docker on my host/desktop. I can use pre-built containers for Node.js 4.2, Node.js 6.0, and Postgres. I can even switch between projects with radically different dependencies while both projects are running.
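These stock images are pulled from Docker Hub; for example (the exact tags available may have changed since this was written):

$ docker pull node:4.2
$ docker pull node:6
$ docker pull postgres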
What is docker-compose?
While all of the Docker goodness I’ve described so far is powerful and flexible, it is cumbersome to use the docker command line tool to orchestrate a multi-container environment. That’s where docker-compose comes in.
You create a docker-compose.yml file that describes all the containers that make up your application, as well as the ports to expose between containers and on the host, the host names, the network, volumes, and the rest of the environment you need.
With one command you can bring up all the containers in your environment, all wired up and ready to go. With another command you can bring them all down.
When you’re ready, you can deploy to remote servers, AWS, or another Docker-friendly hosting service with docker-compose. Once deployed, you can scale each of the containers by name.
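For example, recent versions of docker-compose can start multiple instances of a service by name with the --scale option (the service name web here is illustrative):

$ docker-compose up -d --scale web=3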
Putting it together
I created a repository with a simple ToDo web application. The application handles interactions by the user and updates the ToDo items within a MongoDB database. The ToDo item list is rendered from the database; all rendering is done server-side using Express/Node.js. There are two containers in this setup: the WWW server and the database.
Dockerfile
The first step was to write the Dockerfile, which contains the steps for building the container for the WWW server. We will be using a prebuilt container for MongoDB, so we don’t need a Dockerfile for that container. The Dockerfile for the WWW server is fairly simple:
FROM node:8.8
RUN useradd --create-home --shell /bin/false app
ADD . /home/app
ENV HOME=/home/app
RUN cd $HOME && chown -R app:app $HOME
USER app
WORKDIR $HOME
RUN npm install && npm cache clear --force
CMD npm start
The gist of what this describes is a container with a /home/app directory with all of the repo’s files in it, and from which the application will be run.
A couple of notes here: the application is run via “npm start” by default, and it will fail if you do not start it with environment variables telling it how to connect to a running MongoDB server on your machine or LAN. The RUN lines are executed as the image is built. The CMD line is the command that will be executed when the container is started.
A container can be built with the docker command for testing purposes, though we won’t be doing this once we have a working docker-compose.yml file.
$ docker build . -t todo
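You can also run that test build directly with docker run, passing the connection settings as environment variables. LISTEN_ADDRESS and LISTEN_PORT appear in the compose file below; the MongoDB variable name here is hypothetical, so check the app’s source for the name it actually expects:

$ docker run --rm -p 8000:8000 \
    -e LISTEN_ADDRESS=0.0.0.0 \
    -e LISTEN_PORT=8000 \
    -e MONGODB_HOST=192.168.1.20 \
    todo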
docker-compose.yml
version: "3.1"

services:
  mongodb:
    image: mongo
    volumes:
      - db:/data/db

  todo:
    build: .
    command: ["npm", "start"]
    volumes:
      - .:/home/app
      - /home/app/node_modules
    ports:
      - 8000:8000
      - 5858:5858
    environment:
      - LISTEN_ADDRESS=0.0.0.0
      - LISTEN_PORT=8000
      - NODE_ENV=development

volumes:
  db:
The gist of what this describes is two containers: mongodb and todo. They share a default internal network.
The mongodb container is started from the public/official mongo image from Docker Hub. To persist the data in the MongoDB database, /data/db within the container is mapped to a Docker volume named “db”. Each time you start the container, the database will contain the same data it had when you last stopped it.
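You can see the named volume with docker volume ls (docker-compose prefixes it with the project name), and it survives a plain docker-compose down; pass -v only if you really want to delete the data as well:

$ docker volume ls
$ docker-compose down -v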
The todo container is built using the Dockerfile; that is what the build: . line does. The root of the cloned repo is mapped into the container as /home/app, which allows us to use the editor or IDE of our choice to edit the source code and have the changes appear within the container as soon as files are saved. Ports 8000 and 5858 are exposed to the host, allowing the WWW server to be accessed and a debugger to be attached to the Node.js process inside the container. A few environment variables are set that will appear in the shell within the container and are passed to the app in process.env.
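As a minimal sketch of how the app might consume these variables (the actual code in the repository may differ):

// read the listen address and port from the environment, with fallbacks
const express = require('express');
const app = express();

const host = process.env.LISTEN_ADDRESS || '0.0.0.0';
const port = parseInt(process.env.LISTEN_PORT, 10) || 8000;

app.get('/', (req, res) => res.send('ToDo'));

app.listen(port, host, () => console.log(`Listening on ${host}:${port}`));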
Running it
A simple command will bring up your application (the -d option daemonizes the application, preventing its output from printing to the console):
$ docker-compose up
or
$ docker-compose up -d
You may now point your browser at http://localhost:8000 to see the app work. The “docker ps” command will show that the two containers are running. If you edit and save a source file on your host system, nodemon running in the container will detect the change and restart the application. You can attach to the running Node.js app in the container using your favorite debugger.
Control-C does not necessarily bring down your running containers. To truly shut down your application:
$ docker-compose down
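A few other docker-compose subcommands are handy while developing (standard subcommands, but double-check them against your installed version):

$ docker-compose ps            # list the project's containers and their state
$ docker-compose logs -f todo  # follow the output of the todo service
$ docker-compose exec todo sh  # open a shell inside the running todo container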
Conclusion
This article should provide you with enough information to dockerize your own applications for development. The example in the repository is a minimal working multi-container development environment.
I recommend reading up on deployment of your app in testing and production environments, microservices, scaling of the individual containers, optimizing containers for size, Docker Hub, and the various command line switches and options to the docker and docker-compose commands.
Full Docker documentation is available at https://docs.docker.com/.
Full docker-compose documentation is available at https://docs.docker.com/compose/.
Mike Schwartz