Michal Zalecki

Docker Compose for Node.js and PostgreSQL

Docker is a response to the ongoing problem of differences between the environments in which an application runs, whether those differences are across the machines of the development team, the continuous integration server, or the production environment. Since you are reading this, I assume you are already more or less familiar with the benefits of containerizing applications, so let's go straight to the Node.js-specific bits.

There is a set of challenges when it comes to dockerizing Node.js applications, especially if you want to use Docker for development as well. I hope this guide will save you a few headaches.

TL;DR: You can find the code for a running example on GitHub.

I've published another article on using Docker for Node.js development that explains more about optimizations and isolating the host's node_modules from the container.

Dockerfile

As a base image, I am using the node image that runs on Alpine Linux, a lightweight Linux distribution. I want to expose two ports. EXPOSE does not publish any ports; it is just a form of documentation. It is possible to publish ports with Docker Compose later. Port 3000 is the port we use to run our web server, and 9229 is the default port of the Node.js inspector. After copying package.json and package-lock.json, we install dependencies. Copying the rest of the files happens later to take full advantage of Docker's caching of intermediate layers.

FROM node:10.15.0-alpine

# EXPOSE documents the ports; publishing them happens in Docker Compose
EXPOSE 3000 9229

WORKDIR /home/app

# Copy the manifests first so the dependency layer stays cached
# as long as package.json and package-lock.json do not change
COPY package.json /home/app/
COPY package-lock.json /home/app/

RUN npm ci

COPY . /home/app

RUN npm run build

CMD ./scripts/start.sh

The executable for the container could be an npm start script, but I prefer to use a shell script instead. It makes it easier to implement more complex steps that might require executing a different command to start the application in development or production mode. Moreover, it allows for running additional scripts, like migrations for your database.

#!/bin/sh

if [ "$NODE_ENV" == "production" ] ; then
  npm run start
else
  npm run dev
fi
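
For example, a hypothetical database migration step could run before the server starts; npm run migrate is an assumed script name, not one from the example project:

#!/bin/sh

# Assumed migration script; replace with your project's migration command
npm run migrate

if [ "$NODE_ENV" = "production" ]; then
  npm run start
else
  npm run dev
fi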

Docker Compose

I am splitting my Docker Compose configuration into two files. One is the bare minimum needed to run the application in production or on the continuous integration server: no volume mounting and no .env files. The second one is a development-specific configuration.

# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - postgres
    ports:
      - "3000:3000"
      - "9229:9229"

  postgres:
    image: postgres:11.2-alpine
    environment:
      POSTGRES_PASSWORD: postgres
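
Within the network that Docker Compose creates, services reach each other by service name, so the application can use postgres as the database hostname. Here is a minimal sketch using the pg package, assuming the default user and database of the postgres image and the password set above:

import { Pool } from "pg";

// "postgres" is the Compose service name, which doubles as the hostname
// on the network Docker Compose sets up for these services.
const pool = new Pool({
  host: "postgres",
  port: 5432,
  user: "postgres", // default user of the postgres image
  password: "postgres", // POSTGRES_PASSWORD from docker-compose.yml
  database: "postgres", // default database of the postgres image
});

export async function checkConnection(): Promise<void> {
  const { rows } = await pool.query("SELECT 1 AS ok");
  console.log("Database reachable:", rows[0].ok === 1);
}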

During development, I am interested in sharing code between the container and the host file system, but this should not apply to node_modules. Some packages (e.g., argon2) require additional components that need a compilation step. A package compiled on your machine and copied to the container is unlikely to work. That is why you want to mount an extra volume just for node modules.

The other addition to the development configuration of Docker Compose is using the .env file. It is a convenient way to manage environment variables on your local machine. That said, you should not keep it in the repository. In production, use environment variables instead.
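
As an illustration, such a file could look like the following; the variable names are hypothetical, not taken from the example project:

# .env (kept on your machine, not in the repository)
NODE_ENV=development
DATABASE_URL=postgres://postgres:postgres@postgres:5432/postgres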

For more information on how to configure the Postgres container, go to Docker Hub.

# docker-compose.override.yml
version: "3"
services:
  app:
    env_file: .env
    volumes:
      - .:/home/app/
      - /home/app/node_modules

Docker Compose reads the override file by default unless told otherwise. If you are using Docker Compose on CI, explicitly specify all the configuration files that apply.

docker-compose -f docker-compose.yml -f docker-compose.ci.yml up
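
The CI-specific file is not part of this post; as a rough sketch, it could override the environment and the command (everything in it is an assumption, adjust to your pipeline):

# docker-compose.ci.yml (hypothetical sketch)
version: "3"
services:
  app:
    environment:
      NODE_ENV: test
    command: npm test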

Npm Scripts and Node Inspector

Npm scripts are specific to your project, but for reference, these are mine.

{
  ...
  "scripts": {
    "dev": "concurrently -k \"npm run build:watch\" \"npm run start:dev\"",
    "start": "node dist/index.js",
    "start:dev": "nodemon --inspect=0.0.0.0:9229 dist/index.js",
    "build": "tsc",
    "build:watch": "tsc -w"
  }
}

I do not call npm scripts directly from the command line. They are a convenient place to encapsulate complexity and simplify start.sh (and later the other scripts).

The important takeaway is that the inspector should be bound to 0.0.0.0, which makes it listen on all of the container's network interfaces, instead of the default localhost. Otherwise, you are not able to access it from your local machine.
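
With the inspector listening on a published port, any Node.js debugger can attach to it from the host. As an example, a VS Code attach configuration could look like this; the remoteRoot matches the WORKDIR from the Dockerfile, and the configuration name is arbitrary:

{
  "type": "node",
  "request": "attach",
  "name": "Attach to container",
  "address": "localhost",
  "port": 9229,
  "localRoot": "${workspaceFolder}",
  "remoteRoot": "/home/app"
}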

.dockerignore

There is a bunch of stuff you could list here that is not needed to run the application in the container. Instead of trying to list everything, I will distinguish three entries.

.git
node_modules
dist

Ignore node_modules for the reasons I have already explained when covering volumes. dist is just the output directory of our build pipeline; you might not have its counterpart in your project if you write in JavaScript/CommonJS and do not need a build step. These are all simple things, but you had better not miss them.

Conclusion

You may not like this approach, and that is OK (tell me why in the comments). For better or worse, there is no single way to do it. Hopefully, this reference gave you a different perspective and helped you fix that one thing that did not work for you.

I have not touched on deployment and running in production. There are a few ways you can approach it. Some people run only the application container in Docker and install the database directly on the host, which makes it harder to lose your data by accident. You could build and push an image to a registry, or push to Dokku if you do not feel like using an image repository. Deployment on its own is a topic for another article.

Photo by Abigail Lynn on Unsplash.