Docker has been around for a few years, but it is now becoming an expected part of the development process. Serverless Platform-as-a-Service offerings, such as Google Cloud Run, are increasingly Docker-centric. Similarly, most CI/CD pipelines, such as GitHub Actions and Google Cloud Build, are increasingly Docker-based. In this article, we take a step toward working with these platforms by creating a Docker image of a Node.js app suitable for deploying and running in a CI/CD pipeline later.
Docker builds images from a Dockerfile. Typically this file is named Dockerfile and lives in the root of your project, but I am naming mine build.dev.Dockerfile and placing it in my ./ci directory.
Before writing the Dockerfile, we want to add a Docker ignore file to prevent Docker from copying the .git directory, node_modules, and so on into the build context. Annoyingly, the file must be named .dockerignore and live in the root of your project for Docker to honor it. In ./.dockerignore, add the following:
########################################################################
# Docker Ignore file                                                   #
#                                                                      #
# Anything not defined here will be copied to build the image          #
# Note: This must be in project root & named .dockerignore exactly     #
########################################################################
# Node Things
node_modules
npm-debug.log
# Git Things
.git
.gitignore
# Node Development things
README.md
.eslintignore
.eslintrc
# Next.js Things
.next
build
# Build Things
ci
Now that we have an ignore file, let's make the Dockerfile. Add the following contents to your newly created ./ci/build.dev.Dockerfile:
# Base the Image off NODE 10
FROM node:10
# Create app directory in container
WORKDIR /app
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# The Default Port for the application
EXPOSE 3000
ENTRYPOINT [ "node", "server.js" ]
Now that we have a Dockerfile with instructions on how to build our Docker image, let's actually build it.
In the project root (the same folder as your .dockerignore), run:
docker build -t gae-node-next-demo -f ./ci/build.dev.Dockerfile .
This will run the Dockerfile and build an image tagged gae-node-next-demo.
Once complete, you can see your newly created Docker image in your local registry by running the following command:
docker images
This should return something like:
REPOSITORY           TAG      IMAGE ID       CREATED         SIZE
gae-node-next-demo   latest   2dd828045dee   5 seconds ago   985MB
Next we will run the built image. The critical part of this step is dealing with Docker's internal networking. Each container gets its own network, and we exposed port 3000 inside it in the Dockerfile. However, that is merely the port on the container's internal network. When we run our application, we need to map a port on the host to the internal Docker port.
We do this with the following command:
docker run -p 3000:3000 gae-node-next-demo
This maps host port 3000 to Docker's internal port 3000 and runs the gae-node-next-demo image we created in the previous step. (You could just as easily map a different host port; for example, -p 8080:3000 would serve the app on localhost:8080.)
If you are using my demo application, you should see something like this when you run the command:
[ wait ] compiling ...
> Ready on http://localhost:3000 NODE_ENV: undefined
[ ready ] compiled successfully
[ event ] build page: /
[ wait ] compiling ...
[ ready ] compiled successfully
If you open your browser to localhost:3000, you should see the application running. Celebrate.
Notes:
You cannot kill the container with the standard CTRL-C signal; the signal is intercepted by the container and does nothing. Additionally, if you ran with the -d (detached) flag, there is no foreground process to interrupt at all.
To kill the container, we need the container id. You can find this out by opening a new console and running:
docker ps
This will generate a list of running containers similar to:
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                    NAMES
fefef3dcee68   gae-node-next-demo   "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes   0.0.0.0:3000->3000/tcp   infallible_lehmann
Finally, to kill the container, simply run the following command with your container id:
docker kill fefef3dcee68
Run the docker ps command again to be sure your container is no longer running.
As stated in "Before We Begin", our app is set up to run on port 3000 if the PORT environment variable is not present. This is common in most Node.js setups. However, Docker aside, a lot of Platform-as-a-Service providers send a PORT environment variable you need to run on. Since our app is already set up to handle this for Google App Engine, we want to leverage it. If your application has port 3000 hard-coded internally, feel free to gloss over this step, but you should consider the overhead of updating your JavaScript code simply to change the port your application runs on.
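As a sketch, the port-selection logic this article assumes lives in the demo's server.js looks something like the following (the real file may differ; `resolvePort` is a hypothetical name used here for illustration):

```javascript
// Honor the PORT environment variable when set; otherwise fall back
// to the app's default of 3000, as described above.
function resolvePort(env) {
  const parsed = parseInt(env.PORT, 10);
  return Number.isNaN(parsed) ? 3000 : parsed;
}

// Inside server.js you would call something like:
//   app.listen(resolvePort(process.env))
console.log(resolvePort({ PORT: '8000' })); // 8000
console.log(resolvePort({}));               // 3000
```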
Let's explicitly tell our application which (internal docker) port to run on via the PORT environment variable and then expose this internal port.
The bracketed, comma-separated syntax we used for ENTRYPOINT in step 1 is called the exec form; it does not spawn a shell and therefore does not perform normal shell processing such as variable expansion. We will switch to the shell form so we can pass variables in the ENTRYPOINT directive.
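If you prefer to keep the exec form, one alternative (not used in this article) is to invoke the shell explicitly, which keeps the JSON-array syntax while still getting shell processing:

```dockerfile
# Exec form with an explicit shell: equivalent to the shell form used
# below, since sh -c provides the variable handling the plain exec
# form lacks.
ENTRYPOINT [ "sh", "-c", "PORT=8000 node server.js" ]
```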
Modify the last couple lines of your Dockerfile. Change:
EXPOSE 3000
ENTRYPOINT [ "node", "server.js" ]
to
EXPOSE 8000
ENTRYPOINT PORT=8000 node server.js
Repeat steps 2 and 3 above to build and run your new Docker image. However, change the internally mapped port to 8000 as such:
docker build -t gae-node-next-demo -f ./ci/build.dev.Dockerfile .
docker run -p 3000:8000 gae-node-next-demo
You should still be able to access your application via localhost:3000, but your application will internally be running on 8000 now.
UPDATE: As I was preparing to deploy to Cloud Run, which wants you to obey the PORT environment variable, I found myself updating my Dockerfile ENTRYPOINT to not use PORT at all and simply passing it in to test locally. This removes the need to bake a fixed internal port into the image.
My `npm start` command is `NODE_ENV=production node server.js`, so I can drop the ENTRYPOINT directive in the Dockerfile and replace it with CMD ["npm", "start"].
Then we can run the container locally via docker run -p 9999:9999 -e PORT=9999 gae-node-next-demo:prod
I'll update this documentation later.
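In the meantime, based on the notes above, the ending of the updated Dockerfile would look roughly like this (a sketch, not the final version):

```dockerfile
# No port baked in: the platform (or `docker run -e PORT=...`)
# supplies PORT, and server.js falls back to its default when unset.
# `npm start` runs `NODE_ENV=production node server.js`.
CMD ["npm", "start"]
```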
At this point we have taken our existing Node.js app, built a Docker image from it, and can run a container locally and access it via the browser. However, we need to do a few things before this is production ready.
Let's create a production Dockerfile at ./ci/build.production.Dockerfile that is optimized specifically for production deployments.
# Base the Image off NODE 10
FROM node:10
# Create app directory
WORKDIR /app
# Install production dependencies first so this layer is cached
# unless the package files change
COPY package*.json ./
RUN npm ci --only=production
# Bundle app source
COPY . .
# Build
RUN npm run build
# The Default Port for the application
EXPOSE 8000
ENTRYPOINT PORT=8000 NODE_ENV=production node server.js
Next let's build our production Docker image. We will also introduce a "prod" tag so we produce a distinct image.
docker build -t gae-node-next-demo:prod -f ./ci/build.production.Dockerfile .
Once complete, run `docker images` to see your new images:
REPOSITORY           TAG      IMAGE ID       CREATED          SIZE
gae-node-next-demo   prod     fab561fe3c44   38 seconds ago   973MB
gae-node-next-demo   latest   2dd828045dee   5 seconds ago    985MB
Notice the filesize difference!
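Most of that size comes from the full node:10 base image rather than the app itself. One common way to shrink it further (not used in this article) is a multi-stage build that compiles in a full image and runs on a slim one, sketched roughly here:

```dockerfile
# Build stage: full image with the toolchain needed by npm
FROM node:10 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build

# Runtime stage: copy only the built app onto a slimmer base
FROM node:10-slim
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT PORT=8000 NODE_ENV=production node server.js
```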
Finally, let's run the image:
docker run -p 8080:8000 gae-node-next-demo:prod
> Ready on http://localhost:8000 NODE_ENV: production
Open your browser to localhost:8080, and see the production build of your app. It should be nice and snappy.
We now have the ability to create a production Docker image. Our next steps are to push it to Google Cloud's Container Registry, deploy it, and run tests against it.
Stay tuned for the next set of articles around these steps. Check out the diff of changes for this post to my node-next-gae-demo example project.