
Mapping Volumes Or Passing Environment Variables To Containerized Docker Applications


If you’ve ever worked with Docker containers, you’ve probably heard that they are stateless, meaning that when a container is destroyed, all record of it is lost, including any files it might have created. Not great if you’re working with, say, a database, correct? However, let’s look at this from a different angle. Let’s say you are deploying a web application that requires some configuration. Depending on how you’ve developed it, the configuration could be controlled via a file or via environment variables. How do you accommodate this with Docker container deployments when you don’t want these configurations baked into the image?

We’re going to see how to work with volume mapping between container and host machines as well as passing environment variables at container deployment with Docker.

While the focus of this tutorial is going to be Docker and realistically it can be accomplished with just a few commands, I want to put things into context by using a real application example. While not necessary, I believe it will make things easier to understand.

Create a New Node.js Project

Assuming you’ve got Node.js installed, execute the following commands from the Terminal or Command Prompt:

npm init -y
touch app.js
touch config.json

The above commands will create a new package.json file as well as an app.js for our logic and a config.json file for our configuration. If you don’t have the touch command, go ahead and create the files manually.
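
If you’re curious, the package.json generated by npm init -y should look something like the following, although the exact values depend on your npm version and the name of your project directory:

{
    "name": "node-project",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
    },
    "keywords": [],
    "author": "",
    "license": "ISC"
}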

With the project created, open the config.json file and add the following JSON data:

{
    "username": "STOCK-USERNAME",
    "password": "STOCK-PASSWORD"
}

Our goal is to keep this simple, so we’re going to use a username and password. The next step is to print this information from within our logic file. Open the project’s app.js file and include the following:

// Node will resolve "./config" to the config.json file in the same directory
const Config = require("./config");

console.log(Config.username);
console.log(Config.password);

If you run the project by executing node app.js, it should print out the information from our configuration file. Before we containerize this very simple application, let’s further set it up to use environment variables, falling back to the configuration file when environment variables are not present.
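
With the stock configuration in place, the output should look something like this:

STOCK-USERNAME
STOCK-PASSWORD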

Within the app.js file, include the following:

const Config = require("./config");

// Prefer environment variables when they are present, otherwise fall back to the configuration file
console.log(process.env.username || Config.username);
console.log(process.env.password || Config.password);
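
If you’d like to test this behavior before containerizing, you can provide the environment variables inline when running the application. On macOS or Linux, something like the following should work, where LOCAL-USER and LOCAL-PASS are arbitrary values for this example:

username=LOCAL-USER password=LOCAL-PASS node app.js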

Now that we’ve got our very simple application in a good state, we can worry about building a Docker image from it.

Build a Docker Image from the Node.js Project

To build a Docker image we’re going to need a Dockerfile within our project. Create a Dockerfile within the project and include the following:

FROM node:alpine

COPY * /srv/
WORKDIR /srv/

CMD ["node", "app.js"]

When we build our image, we’re saying we want to use the Node.js Alpine Linux image. Then we copy all files from our project into a /srv directory within the container and set it as the working directory. Finally, the CMD instruction defines the command that starts our application when the container runs.

To build the image, execute the following:

docker build -t node-project .

In the above command, the image will be called node-project in our list of Docker images. For more information on building custom Docker images, check out my tutorial titled, Build a Custom Docker Image for Your Containerized Web Application.
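
If the build succeeds, you should be able to see the new image, for example by running the following, which lists any images for the node-project repository:

docker images node-project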

With the application example stuff out of the way, we can focus on the core content of this tutorial.

Deploy a Docker Container with a Mapped Volume for Configuration

As mentioned previously, there are two ways that we can pass data into our application. We’re first going to explore the idea of mapping volumes between the host and container instances.

Before we deploy our container, let’s alter our project. We already have an image which isn’t going to change, but we’re going to need to use a configuration file other than the default that was copied over. Open the project’s config.json file and include the following:

{
    "username": "MAPPED-USERNAME",
    "password": "MAPPED-PASSWORD"
}

Now if we wish to map a volume at deployment we can run the following:

docker run -v $(pwd)/config.json:/srv/config.json node-project

The above command assumes that your command line has the project as its active directory. If you’re not in your project prior to running the above command, just swap out $(pwd) with the absolute path to your project directory. Notice that the host file is being mapped over the config.json that was previously copied into the image.

When the application runs, it should print out the information from the file on the host machine, not the file that was baked into our image.
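
With the altered configuration file mapped in, the output should look something like this:

MAPPED-USERNAME
MAPPED-PASSWORD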

Pass Environment Variables to a Deployed Docker Container

Alright, let’s assume that we don’t want to map a volume or maybe we can’t for a particular application and we’d like to use environment variables instead. Remember, our image is already ready to handle environment variables if they exist.

From the command line, execute the following:

docker run -e username=ENV-USER -e password=ENV-PASS node-project

Notice that we’re passing in two environment variables to our container at deployment rather than using the image configuration file or a mapped volume. When the application runs, it should print out what we’ve passed.
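
In this example, the output should look something like the following:

ENV-USER
ENV-PASS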

Conclusion

Like I said previously, this tutorial was overkill for what we were trying to accomplish. I wanted to demonstrate mapping volumes and setting environment variables, neither of which is very exciting on its own. Building a very basic Node.js application and using it as an example makes things a little more appealing.

Being able to map volumes with the host is great for configurations and data persistence. However, there are often circumstances where environment variables might work better. There isn’t really a wrong way to do things, just whatever makes more sense for your Docker needs.

Nic Raboy

Nic Raboy is an advocate of modern web and mobile development technologies. He has experience in C#, JavaScript, Golang and a variety of frameworks such as Angular, NativeScript, and Unity. Nic writes about his development experiences related to making web and mobile development easier to understand.