
Use NGINX As A Reverse Proxy To Your Containerized Docker Applications


You might have noticed that I’m doing quite a bit of Docker related articles lately. This is because I’ve been exploring it as an option for the future of my personal web applications. As of right now I’m serving several web applications on Digital Ocean under a single Apache instance. As requests come into my server, Apache routes them to the appropriate application via virtual hosts. Each application is a different directory on the virtual private server (VPS). If I were to containerize each application, things would behave a bit differently. I would need to set up a reverse proxy to route each request to a different container on the host.

While Apache can work as a reverse proxy, other tools are better suited to the job. NGINX in particular is known for being an excellent reverse proxy. We're going to see how to create several web application containers and route between them with an NGINX reverse proxy container.

For simplicity we’re going to use two stock Docker images straight from Docker Hub and one custom image, the custom image being our reverse proxy. To see where we’re heading, create a docker-compose.yml file with the following:

version: '2'

services:
    reverseproxy:
        image: reverseproxy
        ports:
            - 8080:8080
            - 8081:8081
        restart: always

    nginx:
        depends_on:
            - reverseproxy
        image: nginx:alpine
        restart: always

    apache:
        depends_on:
            - reverseproxy
        image: httpd:alpine
        restart: always

The reverseproxy service will use an image that we'll create shortly. The nginx and apache services will use their respective stock images and depend on the reverseproxy service being available.

Only the ports of the reverseproxy service are published to the host machine. This is actually a good thing because it means the other containers cannot be reached from outside the Docker network directly. In a production environment, you'll probably want your reverse proxy to listen on ports 80 and 443, but since I'm doing everything locally without server names, I have to differentiate each of my web services by port. After all, http://localhost:80 alone gives NGINX nothing to route on. Only a server name like example1.com or example2.com can take care of that.
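In a production setup with real server names, the port mappings on the reverseproxy service might look more like the following sketch. It assumes you also update the listen directives in nginx.conf to match:

```yaml
reverseproxy:
    image: reverseproxy
    ports:
        # Publish the standard HTTP and HTTPS ports instead of 8080/8081
        - 80:80
        - 443:443
    restart: always
```

With server names in play, NGINX routes by the Host header rather than by port, so a single published port can front any number of applications.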

So what does our custom image look like?

The custom image representing our reverse proxy will need a Dockerfile file as well as a custom NGINX configuration file. I’ll call it nginx.conf, but it doesn’t really matter what you call it.

Open the Dockerfile and include the following:

FROM nginx:alpine

COPY nginx.conf /etc/nginx/nginx.conf

This custom image will have a base Alpine Linux image running NGINX. During the build process, our configuration file will be copied into the image.

Open the nginx.conf file or whatever you called it, and include the following:

worker_processes 1;

events { worker_connections 1024; }

http {

    sendfile on;

    upstream docker-nginx {
        server nginx:80;
    }

    upstream docker-apache {
        server apache:80;
    }

    server {
        listen 8080;

        location / {
            proxy_pass         http://docker-nginx;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 8081;

        location / {
            proxy_pass         http://docker-apache;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
        }
    }

}

The magic happens in the above file.

First of all, notice the upstream declarations. We have two upstream blocks because we have two web applications. The server directive inside each upstream tells NGINX where to reach that application.

upstream docker-nginx {
    server nginx:80;
}

The hostname must match the service name found in the docker-compose.yml file, since Docker resolves service names to container addresses on the shared network. By default, the NGINX and Apache images listen on port 80, but if you've changed that, make sure to update the upstream server port as well.
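For example, if the nginx container were reconfigured to listen on port 8000 (a hypothetical value, not something this tutorial does), the upstream would become:

```nginx
upstream docker-nginx {
    # Service name from docker-compose.yml, followed by the container's port
    server nginx:8000;
}
```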

After defining the upstream servers we need to tell NGINX how to listen and how to react to requests. Take for example the following:

server {
    listen 8080;

    location / {
        proxy_pass         http://docker-nginx;
        proxy_redirect     off;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
    }
}

If we access the host machine on port 8080, NGINX acts as a reverse proxy and forwards the request to whatever the proxy_pass directive points at. In the above scenario that is docker-nginx, the name of one of our upstream blocks, which means the request ends up at the nginx service.

In production you might have something like this:

listen 80;
server_name example1.com;
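Put together, a name-based production server block might look something like the following sketch, where example1.com is a placeholder for your own domain:

```nginx
server {
    # Route by server name on the standard HTTP port
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass         http://docker-nginx;
        proxy_redirect     off;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
    }
}
```

You'd add one such block per domain, each pointing at a different upstream.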

Before we can launch our containers, we need to build our reverse proxy image. This can easily be accomplished by executing the following command:

docker build -t reverseproxy ./path/to/directory/with/dockerfile/and/nginx.conf

The docker-compose.yml file expects an image by the name of reverseproxy so that is what we’re building. The Dockerfile and nginx.conf file should exist in the same location.
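Alternatively, Docker Compose can build the image for you. If you'd rather not run docker build by hand, you could point the reverseproxy service at the directory containing the Dockerfile, sketched here assuming it sits in a reverseproxy subdirectory next to docker-compose.yml:

```yaml
reverseproxy:
    # Build the image from ./reverseproxy instead of pulling a prebuilt one
    build: ./reverseproxy
    ports:
        - 8080:8080
        - 8081:8081
    restart: always
```

With this in place, docker-compose up builds the image automatically if it doesn't already exist.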

So let’s test out what we have.

We're using the docker-compose.yml file, though we don't strictly have to. It's just convenient for this example. Execute the following command via your shell:

docker-compose up -d

When it completes, we should have three containers deployed, two of which we cannot access directly. From the web browser, navigate to http://localhost:8080. This hits the NGINX reverse proxy, which in turn loads the NGINX web application. Now navigate to http://localhost:8081. Again the reverse proxy is hit, and this time the Apache web application is loaded.

Not bad right?

Conclusion

You just saw how to deploy several web application containers with Docker and route to them with an NGINX reverse proxy. A reverse proxy is useful when you want to containerize your applications and still have access to them all; remember, only one process can bind to port 80 or 443 on the host.

If you're like me and use Digital Ocean, this strategy is perfect for keeping control of your applications. You aren't restricted to Digital Ocean either; the same approach works on Linode or any other provider.

Nic Raboy

Nic Raboy is an advocate of modern web and mobile development technologies. He has experience in C#, JavaScript, Golang and a variety of frameworks such as Angular, NativeScript, and Unity. Nic writes about his development experiences related to making web and mobile development easier to understand.