Docker container setup for running an ad-hoc or local environment

Categories: infrastructure, microservices, testing

In one of the projects I am working on right now, we have a setup where all the applications are split between front-end and back-end. I am not going to go into detail on what those applications look like; it suffices to say that they are all deployed as Docker containers with a reverse proxy (NGINX) in front of them, providing a path mapping to the underlying Docker containers.

When a developer wants to run any of those applications locally, or when an environment needs to be deployed for the QA team to test, we have to create a proxy configuration manually to map either an application instance running in a Docker container or an application running on the corresponding developer machine.

With that in mind, I looked for a way to set up a group of Docker images where the following could be achieved:

  • The whole product can be brought up using a single docker-compose file, which simplifies ad-hoc deployments as well as local deployments for the developer;
  • The developer doesn’t need to bring up all the containers, just the ones they need at a given moment;
  • The developer can choose between a container or their local environment serving content on a port of their choice.
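To make the first goal concrete, a minimal docker-compose file for such a setup might look like the sketch below; the service names, images, and label values here are hypothetical, used only to show the shape of the setup:

```yaml
version: "2"
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
  frontend:
    image: my-org/frontend        # hypothetical image name
    labels:
      SERVICE_NAME: frontend
      SERVICE_ENDPOINT: /
  backend:
    image: my-org/backend         # hypothetical image name
    labels:
      SERVICE_NAME: backend
      SERVICE_ENDPOINT: /api
```

A developer could then bring up only what they need at the moment, for example with `docker-compose up proxy backend`.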

The main component that allows us to make this shift is the reverse proxy, but as you can see, its configuration needs to be set dynamically.

There are many ways to do that, including more powerful tools like Kubernetes or other Docker orchestrators; all of them seem overkill for what I want to achieve.

In this post, I will look at how we can make the reverse proxy (NGINX) configuration dynamic; in later posts we will go over the NGINX setup and, finally, how to give the developer the choice between a local process and a Docker container to run the application.

I started by looking at how NGINX approaches service discovery. Some further investigation drove me to look at a combination of etcd and confd; Consul is also a common alternative, but it seemed like too much configuration to achieve the same result.

Having said all that, what we needed was something to push data from the running Docker containers into an etcd database; afterwards, confd would do the trick. After some searching I found the registrator project, which seemed to be exactly what I needed.

Registrator’s support for Consul is quite extensive, but the same cannot be said about etcd, and I would like the following to be possible:

  • Use Docker labels to define the path to use;
  • Support for different upstreams, as each service will be running in a different Docker container.
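To make that concrete, the key layout I have in mind would look something like this in etcd (the paths and values here are illustrative): one endpoint key per service, and one address key per running container of that service, so NGINX can build an upstream with several servers.

```
containers/endpoint/backend                 -> /api
containers/address/backend/3fa85f64a1b2     -> docker-host-machine:32768
containers/address/backend/7c9e6679b4d3     -> docker-host-machine:32769
```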

The current implementation of their etcd backend supports the first point but falls short on the second one, and although registrator offers the possibility to implement your own backend, that would mean recompiling the code to add your backend there (or digging around with Go). Before you go any further with anything else, I recommend checking the project out; if you don’t need anything more than knowing which container is up and its port, that is probably the Docker image to use.

All this writing is just to justify why I decided to look at a Node.js library to write a small script that could do it. On npm I found the ideal library for this project, node-docker-monitor, which does exactly what I needed: it offers listeners for when a Docker container is started or stopped.

var monitor = require('node-docker-monitor');
var Etcd = require('node-etcd');

var etcd = new Etcd("http://etcd-ip:2379");

// These are going to be the script input values
var hostName = "docker-host-machine";
var keyPrefix = "containers";

monitor({
    onContainerUp: function (container) {
        // Ignore containers without published ports or labels
        if (container.Ports == null || container.Ports.length == 0) return;
        if (container.Labels == null) return;

        var endpoint = container.Labels.SERVICE_ENDPOINT;
        var serviceName = container.Labels.SERVICE_NAME;
        if (endpoint == null || serviceName == null) return;

        // Add the endpoint (one per service name)
        etcd.set(keyPrefix + "/endpoint/" + serviceName, endpoint);

        // Add one address entry per published port of this container
        for (var i = 0; i < container.Ports.length; i++) {
            if (container.Ports[i].PublicPort != null) {
                var address = hostName + ":" + container.Ports[i].PublicPort;
                console.log('=====> Container up: ', container.Image + "," + container.Id);
                etcd.set(keyPrefix + "/address/" + serviceName + "/" + container.Id, address);
            }
        }
    },

    onContainerDown: function (container) {
        // To be implemented
    }
});

The previous code is not ready to run as-is, but it gives you an idea of the main points:

  • First, the labels of the running container are inspected to check whether SERVICE_NAME and SERVICE_ENDPOINT are available. They define the name to use for the service inside NGINX and the path/endpoint to serve it on.
  • The endpoint information is then added to etcd; as you can see, we can have only one endpoint per service name;
  • Finally, the address (Docker host name and public port) is added; here we can have more than one entry (in the eventuality of more than one container being deployed for the same endpoint).

With this setup, any new container that comes up will be added to etcd and, although the example code here doesn’t include it, we would do the opposite when one of the containers goes down.
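For completeness, here is a minimal sketch of what onContainerDown could look like, assuming the key layout above; the etcd handle and key prefix are passed in only to keep the example self-contained. Note that it removes just this container’s address entry, so other containers serving the same endpoint are untouched.

```javascript
// Hedged sketch: remove this container's address key from etcd when it stops.
// `etcd` is any object exposing a node-etcd style del(key) method.
function onContainerDown(container, etcd, keyPrefix) {
    if (container.Labels == null) return;
    var serviceName = container.Labels.SERVICE_NAME;
    if (serviceName == null) return;

    // Only this container's address entry is removed; the shared
    // endpoint key stays as long as the service name is in use.
    etcd.del(keyPrefix + "/address/" + serviceName + "/" + container.Id);
}
```

Deciding when to delete the shared endpoint key is left open here; it would require checking whether any other container for the same service is still running.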

In the next post we will see how all this fits together with confd and the NGINX configuration, replacing it on the fly as new Docker containers are added to the host.


The repository with the implementation of this code can be found here: https://github.com/laerteocj/docker-etcd-confd-nginx.
