In the previous post, I described how to set up a Docker image to host your CherryPy application. In this installment, I will present a complete, if simple, web application made of a database, two web application servers and a load-balancer.
Set up a database service
We are going to create a docker image to host our database instance, but because we are lazy and because it has been done already, we will be using the official PostgreSQL image.
$ docker run --name webdb -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -d postgres
As you can see, we run the latest official PostgreSQL image. By setting the POSTGRES_USER and POSTGRES_PASSWORD variables, we make sure the container creates the corresponding account for us. We also set a name for this container; this will be useful when we link to it from another container, as we will see later on.
A word of warning: this image is not necessarily secure. I would advise you to consider this question before using it in production.
Now that the server is running, let’s create a database for our application. Run a new container which will execute the psql shell:
$ docker run -it --link webdb:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U test'
Password for user test:
psql (9.4.0)
Type "help" for help.

test=# CREATE DATABASE notes;
CREATE DATABASE
test=# \c notes
You are now connected to database "notes" as user "test".
notes=# \dt
        List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | note | table | test
(1 row)

notes=#
We connect to the server, create the “notes” database and then connect to it.
How did this work? Well, the magic happens through the --link webdb:postgres argument we provided to the run command. It tells the new container that we are linking to a container named webdb and that we create an alias for it, named postgres, inside that new container. That alias is used by docker to initialize a few environment variables such as:
POSTGRES_PORT_5432_TCP_ADDR: the IP address of the linked container
POSTGRES_PORT_5432_TCP_PORT: the exposed port 5432 (which is quite obviously the server's port)
Notice the POSTGRES_ prefix? This is exactly the alias we gave in the command’s argument. This is the mechanism by which you will link your containers so that they can talk to each other.
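To make this concrete, here is a minimal Python sketch of how a process running inside the linking container could inspect those variables; as noted above, the prefix is simply the alias upper-cased:

import os

# Print every environment variable Docker injected for the "postgres" alias.
for key, value in sorted(os.environ.items()):
    if key.startswith("POSTGRES_"):
        print("%s=%s" % (key, value))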
Note that there are alternatives, such as weave, which may be a little more complex but probably more powerful. Make sure to check them out at some point.
Set up our web application service
We are going to run a very basic web application: a form to take notes. The application displays them and lets you delete each note. Notes are posted via JavaScript through a simple REST API. Nothing fancy. Here is a screenshot for you:
By the way, the application uses Yahoo's Pure.css framework, for a change from Bootstrap.
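For reference, here is a minimal, hypothetical sketch of what such a CherryPy REST endpoint might look like; the real code lives in the repository's webapp directory and differs in its details:

import cherrypy

class NotesAPI(object):
    exposed = True

    @cherrypy.tools.json_out()
    def GET(self):
        # A real implementation would fetch the notes from PostgreSQL.
        return [{"id": 1, "text": "remember the milk"}]

    @cherrypy.tools.json_in()
    def POST(self):
        # The JavaScript front-end posts the note as a JSON payload.
        note = cherrypy.request.json
        # ... persist the note, e.g. via SQLAlchemy ...

if __name__ == "__main__":
    config = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
    cherrypy.quickstart(NotesAPI(), "/api", config)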
Simply clone the Mercurial repository to fetch the code.
$ hg clone https://Lawouach@bitbucket.org/Lawouach/cherrypy-recipes
$ cd cherrypy-recipes/deployment/container/webapp_with_load_balancing/notesapp
$ ls
Dockerfile webapp
This will download the whole repository but fear not, it's rather lightweight. You can review the Dockerfile, which is rather similar to the one described in my previous post. Notice how we copy the webapp subdirectory into the image.
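If you do not want to open the repository just yet, the Dockerfile follows roughly this shape; the base image, dependencies and entry script shown here are assumptions for illustration, not the repository's exact content:

FROM python:2.7
RUN pip install cherrypy sqlalchemy psycopg2
COPY webapp /app
WORKDIR /app
EXPOSE 8080
CMD ["python", "serve.py"]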
We can now create our image from that directory:
$ docker build -t lawouach/webapp:latest .
As usual, change the tag to whatever suits you.
Let’s now run two containers from that image:
$ docker run --link webdb:postgres --name notes1 --rm -p 8080:8080 -i -t lawouach/webapp:latest
$ docker run --link webdb:postgres --name notes2 --rm -p 8081:8080 -i -t lawouach/webapp:latest
We link those two containers with the container running our database. We can therefore use that knowledge to connect to the database via SQLAlchemy, as sketched below. We also publish the application's port to two distinct ports on the host. Finally, we name our containers so that we can reference them in the next container we will be creating.
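Here is a minimal sketch of how the application could build its SQLAlchemy engine from the variables injected by the link; the user, password and database name match the ones created earlier in this post, while the helper function is hypothetical:

import os
from sqlalchemy import create_engine

def database_url():
    # Hypothetical helper: read the address and port injected by
    # --link webdb:postgres and build a PostgreSQL connection URL.
    host = os.environ["POSTGRES_PORT_5432_TCP_ADDR"]
    port = os.environ["POSTGRES_PORT_5432_TCP_PORT"]
    return "postgresql://test:test@%s:%s/notes" % (host, port)

# Requires the psycopg2 driver to be installed in the image.
engine = create_engine(database_url())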
At this stage, you can check that your application is running by going either to http://localhost:8080/ or http://localhost:8081/.
Set up a load-balancer service
Our last service (a microservice, should I say) is a simple load-balancer between our two web applications. For this task, we will be using haproxy: a well-known, reliable and lean component.
$ cd cherrypy-recipes/deployment/container/webapp_with_load_balancing/load_balancing
$ ls
Dockerfile haproxy.cfg
Take some time to review the Dockerfile. Notice how we copy the local haproxy.cfg file as the configuration for our load-balancer. Build your image like this:
$ docker build -t lawouach/haproxy:latest .
And now run it to start load balancing between your two web application containers:
$ docker run --link notes1:n1 --link notes2:n2 --name haproxy -p 8090:8090 -p 8091:8091 -d -t lawouach/haproxy:latest
In this case, we execute the container in the background: haproxy blocks once started, and it won't log to the console anyway.
Notice how we link to both web application containers. We set short aliases out of pure laziness. We publish two ports to the host: port 8090 gives access to the stats page of the haproxy server itself, while port 8091 is used to access our application.
To understand how we reuse the aliases, please refer to the haproxy.cfg configuration, more precisely to these two lines:
server notes1 ${N1_PORT_8080_TCP_ADDR}:${N1_PORT_8080_TCP_PORT} check inter 4000
server notes2 ${N2_PORT_8080_TCP_ADDR}:${N2_PORT_8080_TCP_PORT} check inter 4000
We load-balance between our two backend servers, and we do not have to know their addresses when we build the image, only when the container is started.
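How do the ${...} placeholders get their values? Depending on the haproxy version, they are either expanded by haproxy itself at start-up (recent versions support environment variables in the configuration) or rendered by an entrypoint script before the server starts. Here is a minimal Python sketch of the rendering approach; the file names are assumptions:

import os
import string

# Render the configuration template using the environment variables
# injected by the --link flags, then write the final haproxy.cfg.
with open("haproxy.cfg.tmpl") as source:
    template = string.Template(source.read())

with open("haproxy.cfg", "w") as target:
    target.write(template.substitute(os.environ))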
That’s about it really. At this stage, you ought to connect to http://localhost:8091/ to use your application. Requests will be sent to each web application instance in turn. You may check the status of your load-balancing by connecting to http://localhost:8090/.
Obviously, this is just a basic example. For instance, you could extend it by setting up another service to manage your syslog and configuring haproxy to send its logs to it.
Next time, we will be exploring the world of CoreOS and clustering, before moving on to service and resource management via Kubernetes and Mesos.