Docker Workflow (3): Orchestration Tools

[Editor's Note] To harness Docker's full power in a production environment, you cannot do without the support of an orchestration tool; otherwise you will sink deep into the mire of container monitoring and management. There are many such tools on the market today, and this article offers some guidance on choosing the most appropriate one.

This is the third part of a series of articles about how we use Docker in the production environment of IIIEPE. If you have not read the first (translation) and second (translation) parts, please read them before continuing. In this article, I will discuss the orchestration tools we tested, our choice and the reasons for it. I will also explain how we use Jenkins to handle the heavy lifting, and how it all works.

Using Docker is really cool, and it solves many of the problems in our workflow, but it also creates some new ones. Managing containers simply means a lot more work, and if you are not using an orchestration tool for it, you are doing it wrong. Once the number of containers starts to grow, they become really hard to manage.

All of our base images on the Docker Hub use Supervisor. Supervisor is a wonderful process-management tool: if a process dies, Supervisor restarts it. Because of the way Docker works, a container needs a foreground process running in order to stay alive. That process cannot run in the background; if it dies, the container dies with it, and you have to find a way to restart the container. This is exactly the kind of problem you want to avoid in the first place. The best thing about Supervisor is that it can handle multiple processes, so a container running a PHP application will run PHP-FPM, Nginx and Sendmail.
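As an illustration, a minimal supervisord.conf along these lines might look like the following; the program names and command paths are assumptions for the sketch, not our actual configuration:

  [supervisord]
  nodaemon=true    ; run supervisord in the foreground so the container stays alive

  ; keep PHP-FPM in the foreground and restart it if it dies
  [program:php-fpm]
  command=/usr/sbin/php5-fpm --nodaemonize
  autorestart=true

  ; "daemon off;" keeps Nginx in the foreground
  [program:nginx]
  command=/usr/sbin/nginx -g "daemon off;"
  autorestart=true

  ; -bD runs the Sendmail daemon without forking
  [program:sendmail]
  command=/usr/sbin/sendmail -bD
  autorestart=true

The image's CMD then only has to start supervisord in the foreground (for example, supervisord -n), and Supervisor takes care of all three processes from there.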

Orchestration tools

We spent about two weeks testing a series of orchestration options, including:

  • Deis
  • Shipyard
  • Panamax
  • Kubernetes
  • Tsuru.io
  • Decking.io
  • Maestro-ng

I do not want to give a detailed review of each option; here is just a brief overview.

Deis

A PaaS solution using Docker. Basically it is Heroku plus Docker. Installation is simple, but it is not flexible enough when it comes to storage. This was the most attractive option, but since we use Drupal, it was not for us. Moreover, it does not provide a UI.

Shipyard

We ended up using Shipyard, but only as a viewer. Shipyard is still evolving, and its biggest problem is that there is no simple way to manage containers automatically. As I said, we only use it as a viewer, to monitor the status of all containers and Docker services. If a container crashes, we just use Shipyard to restart that container, without having to restart all of the application's containers.

Panamax

It is promising, but it was not ready when we needed it. It also relies heavily on templates, which I personally dislike very much. The lack of agents was a major obstacle for our testing: without an agent, we would have had to install Panamax on every server.

Kubernetes

A PaaS solution, and the hardest one on the list to install and configure. It has far more features than we require, but lacks the one feature we need: Kubernetes does not handle storage.

Tsuru.io

A PaaS solution. It claims:

… Amazon S3 … is the right way to save content files to tsuru.

After reading that passage, we did not even have the desire to install it.

Decking.io

One of the alternatives to Fig; its biggest problem is that it has no multi-host capability.

Maestro-NG

Maestro-NG wins in several ways: it is easy to use, it has a command-line interface with simple commands, it supports multiple hosts, and everything is described in a YAML file.

We set up a server and installed Maestro-NG on it. Because we needed to open the Docker port on each web node, for security purposes only the Maestro-NG server is allowed to connect to Docker. We then organize all the maestro files in a single git project. Inside the project there is one directory per application, named after the application's fully qualified domain name (FQDN), and each directory contains a single maestro.yaml file:

  /subdomain.example.com/maestro.yaml
  /another-example.example.com/maestro.yaml
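For illustration, a minimal maestro.yaml for one of these applications might look roughly like this; the ship name, IP address and port are assumptions for the sketch, not our actual configuration:

  # maestro.yaml for subdomain.example.com (illustrative values only)
  name: subdomain.example.com
  ships:
    # a web node whose Docker port the Maestro-NG server can reach
    web1: {ip: 192.168.1.10}
  services:
    app:
      image: our-private-docker-registry/application_name
      instances:
        app-1:
          ship: web1
          ports: {http: 80}

With such a file in place, maestro pull fetches the latest image on each ship, and maestro restart recreates the containers from it.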

If we needed to test a specific project (we do not do this kind of testing), we would just create a maestro file for it and push it, and then we could treat it like any other project.

With Maestro-NG, our continuous delivery process is reduced to two simple commands:

  maestro pull; maestro restart

Since running these by hand is a bit time-consuming, we let Jenkins do it for us.

Jenkins

We had not used Jenkins before, so we practiced neither continuous integration (CI) nor continuous delivery (CD); everything was done by hand and was very error-prone. Creating a new workflow was an excellent opportunity to adopt it.

All of our projects have the same workflow:

  1. GitLab detects a push
  2. GitLab triggers a web hook that asks Jenkins to start a new job
  3. Jenkins clones the latest version of the project
  4. Jenkins runs the tests
  5. If the tests pass, Jenkins starts building a new Docker image
  6. Once the image is built, Jenkins pushes it to the private registry
  7. Jenkins connects to the Maestro-NG server via SSH and runs the command maestro pull; maestro restart

For small, untested projects the whole process takes less than 2 minutes, and some projects are even faster, between 25 and 35 seconds. The biggest project is a public one that is pushed to the Docker Hub, which takes about six minutes. There is one exception that takes 18-20 minutes: an old HTML site with a lot of video and other large files, about 1.8GB in total, so the build takes a very long time.

When we started configuring all the required virtual machines, we decided to install Jenkins and the Docker Registry on the same virtual machine, for two reasons. First, this virtual machine has plenty of space, enough for both; our web nodes are relatively small, while this virtual machine is much larger. Second, having the Docker Registry and Jenkins on the same virtual machine reduces the transfer time when pushing images to the registry. This has worked very well for us.

Jenkins tasks

For a normal, untested application, Jenkins runs the following script as its task:

  docker build --tag=our-private-docker-registry/application_name --no-cache .
  docker login --username="someusername" --password="somepassword" --email="someemail" https://our-private-docker-registry.fqdn
  docker push our-private-docker-registry/application_name

For tested applications, Jenkins does the following:

  make prepare-test
  sleep 90
  make install
  make test
  make clean-test
  docker build --tag=iiieperobot/dashi3 .
  docker login --username="iiieperobot" --email="someemail" --password="somepassword"
  docker push iiieperobot/dashi3

In the example above we push directly to the Docker Hub, but 99% of our jobs push to the private Docker Registry.

After the script has run, Jenkins connects to the Maestro-NG server via SSH and runs:

  cd /path_to_maestro/application_fqdn; maestro pull; maestro restart

Rebuilding base images

When a base image is rebuilt, we need to rebuild all the images that depend on it, so for each base image there is one more task:

  docker pull iiiepe/nginx-drupal

This task then uses a build trigger so that all the projects depending on this image are rebuilt.

Testing

While writing this, I was asked whether we use Docker when testing our projects. You guessed right: we do when needed. Sometimes, instead of mocking, we test against a real database. When we do that, we use the steps I described above, and to simplify the operation we use Makefiles, so that both Jenkins and developers can run the tests with make test.
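As an illustration, a Makefile along these lines might back the targets used in the Jenkins script above; the target names come from that script, while the container name and commands are assumptions for the sketch:

  # Illustrative Makefile (recipe lines are tab-indented)

  # Start a throwaway database container; the "sleep 90" in the
  # Jenkins script gives it time to come up.
  prepare-test:
  	docker run -d --name test-db -e MYSQL_ROOT_PASSWORD=test mysql

  # Install the application's dependencies (illustrative command).
  install:
  	composer install

  # Run the test suite against the database started by prepare-test.
  test:
  	phpunit --configuration phpunit.xml

  # Remove the database container once the tests are done.
  clean-test:
  	docker rm -f test-db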

That is all for this article. In the next and final article, I will talk about service discovery and load balancing.

Source: A production ready Docker workflow. Part 3: Orchestration tools (translation: Liang Xiaoyong; proofreading: Guo Lei)
