Immutable infrastructure and containers

In this article, the author explains what immutable infrastructure is, what its advantages are, and how to build it, describing two approaches and showing how Tutum embodies these advantages when building containerized applications.

A while ago, I attended a Docker meetup in New York, where Michael Bryzek (CTO of Gilt) talked about the advantages of immutable infrastructure when using Docker. Some people still find the idea puzzling, but at Tutum we strongly support this model, which benefits greatly from the rise of containers.

What is "Immutable Infrastructure"?

When you deploy an update to an application, instead of trying to update instances in place, you create new instances (servers or containers) and delete the old ones. Once your application is running, you never modify it again. Advantages such as repeatability, reduced management overhead, and easier rollbacks follow naturally; Chad Fowler, Michael DeHaan, and Florian Motlik have discussed them in depth in their articles.

To achieve these advantages, your application needs to meet two basic requirements:

  • Your application processes are stateless. Their state should be stored in a service outside the scope of your immutable infrastructure (volumes attached to running containers are an exception, and a problem we will address below).
  • You have a template or a set of instructions that can be used to deploy an instance of an application from scratch.
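The second requirement is usually met with a Dockerfile. Here is a minimal sketch, assuming a hypothetical Node.js application (the base image, port, and start command are illustrative assumptions, not from the original article):

```dockerfile
# Hypothetical web application; everything needed to run it
# is baked into the image, so an instance can always be
# deployed from scratch.
FROM node:0.12
WORKDIR /app
COPY package.json /app/
RUN npm install --production
COPY . /app
EXPOSE 3000
CMD ["npm", "start"]
```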

The second requirement is the key one. Although there are many ways to meet it, containers were practically made for it.

Using configuration management software

Do you need containers for this model? Technically, no. But containers help enormously.

Without containers, you can still achieve immutability by deploying new virtual machines. For each new VM you can use a VM template containing the new version of the application (building the template can be automated), or configure it with configuration management software such as Chef or Puppet. Either way, the goal is to deploy a new application instance from scratch, ready to handle traffic.

Once you have done this, you can switch your load balancer to start sending requests to the new instances and then terminate the old ones. With this model, the complexity of your 'recipes' drops, because you can remove all the code that handled in-place application upgrades.
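The load balancer switch can be as simple as editing the backend list and reloading. A minimal haproxy sketch, with hypothetical instance addresses:

```
frontend http-in
    bind *:80
    default_backend app_servers

backend app_servers
    # Point at the new instance and reload haproxy; the old
    # instance (commented out) can be terminated once the new
    # one is serving traffic.
    # server app-v1 10.0.0.5:3000 check
    server app-v2 10.0.0.6:3000 check
```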

But honestly, creating a VM template per application that only works on a particular cloud provider is not an ideal solution (even if it can be automated, the process is still cumbersome), and as any experienced developer knows, continually having to test configuration management scripts is something to avoid.

Working with containers

Why use containers? Because compared with snapshotting virtual machines or running configuration scripts against servers, containers can be built, tested, and deployed quickly. And once your application image is built, tested, and tagged, deploying it is very efficient, because you have essentially removed the underlying operating system configuration from the equation.

Your servers can simply run the cloud provider's base template with the latest vanilla OS, so you can delegate that layer to your provider. If the image is kept patched and optimized for virtual machines (you don't care about the details, as long as the VM can run Docker), performance can be even better.

Every time you want to push a new version of the application, you no longer need to deploy new servers: all of the application's dependencies and logic are built into the container, which is simply replaced by its new version. Your servers stay put, yet you keep the advantages of the immutable infrastructure model, which significantly reduces the time it takes to ship new versions.

Best of all, you still get the other benefits of containers: you are not tied to any cloud provider or Linux distribution (as long as it runs Docker), and if your application works locally, it will work on any provider. Isn't that the dream?

So, with containers, how do you automate the deployment of a new version of your application? There are two main steps:

  • Build your new image. While there are many ways to build an application image (manually, with configuration management software, etc.), common practice is to use a simple, optimized Dockerfile. Before pushing the image, you can test it on a CI/CD platform. Tag the image with a version number for production deployments; this makes it easy to roll the application back if necessary.
  • Deploy your new containers. You can deploy containers (manually or automatically) on new or existing servers, then switch your load balancers to send traffic to the newly deployed containers. This can happen at the instance level (for example, one application container plus a dynamically configured load balancer per AWS EC2 instance) or at the container level (an haproxy or nginx container forwarding traffic to your application containers).
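The two steps above can be sketched as a small script. The image name, version tag, and container names are hypothetical; the DRY_RUN guard prints the commands instead of executing them, so the flow can be read without a Docker daemon:

```shell
#!/bin/sh
# Sketch of the build-and-deploy flow; "myorg/myapp" and the
# version tag are placeholder assumptions.
IMAGE="myorg/myapp"
VERSION="1.4.0"
DRY_RUN="${DRY_RUN:-1}"   # 1 = just print the commands

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

# Step 1: build and tag the new image, then push it to a registry
run docker build -t "$IMAGE:$VERSION" .
run docker push "$IMAGE:$VERSION"

# Step 2: start the new container next to the old one; switch the
# load balancer to it, then remove the old container
run docker run -d --name "myapp-$VERSION" "$IMAGE:$VERSION"
run docker rm -f myapp-previous
```

Because the image carries a version tag, rolling back is just redeploying the previous tag.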

How do you automate step 2 when an application spans multiple hosts and containers? That's where Tutum comes in.

Working with Tutum

With Tutum, deploying a new version of an application becomes a trivial task: you simply change the image tag in the service definition and click "redeploy".

In non-production deployments, where rolling back to a particular version is less important, you can even automate the whole process using our automatic redeployment feature, or redeploy triggers linked to Docker Hub.

Once an automated redeployment starts, Tutum replaces the old containers with new ones, one by one. We provide a tutum/haproxy image that configures itself automatically based on the containers it is linked to. Whether you deploy it with Docker links locally or inside Tutum, it reconfigures itself automatically whenever a linked service scales or is redeployed.
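With plain Docker links on a local host, that pattern looks roughly like this (image and container names are hypothetical; tutum/haproxy builds its backend list from the linked containers):

```
$ docker run -d --name web1 myorg/myapp:1.4.0
$ docker run -d --name web2 myorg/myapp:1.4.0
$ docker run -d --name lb -p 80:80 \
    --link web1:web1 --link web2:web2 tutum/haproxy
```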

If you want to run the new containers in parallel with the old ones so you can roll back quickly, you don't need a super-complex tool like Asgard.

Just deploy a new service using the new image tag and add a link to it from the tutum/haproxy service: Tutum will detect the change and automatically start forwarding requests to both the new and the old service. When you are ready to switch over, simply terminate the old service. By default, tutum/haproxy automatically detects dead application containers and resends requests to healthy ones.

What about data volumes? I know I just said applications should be stateless, but in Tutum, volumes persist across redeployments. So if you redeploy a tutum/mysql container (which by default creates a data volume for /var/lib/mysql), Tutum reuses the volume and its data is preserved.
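A minimal illustration (the container name is hypothetical; tutum/mysql declares /var/lib/mysql as a volume by default):

```
$ docker run -d --name db tutum/mysql
$ docker inspect -f '{{ .Volumes }}' db    # shows the /var/lib/mysql volume
```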

Once your container is running, don't touch it! Use docker exec (or Tutum's "terminal" feature) only for debugging and one-off administrative tasks; never change your application code this way. Changes to the application should be made through the image and environment variables, not on the running instance.
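For example, a one-off debugging shell in a running container (the container name is hypothetical):

```
$ docker exec -it myapp-1.4.0 sh
```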

What's next?

We are working to make the journey from code to deployment clear, simple, and powerful. We have some exciting new features to announce in the coming weeks. In the meantime, we are listening to the community and welcome your thoughts and feedback.

Original link: Immutable Infrastructure and Containers (Translator: Hong Guoan; Reviewer: Wei Xiaohong)

============================================
About the translator: Hong Guoan is a programming enthusiast and currently a third-year university student, who hopes to contribute to the community through translation while improving his own knowledge.
