DockOne Technology Sharing (20): An Introduction to Docker Swarm, One of the Three Musketeers
The Swarm project is one of the "Three Musketeers" released by Docker Inc. It provides container cluster services, with the goal of helping users manage multiple Docker Engines so that a container cluster can be used as easily as a single Docker Engine. This article introduces Swarm from three angles: the current state of the project, the state of the Swarm community, and Swarm's future plans. The aim is to give everyone a complete picture of Swarm, and hopefully to encourage more people to use it.
In practice we may have many applications, and an application itself may be complex enough that the resources a single Docker Engine provides cannot meet its requirements. Applications also have reliability requirements and need to avoid single points of failure, so they are bound to be distributed across multiple Docker Engines. Against this background, the Docker community produced the Swarm project.
What is Swarm?
The name "Swarm" is particularly apt. According to Wikipedia, swarm behavior refers to clustering behavior in animals: a swarm of bees, a school of fish, or geese flying south in autumn are all examples of swarm behavior.
The Swarm project works the same way: it aggregates multiple Docker Engines into one large virtual Docker Engine that provides container services externally. The cluster exposes the Swarm API, so users can operate the Docker cluster just as they would a single Docker Engine. Swarm's main advantages are:
- Swarm exposes the standard Docker API. The advantage is that an existing system that already uses Docker Engine can be switched over to Swarm smoothly, without changing the existing system.
- For users, prior experience with Docker carries over directly. Swarm is very easy to get started with, and both the learning cost and the cost of secondary development are low. Swarm itself focuses purely on Docker cluster management, so it is very lightweight and consumes very few resources.
- "Batteries included but swappable": in short, a plugin mechanism. Each of Swarm's modules is abstracted behind an API, so users can implement custom versions to suit their own needs.
- Swarm's support for Docker command-line parameters is quite complete, and Swarm is currently released in step with Docker, so new Docker features are reflected in Swarm promptly.
Swarm architecture
- Swarm provides two external interfaces. One is the Docker API, responsible for the lifecycle management of containers and images; the other is the Swarm cluster-management CLI, used to manage the cluster itself.
- The Scheduler module implements scheduling. When a container is created through Swarm, the Scheduler selects an optimal node for it. It contains two sub-modules: Filter and Strategy. Filter screens the nodes to find those that meet the requirements (enough resources, node healthy, and so on); Strategy then picks the optimal node from the filtered set according to a policy (for example, comparing nodes and choosing the one with the most free resources). Both Filter and Strategy can be customized by users.
- Swarm abstracts the cluster behind a Cluster API and supports two kinds of clusters: Swarm's own native cluster and a Mesos-based cluster.
- The Leadership module is used for HA of the Swarm Manager itself, implemented in an active/standby fashion.
- The Discovery Service module provides node discovery.
- On each node there is an Agent that connects to the Discovery Service and reports the IP and port of that node's Docker daemon; the Swarm Manager reads the node information directly from the service discovery module.
Introduction to Swarm's modules
The Swarm Manager CLI is used to create the cluster. As the diagram shows, the cluster can be built in three steps.
Once the Swarm container cluster has been created, you can use ordinary Docker commands to create containers on the Swarm cluster, exactly as you would against a single Docker Engine.
Service discovery is mainly used for node discovery in Swarm. The Agent on each node registers the Docker Engine's IP and port with the service discovery system, and the Manager reads the node information from the service discovery module. Swarm supports three types of service discovery backends:
The first is the hosted discovery service provided by Docker Hub; it requires external network access.
The second is a distributed KV store; etcd, ZooKeeper, and Consul are currently supported.
The third is static IPs. You can use a local file or specify node IPs directly. This approach requires no additional components and is generally used for debugging.
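The register-then-read flow described above can be sketched with a toy in-memory registry. This is an illustration only: the class, field names, and TTL value are assumptions for the sketch, not Swarm's real discovery code, which talks to a hosted service or a KV store such as etcd.

```python
import time

class DiscoveryService:
    """Toy in-memory stand-in for a Swarm discovery backend
    (hosted discovery, a KV store, or a static node list)."""

    def __init__(self, ttl=30):
        self.ttl = ttl          # seconds before a registration expires
        self._entries = {}      # "ip:port" -> last heartbeat timestamp

    def register(self, addr):
        # Each node's Agent periodically re-registers its Docker
        # daemon's "ip:port" to keep the entry alive.
        self._entries[addr] = time.time()

    def nodes(self):
        # The Swarm Manager reads back only entries whose heartbeat
        # has not yet expired.
        now = time.time()
        return sorted(a for a, t in self._entries.items()
                      if now - t < self.ttl)

disco = DiscoveryService(ttl=30)
disco.register("192.168.0.10:2375")
disco.register("192.168.0.11:2375")
print(disco.nodes())  # → ['192.168.0.10:2375', '192.168.0.11:2375']
```

The TTL models why agents must heartbeat: a node that stops re-registering silently drops out of the manager's view.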
When a user creates a container, the scheduling module selects an optimal node for it. The selection proceeds in two stages:
The first stage is filtering: nodes that do not meet the requirements are filtered out. There are five filters:
- Constraints, the constraint filter, screens nodes by conditions such as operating system type, kernel version, or storage type. You can also define custom constraints: when starting the daemon, use labels to declare the characteristics of that host.
- Affinity, the affinity filter, supports container affinity and image affinity. For example, in a web application where you want the DB container placed together with the Web container, this filter can achieve that.
- Dependency, the dependency filter. If you use --volumes-from / --link / --net when creating a container, the new container is placed on the same node as the container it depends on.
- The Health filter screens nodes by state, removing faulty nodes.
- The Ports filter screens nodes by the ports already in use.
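The filter stage is essentially a chain of predicates over the node list. The sketch below illustrates that idea with three of the five filters; the node records and field names are hypothetical, not Swarm's internal representation.

```python
# Hypothetical node records for illustration.
nodes = [
    {"name": "node-1", "labels": {"storage": "ssd"},  "healthy": True,  "used_ports": {80}},
    {"name": "node-2", "labels": {"storage": "ssd"},  "healthy": True,  "used_ports": set()},
    {"name": "node-3", "labels": {"storage": "disk"}, "healthy": False, "used_ports": set()},
]

def constraint_filter(nodes, key, value):
    # Keep only nodes whose daemon was started with a matching label.
    return [n for n in nodes if n["labels"].get(key) == value]

def health_filter(nodes):
    # Drop faulty nodes.
    return [n for n in nodes if n["healthy"]]

def port_filter(nodes, port):
    # Drop nodes where the requested host port is already taken.
    return [n for n in nodes if port not in n["used_ports"]]

# Chain the filters to keep only the candidate nodes, then a Strategy
# would pick one winner from the survivors.
candidates = port_filter(health_filter(constraint_filter(nodes, "storage", "ssd")), 80)
print([n["name"] for n in candidates])  # → ['node-2']
```

node-1 is excluded by the port filter, node-3 by both the constraint and health filters, leaving node-2 as the only candidate.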
The second stage of scheduling selects an optimal node from the filtered set according to a policy. There are three strategies:
- Binpack: under otherwise equal conditions, choose the node with the most resources already in use. With this strategy, containers are packed together on as few nodes as possible.
- Spread: under otherwise equal conditions, choose the node with the least resources in use. With this strategy, containers are distributed evenly across the nodes.
- Random: choose a node at random.
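The difference between Binpack and Spread comes down to whether you maximize or minimize the fraction of resources already in use. A minimal sketch, with hypothetical node records scoring on CPU usage only (real strategies weigh CPU, memory, and container count):

```python
import random

nodes = [
    {"name": "node-1", "cpus_used": 6, "cpus_total": 8},  # mostly full
    {"name": "node-2", "cpus_used": 1, "cpus_total": 8},  # mostly empty
]

def usage(n):
    return n["cpus_used"] / n["cpus_total"]

def binpack(nodes):
    # Pick the node with the LEAST free resources -> packs containers
    # together, leaving other nodes empty.
    return max(nodes, key=usage)

def spread(nodes):
    # Pick the node with the MOST free resources -> spreads containers
    # evenly across the cluster.
    return min(nodes, key=usage)

def random_strategy(nodes):
    return random.choice(nodes)

print(binpack(nodes)["name"])  # → node-1
print(spread(nodes)["name"])   # → node-2
```

The same inputs yield opposite picks, which is exactly the trade-off the two strategies express: density versus even load.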
The Leadership module provides HA for the Swarm Manager itself.
An HA mechanism was introduced to prevent the Swarm Manager from being a single point of failure. The Swarm Manager itself is stateless, so HA is comparatively easy to implement. It works in an active/standby fashion: when the primary node fails, a new primary is elected. The election is implemented with a distributed lock; etcd, ZooKeeper, and Consul are currently supported as the distributed stores that provide the lock. When a standby node receives a request, it forwards the request to the primary node.
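The lock-based election described above can be sketched as follows. The `DistributedLock` here is an in-process stand-in for the lock a KV store such as etcd would provide; class and method names are illustrative assumptions, not Swarm's actual code.

```python
import threading

class DistributedLock:
    """Toy stand-in for a lock backed by etcd/ZooKeeper/Consul."""
    def __init__(self):
        self._lock = threading.Lock()
        self.holder = None

    def try_acquire(self, who):
        if self._lock.acquire(blocking=False):
            self.holder = who
            return True
        return False

    def release(self):
        self.holder = None
        self._lock.release()

class Manager:
    def __init__(self, name, lock):
        self.name, self.lock = name, lock
        self.is_leader = False

    def campaign(self):
        # Whoever grabs the lock becomes primary; the rest stay on
        # standby and forward incoming requests to the leader.
        self.is_leader = self.lock.try_acquire(self.name)

lock = DistributedLock()
m1, m2 = Manager("manager-1", lock), Manager("manager-2", lock)
m1.campaign(); m2.campaign()
print(lock.holder)        # → manager-1 wins the first election

# Simulate primary failure: the lock is released (in a real store the
# lease would expire) and a standby wins the re-election.
lock.release()
m1.is_leader = False      # the failed primary steps down
m2.campaign()
print(lock.holder)        # → manager-2 takes over
```

Because the manager is stateless, the new primary needs no state transfer; it simply re-reads node information from the discovery service.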
That covers the introduction of the framework's modules. Next, let's look at how Swarm integrates with surrounding projects.
Swarm's integration with surrounding projects
First, the integration with the Three Musketeers.
The Three Musketeers are the three projects Docker Inc. released at the end of 2014, and the three work closely together. Consider this picture:
At the bottom is Machine. With Machine you can create hosts containing Docker Engine on different cloud platforms. Through its driver mechanism, Machine currently supports deploying a Docker Engine environment on multiple platforms, such as Amazon and OpenStack. Once the Docker Engines have been created, Swarm comes into play: Swarm manages the Docker Engines on the hosts and provides container cluster services externally. On top sits the Compose project, which provides orchestration for container-based applications. The user describes a multi-container application in a YAML file; Compose parses the file, calls the Docker API, and creates the corresponding containers on the Swarm cluster.
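As a concrete illustration of the YAML file mentioned above, here is a hypothetical two-container application in the Compose v1 syntax of that era (service names, images, and the password value are made up for the example):

```yaml
# docker-compose.yml — a web container linked to a db container.
# Compose parses this file and calls the Docker API to create both
# containers; pointed at a Swarm manager, they land on cluster nodes.
web:
  image: nginx
  ports:
    - "80:80"
  links:
    - db          # the Dependency filter keeps web and db on one node
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
```

Note that because `links` is used, Swarm's dependency filter schedules both containers onto the same node, as described in the scheduling section.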
As we know, a large ecosystem has grown up around Docker, so Swarm not only integrates with its own siblings but also actively integrates with surrounding projects. For example, Swarm can now integrate with Mesos. The integration is done Framework-style: Swarm implements the interfaces a Mesos framework requires. This major feature is currently in the experimental stage.
Swarm community status
The Swarm project was released at the end of 2014. In just half a year of development it has reached version 0.4 and is still evolving rapidly. Swarm releases now ship alongside Docker, roughly one version every two months; development is iterative, with a round of iteration completed about every two weeks. The way to engage with the community is largely the same as with other communities: when you hit a problem, create an issue and describe it, ideally with environment information and the steps to reproduce, which helps with diagnosing the problem. You can also communicate directly via IRC or mail. The Swarm community warmly welcomes participation: whether it is problems and bugs encountered in use, or places where Swarm's functionality currently cannot meet your needs, all are welcome to be raised and discussed together.
If you are interested in the code, you can follow the Docker community's code submission process to submit code; contributions to the Swarm community are very welcome.
Swarm's future plans
- The first is to support the full Docker API. The support rate is now about 95%; parts of the remainder are still problematic and need improvement.
- The second is networking: implementing overlay networks through the Libnetwork project.
- The third is self-healing: with this feature, when a node fails, the containers on the failed node are recreated on other nodes.
- The fourth is the Global Scheduler, used to create a container on every node. For example, if you want a logging container created on each node to collect logs, this feature achieves that.
- Finally, volumes, which the community has been discussing recently.
Q & A
Q: How should one choose between Kubernetes and Swarm?
A: This is a very open topic; choose whichever suits your own situation. Swarm exposes the Docker API, is itself lightweight, has low learning and secondary-development costs, and is a pluggable framework. Functionally speaking, Swarm is a subset of Kubernetes; personally, I feel that Compose + Swarm ≈ Kubernetes.
Q: What is Swarm's ultimate goal? Is it just to manage containers, or is there any consideration of improving resource utilization, for example elastically scaling resources and ultimately balancing the load across all machines to prevent some running at low or empty load and wasting resources?
A: For auto-scaling, my personal feeling is that it may be achieved through Compose. If you are interested, you can submit a proposal in the Swarm community.
Q: You said node selection in Swarm can be customized, referring to the scheduling strategy. I feel that only three strategies are not powerful enough?
A: Yes, you can implement the corresponding API to suit your own requirements.
Q: How is security authentication done for calls to the Swarm API, and for Swarm's calls to the Docker API?
A: Security is achieved through the SSL protocol, which provides communication security and authentication. Secure communication is supported both externally between Swarm and its clients, and between Swarm and the Docker Engines.
Q: How does Swarm handle cross-node links?
A: Cross-node links are not currently supported. If a link is used, the newly created container is scheduled onto the same node as the linked container.
Q: Is Swarm's scheduler also pluggable? Can Mesos be used for resource scheduling?
A: The Swarm scheduler is pluggable. Mesos uses a two-level scheduling framework: at the first level, Mesos offers resources that meet the framework's requirements to the framework; at the second level, the framework (Swarm) uses its own scheduler to allocate those resources to tasks.
Q: How are IPs managed in Swarm? Are IPs dynamically allocated across the nodes?
A: At present the networking part still relies on Docker Engine's own capabilities. Integration with libnetwork will follow; how exactly IPs will be managed is under discussion.
Q: Does Swarm support scheduling based on Docker labels?
A: Yes, via the Constraints filter.
Q: For the networking part, besides libnetwork are there other plans under consideration?
A: Libnetwork itself also provides a plugin mechanism, so my personal understanding is that it can integrate well with other network projects.
The above content is based on a WeChat group sharing session held on September 8, 2015. The speaker, Xian Chaobo, is a senior engineer in Huawei's IT Cloud Computing Architecture and Design Department, working on cloud computing technology research and currently responsible for the research and practice of Docker-related technology in the cloud computing field. He began focusing on the Docker Swarm project in early 2015 and has actively contributed to the community, becoming the first Docker community Maintainer. DockOne organizes targeted technology sharing every week; interested students are welcome to add WeChat: liyingjiesx and suggest topics they would like to hear about.