Kubernetes Methodology: Expansion and Reliability
In the first article, we explored the concepts of pods and services in Kubernetes. Now, let's look at how replication controllers (RCs) provide flexible scaling and reliability. We will also discuss how to bring persistence to cloud-native applications running on Kubernetes.
RC: Flexible scaling and management of microservices <br /> If the pod is the unit of deployment and services are the abstraction layer, then what tracks the health of pods?
That is the job of the replication controller (RC).
After pods are deployed, they need to be scaled and tracked. The RC definition file specifies a baseline number of pods that must be available at any given point. Kubernetes maintains that desired configuration by tracking the number of running pods, killing pods or creating new ones as needed to meet the baseline.
An RC can also track the health of pods. If a pod becomes unhealthy, it is killed and a new pod is created in its place. Since an RC essentially embeds a pod definition, its YAML or JSON manifest may contain properties for the restart policy, liveness probes, and health-check endpoints.
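A minimal sketch of such a manifest is shown below; the name `web-rc`, the label `app: web`, and the `nginx` image are illustrative, not from the original article:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc              # illustrative name
spec:
  replicas: 3               # baseline number of pods to keep running
  selector:
    app: web                # the RC manages all pods matching this label
  template:                 # embedded pod definition
    metadata:
      labels:
        app: web
    spec:
      restartPolicy: Always
      containers:
      - name: web
        image: nginx        # illustrative image
        ports:
        - containerPort: 80
        livenessProbe:      # health check; failing pods are killed and replaced
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
```

If the number of healthy pods matching the selector drops below `replicas`, Kubernetes schedules new ones; if there are too many, it kills the excess.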
Kubernetes also supports automatic pod scaling based on CPU utilization, somewhat similar to EC2 Auto Scaling or GCE autoscaling. At runtime, an RC can be configured to automatically scale pods against a specific CPU-utilization threshold, and the minimum and maximum number of pods can be specified in the same command.
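One way to express this declaratively is with a HorizontalPodAutoscaler targeting the RC. This is a sketch under the assumption of an RC named `web-rc` (an illustrative name); the thresholds are examples:

```yaml
# Roughly equivalent to:
#   kubectl autoscale rc web-rc --min=2 --max=10 --cpu-percent=70
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa             # illustrative name
spec:
  scaleTargetRef:           # the controller to scale
    apiVersion: v1
    kind: ReplicationController
    name: web-rc
  minReplicas: 2            # lower bound on pod count
  maxReplicas: 10           # upper bound on pod count
  targetCPUUtilizationPercentage: 70   # scale out above this average CPU load
```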
Flat Network: The Secret Weapon <br /> Networking is one of the more complex challenges of containerization. Traditionally, the only way to expose a container to the outside world is through a port on the host, which becomes complicated as containers scale out. Rather than leaving network configuration and integration to the administrator, Kubernetes comes with a network model that is easy to use.
Every node, service, pod, and container gets an IP address. A node's IP address is assigned by the physical router; combined with an assigned port, it becomes the endpoint for accessing a service. Kubernetes services also get IP addresses, although these are not routable outside the cluster. All communication happens without a NAT layer, making the network flat and transparent.
This model brings several benefits:
- All containers can communicate with each other without NAT.
- All nodes can communicate with all pods and containers in the cluster without NAT.
- Each container sees the same IP address for itself that others see for it, even as pods are scaled through the RC.
Best of all, port mapping is handled by Kubernetes. All pods belonging to a service are exposed through the same port on every node. Even if no pod is scheduled on a particular node, a request arriving there is automatically forwarded to the appropriate node.
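A `NodePort` service illustrates this same-port-on-every-node behavior. This sketch assumes pods labeled `app: web`; the names and port numbers are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # illustrative name
spec:
  type: NodePort
  selector:
    app: web                # traffic is routed to pods carrying this label
  ports:
  - port: 80                # cluster-internal service port
    targetPort: 80          # port the container listens on
    nodePort: 30080         # the same port is opened on every node
```

A request to `<any-node-ip>:30080` reaches a backing pod even if that pod runs on a different node.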
This magic is achieved through a combination of network agents: kube-proxy, iptables, and etcd. The current state of the cluster is maintained in etcd, which kube-proxy queries at runtime. By manipulating the iptables rules on each node, kube-proxy routes each request to the correct destination.
Kube-proxy also handles basic load balancing for services. Service endpoints are managed through environment variables, much like Docker links; these variables resolve to the ports that the service exposes. Kubernetes 1.1 includes an option to use native iptables, which reduces latency by about 80%. This design eliminates CPU overhead from proxying in userspace, improving efficiency and scalability.
Persistence: Bringing State to Containers <br /> Containers are ephemeral. When they move from one host to another, they do not carry state with them. For production workloads, persistence is a must: any useful application has a database behind it.
By default, pods are ephemeral too; each time one is resurrected, it starts from a blank state. It is, however, possible to set up data volumes shared by the containers running in the same pod, identified by the `emptyDir` moniker. This is somewhat similar to a Docker data volume, where a directory of the host file system is exposed inside the container. An `emptyDir` volume tracks the pod's lifecycle: when the pod is deleted, the volume is deleted with it. Because these volumes are bound to a single host, they are not available on other nodes.
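A minimal sketch of two containers sharing an `emptyDir` volume; the pod name, container names, and `busybox` image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch      # illustrative name
spec:
  volumes:
  - name: scratch
    emptyDir: {}            # created with the pod, deleted with the pod
  containers:
  - name: writer
    image: busybox          # illustrative image
    command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
    volumeMounts:
    - name: scratch
      mountPath: /data      # both containers see the same directory
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -f /data/log"]
    volumeMounts:
    - name: scratch
      mountPath: /data
```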
To bring persistent data to pods regardless of the node on which they are scheduled, Kubernetes supports persistent volumes (PVs) and persistent volume claims (PVCs). A PVC relates to a PV much as a pod relates to a node. When a pod is created, it can bind to a specific volume through a claim. PVs can be backed by a variety of plugins, such as GCE persistent disks, Amazon Elastic Block Store (EBS), Network File System (NFS), iSCSI, GlusterFS, and RBD.
Setting up persistence involves configuring the underlying file system or cloud volume, creating a persistent volume, and finally creating a claim to associate the pod with the volume. This decoupling completely separates the pod from the volume: the pod does not need to know the exact file system or persistence engine backing it. Some file systems, such as GlusterFS, can themselves be containerized, making configuration even easier.
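The three-step workflow can be sketched as the manifests below. The NFS server address, paths, names, and `mysql` image are all illustrative assumptions, not from the original article:

```yaml
# 1) The administrator registers a persistent volume (NFS used here as an example)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  nfs:
    server: 10.0.0.5        # illustrative NFS server
    path: /exports/data
---
# 2) The user claims storage without knowing what backs it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
# 3) The pod mounts the claim, not the volume itself
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
spec:
  containers:
  - name: db
    image: mysql            # illustrative image
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim
```

Because the pod references only the claim, the backing volume can be swapped from NFS to, say, EBS without touching the pod definition.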
Conclusion <br /> Containers are no longer a new concept; Google has run most of its web-scale workloads in containers for over a decade. The lessons learned from that experience were incorporated into the construction of Kubernetes, and they can also be ported to other orchestration platforms. Kubernetes solves problems that Google's SREs faced a decade ago, and that experience is shaping how container orchestration tools advance.
Most importantly, Kubernetes has become a focal point of the container ecosystem, serving related services as a valuable open source platform. Understanding the current and future role of Kubernetes is necessary for anyone following the orchestration tools market.
This article was translated by Only Cloud Technology; if reproduced, it must be credited as reproduced from "Only Cloud Technology," with a link to the original article.