Understand the Kubernetes Core Concepts in Ten Minutes

This article briefly introduces the core concepts of Kubernetes. Since formal definitions can be found in the Kubernetes documentation, the article avoids long stretches of dry text. Instead, we use diagrams (some of them animated) and examples to explain these concepts; we found that some of them, such as Service, are difficult to fully understand without a diagram. Where appropriate we also provide links to the Kubernetes documentation for readers who want to learn more.
Let's get started.

What is Kubernetes?

Kubernetes (k8s) is an open source platform for automating container operations, including deployment, scheduling, and scaling across clusters of nodes. If you have ever used Docker to deploy containers, you can think of Docker as a low-level component used internally by Kubernetes. Kubernetes supports not only Docker but also Rocket, another container technology.
With Kubernetes you can:

  • Automate container deployment and replication
  • Scale containers up or down at any time
  • Organize containers into groups and provide load balancing between them
  • Easily roll out new versions of application containers
  • Provide container resilience: if a container fails, replace it, and so on

In fact, all you need to use Kubernetes is a single deployment file; you can deploy a complete cluster of multi-tier containers (front end, back end, and so on) with a single command:

  $ kubectl create -f single-config-file.yaml

kubectl is a command-line program that interacts with the Kubernetes API. Now let's introduce some of the core concepts.
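To make this concrete, here is a sketch of what such a config file might contain. The file name matches the command above, but the Pod name, labels, and image are illustrative assumptions, not taken from the original article:

```yaml
# Hypothetical single-config-file.yaml: defines one Pod running an nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # illustrative name
  labels:
    app: myapp
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9   # illustrative image and tag
      ports:
        - containerPort: 80
```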

Cluster

A cluster is a group of nodes, which can be physical servers or virtual machines, with the Kubernetes platform installed. The figure below shows such a cluster. Note that the diagram has been simplified to emphasize the core concepts; a typical Kubernetes architecture diagram can be found in the Kubernetes documentation.
(Figure: a simplified Kubernetes cluster)
The figure contains the following components; Service and Label are drawn with special icons:

  • Pod
  • Container
  • Label
  • Replication Controller
  • Service
  • Node
  • Kubernetes Master

Pod

Pods (the green boxes above) are scheduled onto nodes and contain a group of containers and volumes. Containers in the same Pod share the same network namespace and can communicate with each other via localhost. Pods are ephemeral rather than persistent entities. You may have these questions:

  • If Pods are ephemeral, how can I persist container data so that it survives across restarts? Kubernetes supports the concept of volumes, so you can use a persistent volume type.
  • Do I create Pods manually? If I want multiple copies of the same container, do I have to create them one by one? You can create a single Pod manually, but you can also use a Replication Controller to create multiple copies from a Pod template, as described in more detail below.
  • If Pods are ephemeral, their IP addresses may change on restart, so how can a front-end container correctly and reliably point to a back-end container? This is where a Service comes in, as described in more detail below.
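The points above can be sketched in a manifest. The following is an illustrative Pod with two containers sharing a volume and the Pod's network namespace; all names and images are assumptions for the example:

```yaml
# Sketch: a Pod whose two containers share a volume and a network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}           # ephemeral volume visible to both containers
  containers:
    - name: app
      image: myapp:1.0       # illustrative image
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers live in the same Pod, the sidecar could reach the app over localhost; for data that must outlive the Pod, a persistent volume type would replace the `emptyDir` volume shown here.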

Label

As shown in the figure, some Pods have Labels. A Label is a key/value pair attached to a Pod to convey user-defined attributes. For example, you might create "tier" and "app" labels, marking front-end Pods with (tier=frontend, app=myapp) and back-end Pods with (tier=backend, app=myapp). You can then use Selectors to select Pods with specific Labels and apply a Service or Replication Controller to them.
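In manifest form, the labels from this example would sit in the Pod's metadata. The Pod name and image below are illustrative:

```yaml
# Sketch: a front-end Pod carrying the labels from the example above.
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod              # illustrative name
  labels:
    tier: frontend
    app: myapp
spec:
  containers:
    - name: web
      image: myapp-frontend:1.0   # illustrative image
```

You could then list only the front-end Pods with a label selector, e.g. `kubectl get pods -l tier=frontend,app=myapp`.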

Replication Controller

Do you have to create Pods manually? If you want multiple copies of the same container, do they have to be created one by one? And can Pods be organized into logical groups?

A Replication Controller ensures that the specified number of Pod "replicas" is running at all times. If you create a Replication Controller for a Pod and specify three replicas, it creates three Pods and continuously monitors them. If a Pod stops responding, the Replication Controller replaces it, keeping the total at three. As the following animation shows:
(Animation: a Replication Controller replacing an unresponsive Pod to keep three replicas running)
If the previously unresponsive Pod comes back, so that there are now four Pods, the Replication Controller terminates one of them to bring the total back to three. If you change the number of replicas to five at runtime, the Replication Controller immediately starts two new Pods, ensuring a total of five. You can also scale Pods down in the same way, which is useful when performing a rolling upgrade.

When creating a Replication Controller, you need to specify two things:

  1. Pod Template: the template used to create Pod replicas
  2. Labels: the labels of the Pods that the Replication Controller should monitor
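These two things appear as the `template` and `selector` fields of a Replication Controller manifest. The following sketch keeps three replicas of a back-end Pod; the controller name and image are illustrative:

```yaml
# Sketch: a Replication Controller keeping three replicas of a back-end Pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend-rc          # illustrative name
spec:
  replicas: 3               # desired number of Pod copies
  selector:                 # labels of the Pods to monitor
    tier: backend
    app: myapp
  template:                 # Pod Template used to create the replicas
    metadata:
      labels:
        tier: backend
        app: myapp
    spec:
      containers:
        - name: backend
          image: myapp-backend:1.0   # illustrative image
          ports:
            - containerPort: 8080
```

Changing the replica count at runtime, as described above, could then be done with something like `kubectl scale rc backend-rc --replicas=5`.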

Now that you have created some Pod replicas, how do you load-balance across them? What we need is a Service.

Service

If Pods are ephemeral and their IP addresses may change on restart, how can a front-end container correctly and reliably point to a back-end container?

A Service is a layer of abstraction that defines a logical set of Pods and a policy for accessing them. A Service finds its group of Pods by Label. Because a Service is abstract, it usually does not appear in diagrams, which makes the concept harder to grasp.

Now, assume there are two back-end Pods, and define a back-end Service named 'backend-service' with the label selector (tier=backend, app=myapp). The 'backend-service' Service accomplishes two important things:

  • A local-cluster DNS entry is created for the Service, so a front-end Pod only needs to do a DNS lookup for the host named 'backend-service' to resolve an IP address usable by the front-end application.
  • Now the front end has the back-end Service's IP address, but which of the two back-end Pods should it access? The Service provides transparent load balancing between the two back-end Pods, distributing requests to either of them (as shown in the animation below). This is done by the kube-proxy running on each Node; the Kubernetes documentation has more technical details.
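A manifest for this Service might look like the following sketch; the selector matches the back-end labels from the example, while the port numbers are illustrative assumptions:

```yaml
# Sketch: the 'backend-service' Service selecting the back-end Pods by label.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:                 # matches Pods labeled tier=backend, app=myapp
    tier: backend
    app: myapp
  ports:
    - port: 80              # port the Service exposes inside the cluster
      targetPort: 8080      # illustrative container port on the back-end Pods
```

With cluster DNS, front-end Pods could then reach the back ends at http://backend-service, and the Service would spread the requests across the matching Pods.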

The following animation illustrates the functionality of a Service. Note that the diagram is considerably simplified. Without going into networking configuration, the underlying networking and routing involved in achieving transparent load balancing is relatively advanced; a more in-depth introduction is available in the Kubernetes documentation if you are interested.
(Animation: a Service transparently load balancing requests across two back-end Pods)
There is a special type of Kubernetes Service called 'LoadBalancer' that acts as an external load balancer, distributing traffic among a number of Pods. It is useful, for example, for load balancing web traffic.
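In a manifest, this is just a matter of setting the Service's type; the sketch below exposes the front-end Pods externally (the Service name is an illustrative assumption, and actual external-IP provisioning depends on the cloud provider):

```yaml
# Sketch: a Service of type LoadBalancer exposing front-end Pods externally.
apiVersion: v1
kind: Service
metadata:
  name: web-service          # illustrative name
spec:
  type: LoadBalancer         # asks the cloud provider for an external load balancer
  selector:
    tier: frontend
    app: myapp
  ports:
    - port: 80
      targetPort: 80
```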

Node

A node (the orange boxes in the figure) is a physical or virtual machine that serves as a Kubernetes worker, formerly known as a Minion. Each node runs the following key Kubernetes components:

  • Kubelet: the node agent that communicates with the Kubernetes Master.
  • Kube-proxy: a network proxy that Services use to route connections to Pods, as described above.
  • Docker or Rocket: the container technology that Kubernetes uses to create containers.

Kubernetes Master

The cluster has one Kubernetes Master (the purple box). The Kubernetes Master provides a unified view of the cluster and hosts a range of components, such as the Kubernetes API Server. The API Server provides REST endpoints that can be used to interact with the cluster. The Master also includes the Replication Controllers used to create and replicate Pods.

The next step

Now that we have covered the basics of the Kubernetes core concepts, you can read the Kubernetes user guide for more; it provides quick and complete documentation for further study.
If you cannot wait to try Kubernetes, you can use Google Container Engine, a hosted Kubernetes container environment. After a simple registration/login, you can try the examples above.

Source: Learn the Kubernetes Key Concepts in 10 Minutes (Translation: Cui Jingwen)
===========================
About the translator: Cui Jingwen is a senior software engineer at IBM, responsible for system testing of IBM WebSphere business process management software. Previously at VMware, working on quality assurance for desktop virtualization products. Strongly interested in virtualization, middleware technology, and business process management.
