How to deploy Kubernetes cluster to OpenStack with Ansible

This article was translated by "Only Cloud Technology". If reproduced, it must be credited as reproduced from "Only Cloud Technology".

Previously we talked about running OpenStack on K8s and how TCP Cloud builds a Kubernetes-based private cloud solution, used in production for OpenStack virtualization. Today, for a change of pace, let's look at how to run a Kubernetes cluster on OpenStack virtual machines.

I was impressed by the number of people interested in containers at the recent Austin OpenStack Summit. Nearly every container-related session highlighted their advantages. By containerizing an application, you virtualize the host operating system: each container gets an isolated environment within the host OS — its own file system, network stack, and process space — so containers are invisible to one another. In addition, containers are lightweight and portable, not only across operating systems but across clouds. These capabilities let developers build, deploy, ship, and scale applications at a speed that is not possible in virtual machine environments.
At the summit, I had the privilege of presenting our team's work to the audience, which was a great help to us. You can view my presentation here: https://www.openstack.org/vide … sible
This project set out to build an automated Kubernetes deployment in our OpenStack Kilo environment. In this blog post, I will describe our solution and give an overview of the code repository on GitHub. You can use parts of the code to automate the deployment of your own Kubernetes cluster. Remember that the software has only been tested in a development environment; for any production deployment, make sure you do the usual due diligence.
The first question to answer is: what are Kubernetes and Ansible, and why choose them? Kubernetes (K8s) is a platform for orchestrating and managing Docker containers through API calls. Beyond basic orchestration, it has control processes that continuously drive the cluster toward the desired state specified by the user. When using this platform, you group your application containers into a composite unit called a pod. A pod is a group of containers that share networking and storage. When you create Docker containers, by default each container gets its own network namespace, i.e. its own TCP/IP stack. Kubernetes uses Docker's -net="<container-name>|<container-id>" setting to combine the network namespaces of all of a pod's containers. This setting allows a container to reuse another container's network stack. K8s accomplishes this by creating a pod-level holding container with its own network stack, and all of the pod's containers are configured to reuse the holding container's network namespace.
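To make the shared network namespace concrete, here is a minimal pod manifest; the pod name and images are purely illustrative, not from this project. Both containers share one IP address, so the sidecar can reach nginx over localhost:

# Illustrative pod: two containers sharing one network namespace
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                # hypothetical name
spec:
  containers:
  - name: web
    image: nginx               # serves on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox             # shares web's network stack
    # reaches nginx at localhost because both containers share one TCP/IP stack
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
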
At the pod level, Kubernetes provides services such as scheduling, replication, self-healing, monitoring, naming/discovery, identity, authentication and authorization, and more. Kubernetes also has a plugin model that lets developers write their own modules and build services on top of the platform. As this blog post shows, Kubernetes is among the most advanced open-source platforms for orchestrating and managing Docker containers.
We chose Ansible because it is one of the hottest, most straightforward, and easiest-to-use automation platforms. It runs agentless: it logs into systems over SSH and executes the policies you describe in playbook files. These policies are modeled as lists of tasks in YAML format. Without automation, these would be the manual tasks an administrator performs to deploy infrastructure software.
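To give a flavor of these YAML policies, here is a minimal, hypothetical playbook (the host group name is illustrative, not from the repo) that installs and starts Docker on a group of Ubuntu hosts:

---
- hosts: k8s_nodes             # hypothetical inventory group
  become: yes
  tasks:
    - name: Install Docker from the Ubuntu archive
      apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: Ensure the Docker service is running
      service:
        name: docker
        state: started
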
This blog post describes running a Kubernetes cluster on OpenStack virtual machines. A K8s cluster has a master node, which runs the API server, and a set of worker nodes, which run the pod containers. The setup uses Ansible (>= 2.0), Ubuntu, and Neutron networking, and was tested against the OpenStack Kilo release. Ansible deploys the K8s software components, launches the virtual machines, classifies them into master and worker nodes, and then deploys the Kubernetes manifests. We use Neutron to provide network connectivity for both the OpenStack virtual machines and the K8s pod containers. All virtual machines in the test environment run the Ubuntu 14.04 server operating system.
The figure below shows the software components in play and how they interact in the cluster. I will use this diagram as a reference while describing the automation; keeping the block diagram in mind as you read should make everything fall into place.

[Figure: software components in the cluster and how they interact]

Setup

This setup assumes you already have an OpenStack cloud running core services such as Nova, Neutron, Glance, and Keystone. You also need Ansible 2.x on a machine with credentials and SSH network connectivity to the compute nodes and virtual machines. The Ansible node also needs access to the OpenStack APIs. I installed Ansible on my MacBook with these commands:

sudo easy_install pip
sudo pip install ansible

After installing Ansible, verify the installation with the command ansible --version. It should report a 2.x release.

Kubernetes Cluster Deployment

The automated cluster configuration is driven by three Ansible playbooks. You can grab the playbooks, templates, and code here: https://github.com/naveenjoy/microservices . The three playbooks are:

· launch-instances.yml – launches the Kubernetes cluster instances
· deploy-docker.yml – deploys Docker on all of the cluster instances
· deploy-kubernetes.yml – deploys the Kubernetes control and worker software components and brings up the cluster

All of the playbooks read their input variables from a file named settings.yml, referred to as the settings file. A node dictionary in the settings file specifies the names of the cluster nodes and their metadata (also called tags), which are injected into the nodes when the instances are launched. When the playbooks run, the cloud inventory script ( https://github.com/naveenjoy/m … ry.py ) uses these tags to classify nodes as masters or workers. For example, a node whose ansible_host_groups tag equals k8s_master is classified as a master node, while a tag value of k8s_worker classifies it as a worker. The settings file also includes a dictionary named os_cloud_profile, which provides Ansible with the Nova settings for launching the virtual machines. To launch the instances, run the playbook as follows:

ansible-playbook -i hosts launch-instances.yml
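
For reference, a settings.yml along these lines would drive the playbooks; the key names below are a sketch based on the description above, so consult the repo for the actual schema:

# Hypothetical sketch of settings.yml -- see the repo for the real schema
nodes:
  master1:
    ansible_host_groups: k8s_master   # tag read by the inventory script
  node1:
    ansible_host_groups: k8s_worker
  node2:
    ansible_host_groups: k8s_worker
os_cloud_profile:
  image_name: ubuntu-14.04-server     # Glance image (illustrative)
  flavor_name: m1.medium              # Nova flavor (illustrative)
  key_name: my-keypair                # Nova keypair (illustrative)
  network_name: private-net           # Neutron tenant network (illustrative)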

If everything goes well, you will see all of the Nova instances created in the OpenStack cloud. These instances provide the underlying infrastructure for the K8s cluster. Once the instances are up, run the remaining playbooks to deploy Docker and Kubernetes. While the playbooks run, the inventory script named inventory.py classifies the nodes so that the control and worker components are deployed to the correct virtual machines.
Run the playbooks as follows:

ansible-playbook -i scripts/inventory.py deploy-docker.yml
ansible-playbook -i scripts/inventory.py deploy-kubernetes.yml
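
Ansible dynamic inventory scripts print host groups as JSON on stdout, so given the tags above, inventory.py would emit output along these lines (host names and addresses illustrative):

{
  "k8s_master": { "hosts": ["master1"] },
  "k8s_worker": { "hosts": ["node1", "node2"] },
  "_meta": { "hostvars": { "master1": { "ansible_host": "10.0.0.5" } } }
}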

The K8s control plane — the API server, scheduler, etcd database, and kube-controller-manager — is deployed on the master via a manifest file. The file is named master-manifest.j2 and can be found in the templates folder. The version of the K8s control plane software is set in the settings file. The deploy-kubernetes.yml playbook first downloads and installs the kubelet and kube-proxy binaries and starts those two services on all nodes. Then the master-manifest template is rendered on the master node into a config directory named /etc/kubernetes/manifests. The kubelet daemon watches this directory and starts all the Docker containers that provide the control plane services. When you run the docker ps command on the master node, you will see the kube-apiserver, kube-controller-manager, etcd, and kube-scheduler processes running in their own containers.
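The authoritative template is master-manifest.j2 in the repo; as a rough sketch, a kubelet static-pod manifest for the API server takes this general shape (image, version, paths, and flags are all illustrative):

# Illustrative static pod manifest -- the real master-manifest.j2 will differ
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  hostNetwork: true                 # control plane pods use the node's network
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.2.0   # version comes from settings.yml
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:4001
    - --service-cluster-ip-range=10.254.0.0/16         # illustrative cluster_cidr
    - --secure-port=443
    - --tls-cert-file=/srv/kubernetes/server.crt
    - --tls-private-key-file=/srv/kubernetes/server.key
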
The API server is configured to serve the API over HTTPS. The certificates required for TLS are generated by running the make-ca-cert.sh script as one of the playbook tasks. The script generates the following certificates in the certs directory on every node. They are generated on every node because the Docker daemon is also configured for TLS with the same server certificate. The certs directory path is configurable in the settings file.

· ca.pem – self-signed CA certificate
· server.crt / server.key – signed kube server certificate and its key file. The Docker daemon also uses this cert to secure client access.
· cert.pem / key.pem – signed client certificate and its key file, used by the kubectl and Docker clients.
On the client side, you can find these certs in the certs folder of the repo. A Docker environment file is created for each node using the convention <nodename>.env. You can source a node's environment file and then run the Docker client against that node's Docker daemon. For example, to run Docker commands against the master node named master1, first execute "source master1.env" and then run the commands. Similarly, for the kubectl client, a config file is created with the necessary credentials and the cluster master's IP address. The config file lives at $HOME/.kube/config, so you can run kubectl commands against the cluster from your terminal window.
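The generated kubeconfig follows the standard format; a hypothetical $HOME/.kube/config for this setup (addresses and cert paths illustrative) looks like this:

# Illustrative kubeconfig -- addresses and cert paths depend on your settings
apiVersion: v1
kind: Config
clusters:
- name: k8s-cluster
  cluster:
    server: https://172.16.1.10:443     # master node IP (illustrative)
    certificate-authority: certs/ca.pem
users:
- name: admin
  user:
    client-certificate: certs/cert.pem
    client-key: certs/key.pem
contexts:
- name: default
  context:
    cluster: k8s-cluster
    user: admin
current-context: default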


Next, I will describe how we use the OpenStack Neutron service to provide networking for the K8s pods. This is somewhat similar to the GCE setup. Alternatively, you could choose an option such as Flannel, which uses UDP encapsulation over the existing Neutron tenant network to create an overlay network for routing pod traffic. Using Neutron for the pod network removes the need for that overlay.
It is important to note that in K8s each pod (that is, a group of containers) has its own IP address. This differs from Docker's default networking model, in which each container gets a host-private IP address. For K8s networking to work, pod IP addresses must be routable without NAT. This means two things:
a) When a pod container communicates with another pod's containers, the traffic must flow directly, without NAT.
b) When a pod container communicates with a virtual machine's IP address, the traffic must flow directly, without NAT.
To accomplish this, the first step is to replace the default Docker bridge named docker0 on each node with a Linux bridge named cbr0. Across all nodes, a block of IP space is allocated for the pod network, say a /16. This block is then carved up into per-node pod subnets, and a node-to-subnet mapping is recorded in the settings file. In the diagram above, I allocated 10.1.0.0/16 to the pod network and created the following mapping:

Node1: 10.1.1.1/24
Node2: 10.1.2.1/24
…
NodeN: 10.1.n.1/24

The create-bridge.sh script creates cbr0 and configures it with the IP address of the node's pod subnet, using the mapping defined in the settings file.
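As a rough illustration of what that script does (create-bridge.sh in the repo is the authoritative version), an equivalent Ansible task for node1's mapping might look like this:

# Illustrative only -- the repo's create-bridge.sh performs the real setup
- name: Create cbr0 and assign node1's pod subnet address
  become: yes
  shell: |
    brctl addbr cbr0                 # requires the bridge-utils package
    ip addr add 10.1.1.1/24 dev cbr0
    ip link set dev cbr0 up
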
The second step is to configure the tenant router to route traffic to the pod subnets. For example, in the block diagram above, the tenant router must be configured with a route to pod subnet 10.1.1.0/24 whose next hop is node #1's interface address on private-subnet #1. Similarly, a route must be added for every node in the cluster so that traffic is routed to each node's slice of the pod network. This step is completed by the add_neutron_routes.py script.
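Conceptually, each route the script programs is equivalent to a Neutron CLI call like the one below (router name and next-hop address are illustrative; note that --routes replaces the whole route list, so all node routes must be supplied together):

# Conceptual equivalent of one route programmed by add_neutron_routes.py
- name: Route node1's pod subnet via its private-subnet address
  command: >
    neutron router-update tenant-router
    --routes type=dict list=true
    destination=10.1.1.0/24,nexthop=192.168.1.11
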
The third step is to add an iptables rule that masquerades traffic leaving the pod subnet for outbound connections. This is needed because the Neutron tenant router does not know that it must SNAT traffic originating from the pod subnet.
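A masquerade rule along these lines does the job, using the example pod network from above; the playbook's actual rule may differ slightly:

# Illustrative SNAT rule: masquerade pod-sourced traffic bound for the outside
- name: Masquerade outbound traffic from the pod subnet
  become: yes
  command: >
    iptables -t nat -A POSTROUTING
    -s 10.1.0.0/16 ! -d 10.1.0.0/16
    -j MASQUERADE
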
The last step is to enable IP forwarding in the Linux kernel on each node so that packets can be routed to the bridged container network. All of these tasks are performed by the deploy-kubernetes.yml playbook.
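Enabling forwarding is a one-line kernel setting; with Ansible's sysctl module it can be expressed like this:

- name: Enable IP forwarding so each node routes packets to cbr0
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: yes
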
The end result of running this playbook is a Neutron network programmed to route traffic to and from the pods.
Note: By default, as an anti-spoofing security measure, Neutron installs iptables firewall rules on the hypervisor to police traffic on virtual machine ports. So when traffic from the pod network flows through a virtual machine's port, it gets filtered by the hypervisor firewall. Fortunately, a Neutron extension called AllowedAddressPairs, available since the Havana release, allows the pod subnets to pass through the hypervisor firewall.
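Conceptually, the fix is to add each node's pod subnet as an allowed address pair on that node's Neutron port, for example (the port ID placeholder is illustrative):

# Illustrative: let pod-subnet traffic pass the anti-spoofing filter on node1's port
- name: Allow node1's pod subnet on its Neutron port
  command: >
    neutron port-update <node1-port-id>
    --allowed-address-pairs type=dict list=true
    ip_address=10.1.1.0/24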

Exposing Pod Services

For practical purposes, every pod must be fronted by a service abstraction. The service provides a stable IP address at which to reach the application running inside the pod's containers. This is because pods can be scheduled on any node and may receive any IP address from the allocated node_pod_cidr range. Likewise, when you scale pods up or down to handle traffic changes, or when failed pods are recreated by the platform, their IP addresses change. The service abstraction guarantees that, from a client's point of view, the IP address for the pods stays fixed. It is important to note that the service CIDR, cluster_cidr, lives only within each node and does not need to be routed by the Neutron tenant router. Traffic to a service IP is distributed to the backing pods by kube-proxy, which is usually implemented with iptables rules on each node.
This stable service IP can be exposed externally using Kubernetes' NodePort capability. NodePort uses the worker node's IP address and a high TCP port, such as 31000, to expose the service's IP address and port to the outside world. So if you assign a floating IP to the node, the application will serve traffic at that IP and node port. If you use a Neutron load balancer, add the worker nodes as members and program the VIP to distribute traffic to the node port. This approach is shown in the block diagram above.
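A hedged example of such a service definition, with an illustrative name, selector, and port assignment:

# Illustrative NodePort service exposing pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web-service              # hypothetical service name
spec:
  type: NodePort
  selector:
    app: web                     # matches the pods backing this service
  ports:
  - port: 80                     # port on the stable service IP
    targetPort: 80               # container port inside the pod
    nodePort: 31000              # high TCP port opened on every worker node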

Service Discovery

Service discovery can be fully automated using the cluster DNS add-on, which you can deploy with the skydns-manifest and skydns-service. K8s automatically assigns a DNS name to every service defined in the cluster, so a program running in a pod can query the cluster DNS server to resolve a service's name and location. The cluster DNS service supports both A and SRV record lookups.
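For example, a service named web-service in the default namespace gets a name like web-service.default.svc.cluster.local (assuming the common cluster.local domain). A quick, illustrative way to verify resolution from inside the cluster is a throwaway busybox pod:

# Illustrative pod for testing cluster DNS resolution
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "nslookup web-service.default.svc.cluster.local; sleep 3600"]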

Conclusion

I hope this blog post explains how to use Ansible to create a Kubernetes cluster on OpenStack.

Please click the original link to view the source article: http://blogs.cisco.com/cloud/d …% 23rd

To learn more about Kubernetes, follow the Caicloud official account.

