Caicloud engineer original | Kubernetes dynamic provisioning and GlusterFS integration

Introduction

In traditional operations, administrators often have to manually allocate space in the storage cluster before it can be mounted into an application. In the latest version of Kubernetes, dynamic provisioning has been promoted to beta and supports dynamic pre-configuration for multiple storage services, making it easy to consume the capacity of the storage environment on demand. This article introduces the dynamic provisioning feature and, using GlusterFS as an example, shows how a storage service is connected to Kubernetes.

Dynamic provisioning

Storage is a very important part of container orchestration. Starting with v1.2, Kubernetes provides dynamic provisioning as a powerful feature that allocates storage for the cluster on demand, and it supports a variety of back ends including AWS EBS, GCE PD, OpenStack Cinder, Ceph and GlusterFS. Storage back ends without official support can also be added by writing a plugin.
Without dynamic provisioning, a volume has to be pre-allocated on the storage side before a container can use it, and this is usually done manually by an administrator. With dynamic provisioning, Kubernetes dynamically creates the required storage by calling the storage service's interface, based on the volume size requested by the container.

StorageClass:

Administrators can configure StorageClasses to describe the types of storage offered. Taking AWS EBS as an example, an administrator can define two StorageClasses, slow and fast: slow maps to sc1 (mechanical hard disks) and fast maps to gp2 (solid-state drives). Applications can then choose between the two classes according to the performance requirements of the workload.
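For reference, the two classes could look roughly like the following (a minimal sketch using the kubernetes.io/aws-ebs provisioner; the class names and parameters are illustrative):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1        # EBS cold HDD (mechanical disk)
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2        # EBS general-purpose SSD

A PVC then selects one of the classes by name, and the provisioner creates an EBS volume of the matching type.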

GlusterFS:

An open-source distributed file system with strong horizontal scalability: it can scale out to petabytes of capacity and serve thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages the data under a single global namespace.

Heketi:

Heketi ( https://github.com/heketi/heketi ) is a RESTful API-based GlusterFS volume management framework.
Heketi integrates easily with cloud platforms and exposes a RESTful API for Kubernetes to call, enabling volume management across multiple glusterfs clusters. In addition, heketi ensures that bricks and their replicas are spread evenly across the different zones in the cluster.

Deploy dynamic provisioning based on GlusterFS
1. Install GlusterFS. See the official documentation; the details are not repeated here.
2. Deploy heketi. This article deploys heketi in containerized form; the heketi yaml is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heketi
  labels:
    app: heketi
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: heketi
    spec:
      containers:
      - name: heketi
        image: caicloud/heketi
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /etc/heketi
          name: heketi-volume
        - mountPath: /root/.ssh
          name: ssh-volume
      volumes:
      - name: ssh-volume
        hostPath:
          path: /root/.ssh        # this node must be able to ssh to the other glusterfs nodes
      - name: heketi-volume
        hostPath:
          path: /root/heketi
      nodeName: {{heketi_node}}   # pinned to a specific node

After heketi starts up successfully, the configuration of the glusterfs cluster needs to be loaded, either through the REST API with curl or with the heketi client, e.g. ./heketi-cli load --json=new-cluster.json. new-cluster.json defines the IP of each node, the available partitions and other information about the glusterfs cluster. A typical configuration example: https://github.com/heketi/heke … .json
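A minimal sketch of what such a topology file might contain (the addresses, zone and device names below are placeholders):

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.1.101"],
              "storage": ["192.168.1.101"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}

Here manage is the address heketi uses to reach the node over ssh, storage is the address used for gluster data traffic, and devices lists the raw partitions heketi is allowed to format.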

Precautions:
Heketi needs a private key that allows passwordless ssh to all nodes of the glusterfs cluster. Heketi will format the specified partitions in the glusterfs cluster and call pvcreate and lvcreate to assemble them into volume groups.

3. Deploy the StorageClass

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-rep3
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.1.111:8081"            # heketi address; a domain name also works
  clusterid: "630372ccdc720a92c681fb928f27b53f"   # optional, id of the glusterfs cluster to use
  restuser: "admin"                               # optional, authentication user name
  secretNamespace: "default"                      # optional, namespace of the secret holding the authentication password
  secretName: "heketi-secret"                     # optional, name of the secret holding the authentication password
  gidMin: "40000"                                 # optional, minimum usable gid; each gluster volume gets a unique gid
  gidMax: "50000"                                 # optional, maximum usable gid
  volumetype: "replicate:3"                       # optional, glusterfs volume type; the number is the replica count

Here volumetype is set to replicate:3. In the same way, other volume types can be defined, such as disperse:4:2, which uses dispersed (erasure-coded) volumes with 2 redundancy bricks for every 4 data bricks. volumetype: none uses a plain distributed volume by default.
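If the restuser / secretName parameters above are used, the referenced secret must already exist in the given namespace. A minimal sketch of such a secret (the key value is a placeholder base64-encoded heketi admin password; the GlusterFS provisioner expects the secret type kubernetes.io/glusterfs):

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  key: bXlwYXNzd29yZA==   # base64 of the heketi admin password (placeholder)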

4. Create a PVC (PersistentVolumeClaim), specify the StorageClass, and declare the required storage size.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-10g
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-rep3   # specifies the storageclass
spec:
  accessModes:
  - ReadWriteMany        # can be mounted read-write by multiple pods
  resources:
    requests:
      storage: 10Gi      # request 10Gi of storage

After the PVC is created, Kubernetes calls heketi's volume-create API, and heketi checks the free space in the glusterfs cluster. This article specifies the glusterfs-rep3 storage class, so three nodes each need at least 10G of available disk space. If the conditions are met, Kubernetes creates a PV (PersistentVolume) of the corresponding size and binds it to the PVC; otherwise the PVC stays in the pending state.
This article uses three-replica gluster storage, but an application can switch to any other type of storage simply by changing the class name in the PVC annotation, which is very convenient.
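For example, assuming a second StorageClass named glusterfs-dis42 had been created (hypothetical, with volumetype "disperse:4:2"), the same claim would land on erasure-coded storage by changing only the annotation:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-10g-dis
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-dis42   # hypothetical disperse class
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi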

5. Verification
Once the PVC is bound to a PV, applications can use it as storage. We can create a debian pod to verify this; the yaml is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: gluster-tester
spec:
  containers:
  - name: glusterfs
    image: cargo.caicloud.io/caicloud/debian:jessie
    volumeMounts:
    - mountPath: "/mnt/gluster"
      name: test-vol
    args:
    - tail
    - "-f"
    - "/dev/null"
  volumes:
  - name: test-vol
    persistentVolumeClaim:
      claimName: gluster-pvc-10g

After entering the container's terminal with kubectl exec, the 10G glusterfs volume is available under the /mnt/gluster directory. Because gluster-pvc-10g is ReadWriteMany (it can be mounted read-write by multiple pods), it can also be mounted by other applications at the same time to share data.

Summary

As you can see, an application that needs storage only has to declare the required size and specify a StorageClass; Kubernetes then dynamically provisions storage of the corresponding size, and the application does not need to care about the underlying storage details.
Finally, glusterfs / heketi related yaml and deployment instructions can be found at https://github.com/heketi/heke … netes .
