Practice of Microservice Architecture Based on a Container Cloud

The birth of the microservice architecture and the rise of container technology happened almost simultaneously, and this is no accident: both are the Internet era's response to traditional technology and architecture, and container technology, represented by Docker, provides a matching implementation mechanism for the microservice concept. The author explains what microservices are and the advantages of the microservice architecture, and then presents a container cloud practice of the microservice architecture drawn from his own experience.

In recent years, microservice architecture and container technology have attracted wide attention, appearing frequently in articles, talks, and blogs and becoming one of the most popular topics in the industry. Behind the trendy vocabulary and enthusiastic discussion, people have begun to seriously rethink service architecture and the way applications are developed and operated in the Internet era. Microservices, as a new architectural design pattern, are changing the methodology of Internet applications across the entire process from design to operation. Container technology, represented by Docker, provides a matching implementation mechanism for the microservice concept and is fundamentally changing how a new generation of applications is developed and delivered.

What is a microservice architecture?

Microservice architecture is an architectural style and design pattern that advocates dividing an application into a set of small services. Each service focuses on a single business capability, runs in its own process, and has a clearly defined boundary; services communicate with one another through lightweight mechanisms (such as HTTP/REST) and together form a complete application that meets business and user needs.
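As a rough illustration of this definition, the sketch below shows a single-purpose service exposing one business function over HTTP/REST. It is a minimal example for illustration only; the route, data, and use of Flask are assumptions, not part of any particular platform.

```python
# A minimal sketch of a microservice that owns a single business function
# and exposes it over HTTP/REST. Route and data are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the service's own data store.
ORDERS = {1: {"id": 1, "item": "book", "status": "shipped"}}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # one service, one process
```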

The microservice architecture, as a change in architectural style, did not arise by accident. It is the result of rethinking architectural patterns, development, and operations methodology as traditional service architectures face the challenges of the Internet era. Before discussing the microservice architecture in depth, let us first look at the more common traditional service architecture.

Traditional "monolithic architecture":

Over the past ten years, and even today as microservices grow in popularity, the vast majority of applications have been built with the familiar traditional structure known as the "monolithic architecture". Such systems are typically layered logically, for example into the presentation layer, business logic layer, and data layer of the classic "layered architecture". The business logic can be further modularized into logical components according to specific business responsibilities and functions. It is worth noting that although a "layered architecture" has logical modules and components, at the level of physical deployment it is still a single block: the application is usually compiled, packaged, deployed, and operated as a whole. "Monolithic architecture" describes the application architecture from the perspective of physical deployment, and the "layered architecture" is one instance of it.

"Hierarchical architecture" is a classic model in the software architecture, but also a long time to apply the actual standards of the architecture. The monolithic architecture also has its own advantages, embodied as:

  • Easy to develop: many commonly used integrated development environments (IDEs) and programming frameworks (such as Rails and Django) are designed around single applications under the traditional architecture. These tools give developers a convenient and familiar development and debugging experience;
  • Easy to test: because the entire application runs in one process, it can easily be started in a development or test environment, and UI automation tools (such as Selenium) can then be used to implement end-to-end tests (see the sketch after this list);
  • Easy to deploy: most programming languages and frameworks have a specific application package format, so deployment is simply copying a single package to the runtime environment, and this process can be automated with existing tools.
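A minimal sketch of such an end-to-end UI test, assuming Selenium with a Chrome driver is available; the URL, form field names, and expected page title are hypothetical.

```python
# End-to-end test of a monolithic web app through its UI (hypothetical app).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8000/login")           # hypothetical app URL
    driver.find_element(By.NAME, "username").send_keys("demo")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title                   # expected landing page
finally:
    driver.quit()
```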

Because of these advantages, the monolithic architecture is attractive in the early stages of a project. Developers can quickly produce an application prototype with tools and frameworks without spending much effort on service decomposition or distributed architecture design. However, as the business expands and features accumulate, a once simple application quickly grows in size, and a monolithic structure then struggles to keep up with rapidly changing requirements. Due to the limitations of this architecture, such applications face several challenges.

  • Low development efficiency: as the application grows more complex, fewer and fewer developers understand it deeply. The cost of developing new features and fixing defects grows steeply, and the correctness of code changes becomes hard to guarantee. A huge code base also requires a larger development team to maintain, silently adding management, communication, and coordination costs. In addition, new team members need to spend a lot of time and effort becoming familiar with a complex code base.
  • Long delivery cycle: in the single process of a monolithic architecture, any minor change requires recompiling, integrating, testing, and deploying the entire application. As the application grows, the delivery process and the feedback cycle become correspondingly longer and the cost of each release increases. The delivery cycle therefore slows down, changes accumulate between releases, the next delivery comes under even greater pressure, and a vicious circle forms.
  • Difficulty in technology upgrades: a single process and a single deployment unit mean that technology choices are centralized. The application's different logical components usually have to share the same programming language, framework, and technology stack, which are fixed at the start of the project. Afterwards, even newly added logical components find it hard to adopt a different stack, and once the application reaches a certain size, a global technology stack upgrade carries a high risk. As a result, a monolithic application finds it difficult to benefit from the dividends of technological change in the industry.

Because of these structural and systemic problems, applications built on the monolithic architecture find it increasingly difficult to adapt to the rapidly changing market demands of the Internet era. Microservices drive a change, at the architecture level, in how traditional applications are developed and operated, helping enterprises respond quickly to market demand, iterate rapidly, deliver quickly, and stay competitive in the Internet era.

Advantages of the microservice architecture:

In the microservice architecture, we decompose a single application along its functional boundaries into a series of independent, focused microservices. Each microservice corresponds to a component in a traditional application, but can be compiled, deployed, and scaled independently. Compared with the monolithic architecture, microservices have the following advantages:

  • Controlled complexity: decomposing the application prevents complexity from accumulating endlessly in one place. Each microservice focuses on a single function and expresses its service boundary clearly through a well-defined interface. Because each service is small and of low complexity, it can be fully owned by a small development team, making it easy to keep maintainability and development efficiency high;
  • Independent deployment: since each microservice runs in its own process, it can also be deployed independently. When a microservice changes, there is no need to compile and deploy the entire application. An application composed of microservices effectively has a series of parallel release pipelines, which makes releases more efficient, reduces risk to the production environment, and ultimately shortens the application delivery cycle.
  • Flexible technology choices: under the microservice architecture, technology choices are decentralized. Each team can choose the most suitable technology stack according to the needs of its own service and the state of the industry. Because each microservice is relatively simple, it is also feasible to completely rewrite a microservice at low risk when the technology stack needs to be upgraded.
  • Fault tolerance: in a traditional single process, when a component fails, the fault is likely to spread within the process and make the whole application unavailable. In the microservice architecture, a fault is isolated within a single service; with good design, other services can achieve application-level fault tolerance through mechanisms such as retries and graceful degradation (see the sketch after this list).
  • Scalability: a monolithic application can also scale horizontally by copying the whole application to different nodes, but when the application's components differ in their scaling needs, the microservice architecture shows its flexibility, because each service can be scaled independently according to actual demand.
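A minimal sketch of the fault-tolerance point above, showing client-side retries with backoff followed by graceful degradation to a default value; the service URL, endpoint, and fallback are hypothetical.

```python
# Retry a call to another microservice, then degrade gracefully on failure.
import time
import requests

def get_recommendations(user_id, retries=3, backoff=0.5):
    # Hypothetical downstream service reached over HTTP/REST.
    url = f"http://recommendation-service/users/{user_id}/recommendations"
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return []  # degrade gracefully: empty result instead of failing the caller
```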

Container Cloud Practice of the Microservice Architecture

While the microservice architecture offers many advantages, it must be acknowledged that building, deploying, and maintaining a distributed microservice system is not easy. Containers provide a lightweight, application-oriented virtualized runtime environment and are an ideal carrier for microservices. A container cloud built on container technology greatly simplifies the entire process of creating, integrating, deploying, and operating containerized microservices, and thus drives the adoption of microservices in the cloud. The following uses Alauda Cloud as an example to illustrate the practice of each stage:

Create

create.png

The image build and continuous integration services help users package independent, reusable microservices into container images that can be deployed at any time. Suppose a user's microservice code is hosted on a service such as GitHub. The user can build the code repository into a container image and save it in the image registry, from which it can be deployed to the container cloud platform. Alauda Cloud also provides an optional continuous integration feature: whenever the code of a microservice changes, a new container image is built automatically for later deployment.
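As a rough illustration of what such a build-and-push step does under the hood, here is a minimal sketch using the Docker SDK for Python; the local path, registry address, and image tag are hypothetical placeholders, not the platform's actual configuration.

```python
# Build an image from a checked-out repository and push it to a registry.
import docker

client = docker.from_env()

# Build from the Dockerfile in the local clone of the repository.
image, build_logs = client.images.build(
    path="./user-service",                                   # hypothetical path
    tag="registry.example.com/demo/user-service:latest",     # hypothetical tag
)

# Push the image so it can be deployed later from the registry.
for line in client.images.push(
    "registry.example.com/demo/user-service",
    tag="latest", stream=True, decode=True,
):
    print(line)
```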

Integrate

integrate.png

Alauda Cloud not only hosts a large number of high-quality images from the official Docker sources and the community in the platform's image registry, but also supports images from any source outside the platform. Users can freely combine and reuse tens of thousands of containerized microservices, assembling applications as easily as stacking building blocks. For example, a user who needs a common MySQL database service does not have to build an image: the appropriate database service image can be selected directly from the image community and linked with the user's own microservice.
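A minimal sketch of a microservice consuming such a linked MySQL service. The environment variable names, credentials, and defaults are hypothetical; in practice the platform would inject the real connection details for the linked service.

```python
# Connect to a linked MySQL service using injected connection settings.
import os
import pymysql

conn = pymysql.connect(
    host=os.environ.get("MYSQL_HOST", "mysql"),          # hypothetical env vars
    port=int(os.environ.get("MYSQL_PORT", "3306")),
    user=os.environ.get("MYSQL_USER", "app"),
    password=os.environ.get("MYSQL_PASSWORD", ""),
    database=os.environ.get("MYSQL_DATABASE", "app"),
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()
```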

Deploy
deploy.png
Because a microservice application has many components, deployment in the cloud is a practical difficulty. On the platform, users do not need to follow the cumbersome steps of traditional deployment; they simply provide a container image and a simple container configuration, and the platform automates the entire deployment process.
Alauda-compose.png
Alauda Cloud is also compatible with docker-compose, enabling one-click deployment of a complete application consisting of multiple microservice containers.
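A minimal sketch of what "image plus simple configuration" can look like, expressed with the Docker SDK for Python purely for illustration; the image name, port mapping, and environment values are hypothetical, and the platform applies this kind of configuration for you.

```python
# Run one microservice container from an image plus a small configuration.
import docker

client = docker.from_env()

container = client.containers.run(
    "registry.example.com/demo/user-service:latest",  # hypothetical image
    detach=True,
    ports={"5000/tcp": 8080},                          # expose the service port
    environment={"APP_ENV": "production"},
    restart_policy={"Name": "always"},                 # restart on failure
)
print(container.short_id, container.status)
```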

Operation and maintenance

Because a microservice application consists of many independent processes, operation and management after deployment become another practical difficulty. Alauda Cloud completely shields users from operating the underlying cloud hosts and infrastructure, allowing them to focus on the application itself. At the same time, through advanced application lifecycle services such as container orchestration, automatic repair, automatic scaling, and monitoring and logging, the platform provides intelligent hosting for containerized microservices and further reduces the cost and difficulty of operations.
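To make the automatic-repair idea concrete, here is a deliberately simplified sketch using the Docker SDK for Python: find unhealthy containers and restart them. This only illustrates the concept; it is not the platform's actual orchestration logic, and the poll interval is an arbitrary choice.

```python
# Conceptual auto-repair loop: restart containers reporting an unhealthy state.
import time
import docker

client = docker.from_env()

while True:
    for container in client.containers.list(filters={"health": "unhealthy"}):
        print(f"restarting unhealthy container {container.name}")
        container.restart()
    time.sleep(30)  # poll interval (illustrative)
```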

Network

Under the microservice architecture, communication between components places higher demands on the network, especially in cloud practice, where the physical location of microservice components is dynamic and not under the application's control. Alauda Cloud provides a complete container network solution, supporting load balancing, service discovery, cross-host linking, and a secure application intranet, to ensure that microservices are available and secure for both internal and external networks.
network1.png

  1. First, load balancing is essential for high availability of services. Alauda Cloud supports load balancing at both the transport layer and the application layer to meet different user needs.
  2. Load balancing also enables service discovery. For services deployed in the cloud, the physical location of each component may change at any time. When a user creates a microservice, whether the service is stopped or running, we create a load balancer and a domain name for it, so that other services can reach it through that domain name (see the sketch after this list). Even if a container instance of the service is migrated, the system mounts it back onto the original load balancer after it restarts.
  3. Cross-host linking means that container instances of a microservice are deployed on different cloud hosts but are all attached to the service's load balancer, serving requests from the intranet or the external network.
  4. The internal service address is an important feature for many microservice applications. For example, within an application, a microservice may need to access a cache server (such as memcached), but for security reasons external requests should not reach that cache server; the internal service address covers this case. The system still creates a load balancer and a domain name, but the domain name is accessible only to the user's other services, not to external applications or to other users' services.
  5. A dedicated IP is a recently added feature. Some users, for special needs, do not want to share an IP with other users; they can apply for a dedicated IP and bind it to their application for better isolation.
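A minimal sketch of point 2 above: the caller reaches another microservice only through its stable, load-balanced domain name, regardless of where the container instances currently run. The environment variable and domain names are hypothetical, not the platform's actual naming scheme.

```python
# Call another microservice through its stable, load-balanced domain name.
import os
import requests

# Hypothetical load-balanced entry point of the order service.
ORDER_SERVICE = os.environ.get(
    "ORDER_SERVICE_URL", "http://order-service.example-app.example.com"
)

resp = requests.get(f"{ORDER_SERVICE}/orders/42", timeout=2)
print(resp.status_code, resp.json())
```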

Storage

Microservices encourage polyglot persistence: each microservice in an application can choose the most suitable data service according to its actual needs. Microservices generally fall into two categories, stateless services and stateful services. Stateless services, such as application servers, usually do not store data, which makes them easy to scale horizontally. Stateful services, such as database services and cache services, need to store data.
storage.png
Because of the nature of Docker, data inside a container is not persistent by itself; a volume must be mounted to persist data. Alauda Cloud abstracts persistent cloud storage into data volumes, which can be mounted directly into containers and are automatically remounted when a container restarts or is migrated. This allows any containerized data service to be integrated into a microservice application. The platform also supports backing up, restoring, and downloading microservice data, so a backup can be used to restore data at any time.
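A minimal sketch of running a stateful service with a persistent data volume, again using the Docker SDK for Python only for illustration; the volume name, image, and password are hypothetical, and the platform manages the cloud-backed volumes itself.

```python
# Run a database container with a named volume so its data survives restarts.
import docker

client = docker.from_env()

# Create (or reuse) a named volume for the database files.
client.volumes.create(name="mysql-data")

client.containers.run(
    "mysql:5.7",
    detach=True,
    environment={"MYSQL_ROOT_PASSWORD": "change-me"},   # hypothetical secret
    volumes={"mysql-data": {"bind": "/var/lib/mysql", "mode": "rw"}},
)
```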

The birth of the microservice architecture and the rise of container technology happened almost simultaneously, and this is no accident. It is the Internet era's response to traditional technology and architecture, and front-line developers and their Internet companies are the first to feel the change. Alauda Cloud hopes to work with developers to lead this change, helping Internet companies truly focus on their core business while staying ahead in technology and architecture.

About the Author:

Chen Kai officially joined Alauda in 2015 as Chief Technology Officer. Drawing on ten years of experience developing large-scale, enterprise-grade distributed systems and cloud platforms, he is building a developer-oriented cloud computing platform based on container technology. Before joining Alauda, he worked at Microsoft from 2004 on the Windows operating system kernel, and from 2010 served as chief architect / software development manager for Windows Azure, Microsoft's cloud platform, specializing in cloud computing and distributed system development. He led the team that developed Azure's core management system (Fabric Controller), which manages and supports the back end of the entire cloud platform and carries applications at a scale of millions.
