How can containers transform financial industry IT?
Traditional finance runs on a complex IT infrastructure. How can it respond quickly to user needs, get new business online faster, and shorten the product iteration cycle?
Drawing on two years of practice landing container-based financial clouds, Shurenyun (数人云) has brought WebLogic, J2EE, and Oracle middleware, the technologies behind core financial business, up to containerized production standards, with deployments already taking root at stock exchanges and joint-stock banks. Its work on service planning, service discovery, continuous integration, big-data containerization, and high-performance container environments offers reference implementations for many industries, building a truly dynamic and flexible financial IT. The following is a transcript of the keynote delivered by Wang Pu, founder and CEO of Shurenyun, at the 2016 @Container container technology conference in Shanghai:
Three problems troubling the financial industry
First, let us look at three problems. They trouble not only the financial industry; many traditional industries face the same challenges.
First, the expected time to bring a new application online has shrunk from months to days: how do you respond to user needs that quickly? The bar for launching new applications is high. In China the Internet industry is developing very rapidly and has had a huge impact on traditional industries, many of which now also need to get business online quickly, and this places heavy demands on their existing IT architecture. Second, how can new technologies be delivered in a standardized way? This, too, is a very typical problem that many traditional industries run into: how to choose a new technology, how to land it, how to deliver it. Third, flash sales, red packets, and other high-concurrency applications keep growing. Such bursty, high-concurrency business is characteristic of the Internet; how does a traditional business like a financial institution scale elastically to cope with it?
Behind these three problems is a common phenomenon: the business model of traditional industries has changed. As we all know, the financial industry serves a huge number of individual users, and the convergence of these 2C services with the Internet is an irreversible trend. It is precisely this move of 2C services online that has changed the financial industry's business. Today much of that business has the characteristics of Internet business and Internet scenarios, so the industry must solve new business problems in combination with those scenarios. At the same time, the need for secure, controllable information technology poses new challenges to financial institutions' IT architecture.
The financial industry's IT status quo
The first point is very different from other industries: compliance is a red line, and zero accidents are mandatory. The CBRC, the CIRC, and the CSRC impose many requirements on the financial industry, many of them red lines that must not be crossed, so the industry's demands on stability are extremely high.
Second, Internet-scenario business brings high traffic pressure, a challenge financial institutions never faced in their traditional business. Traditional business is characterized by stable peaks: traffic climbs to a certain peak during working hours, falls off in the evening, and the next day's peak is very close to the previous day's. Internet-scenario traffic, by contrast, is unpredictable.
Third, rapid application deployment is hard to achieve and the upgrade process is slow. This too follows from the industry's business characteristics: because stability overrides everything, every release requires comprehensive testing and end-to-end integration, which delays launch. The financial industry guarantees stability by slowing down business launches, exactly the opposite of the Internet companies' approach.
Fourth, multiple environments are isolated from each other, and standing up a test environment is extremely time-consuming. A bank, for example, has at least three environments, development, test, and production, and the three are basically physically isolated. That physical isolation makes test environments hard to build and production issues hard to reproduce. When I was at Google, Google had only one kind of environment: development, test, and production all ran together in the same large data centers. For Internet companies like Google, the three environments are not physically isolated but mixed together, so testing against and reproducing the production environment is very convenient. For the financial industry, compliance requirements make that impossible.
Fifth, major version upgrades cannot be rolled back. This is related to the isolation of the environments. Because the environments are very complex, rollback is difficult for financial institutions: every launch modifies the existing environment, and rolling back means revoking those changes, so rollback is also very hard to achieve in the financial industry.
Sixth, heterogeneous equipment of every kind, and very low hardware utilization. This last point is the financial industry's historical burden: financial institutions run a great variety of heterogeneous equipment. Ten years ago many of them bought large numbers of mainframes and minicomputers, and those machines are still in service. Moreover, the utilization of these devices is not high. Traditional business is not bursty; it is very regular, with a peak in the daytime and a trough at night, and the idle evening hours can still be used to run batch jobs. In addition, much of the industry's business is bundled with its hardware: many applications are statically deployed, each supported by specific machines. Google does not work this way. Google will not pin a particular application to a particular server; binding applications tightly to servers would make a data center of Google's scale too hard to maintain. Google has more than two million servers, and if applications had to be tightly bound to servers, operators would need to remember which applications live on every one of them, which is obviously impossible. Financial data centers are nowhere near Google's scale, so they can get away with binding applications to hardware. But tight binding means low utilization: no business is busy 24x7, and in idle periods the computing resources cannot be put to other use.
That is a brief, admittedly incomplete sketch of the financial industry's IT status quo, drawn from our exposure to financial customers, and especially from the places where they differ most from Internet companies.
New demands on financial industry IT
Three phrases – new capacity, new speed, new efficiency – sum up the new demands on financial industry IT.
First, new capacity, meaning business capacity. The scale of financial business has changed enormously: red packets and flash sales need instantaneous horizontal scaling, so the industry needs second-level horizontal scaling capacity to absorb such bursts of traffic. At the same time it needs to shield the heterogeneity of the underlying infrastructure and achieve seamless hybrid-cloud deployment.
Next, new speed. The Internet's fast business iteration has hit traditional industries hard, and they are steadily raising their own iteration speed. Shipping a new version every month or every week, as Internet companies do, while still guaranteeing stability is very difficult for the financial industry. It therefore needs continuous integration from code to the online environment with no manual steps, cutting launch time down to hours. Financial institutions also need to provision realistic test and development environments flexibly, and to reduce release risk through gray (canary) releases and A/B testing.
Finally, new efficiency. The financial industry needs to raise the utilization of traditional physical machines two to three times, tolerate small-scale failures in the underlying layer automatically, and manage multiple clusters on different infrastructure effectively, so that none of this is disrupted as the business expands.
Financial industry IT expectations
These three points are both challenges and demands for financial industry IT, and they are our short summary of what the industry expects. As mentioned earlier, the industry's business has changed greatly, and 2C business increasingly has Internet characteristics. The processes supporting these 2C Internet scenarios therefore need to be integrated as much as possible: from requirements through development, testing, and release, and on to operations and monitoring, everything should run through a single process as far as possible.
A unified process smooths the whole application life cycle, and this is exactly the convenience Docker brings. Docker shields the heterogeneity of environments, so a program written in development runs the same way in testing, and a program that runs in testing is equally valid in production. An integrated, smooth process thus runs through the entire application life cycle.
Here are one or two concrete needs: for example, using container technology to stand up all kinds of test environments quickly, and to spin up components rapidly during testing and reclaim them rapidly afterwards. These are still expectations, admittedly a big blueprint; at this stage the financial industry cannot yet achieve such a smooth process, but the whole industry is moving in this direction, and development, testing, and operations departments are all actively embracing Docker and container technology to upgrade their IT architecture.
Container technology builds a smooth, integrated IT system for the financial industry, and at the same time it brings many changes to the existing IT architecture. Let us see how container concepts map onto what financial institutions already have.
By way of analogy, the existing enterprise IT architecture of many financial customers is mostly Java-based; that is the architecture on the right of the slide. At the bottom is the resource layer: on the right, high-end hardware from IBM and HP, the mainframes and minicomputers; on the left, the cloud architecture, mostly x86 PC servers, either virtualized or consumed as private or public cloud services. The layer above is middleware. Traditional financial IT relies heavily on Java middleware such as WebLogic and WebSphere, which provides a standard Java runtime environment: JAR packages developed with J2EE are deployed onto the middleware. On the cloud side the counterpart is the container-based data center operating system, the PaaS platform of cloud computing. It is the cloud's middleware, so it must provide a standard application runtime environment, and those applications are now mostly containerized. In other words, where WebLogic and WebSphere provide a standard runtime for Java programs, the PaaS platform on the left must provide a standard runtime for container applications. One layer up is business packaging and application development. Traditional enterprise IT uses Java and J2EE; now we increasingly package with containers. The container is not a programming language but a packaging of the application, and it can hold applications of any kind: Java, C++, or PHP.
On packaging: J2EE packages applications as JAR files, while in the cloud era we package with Docker, turning the application into a container. One layer above packaging is the business architecture. Traditional enterprise IT mostly uses SOA; in the cloud-computing era, with container technology, we are transitioning to a microservices architecture. Microservices and SOA are in essence of the same lineage: SOA is service-oriented, and so are microservices, but microservices slice services into finer grains. Each microservice is developed, maintained, and launched independently. This differs from traditional SOA, where different pieces of business logic are abstracted into different services at development time and assigned to different teams, but everything goes online as a whole. With microservices even launches are decoupled, and each microservice is operated and maintained independently; that is the business-architecture level. At the top is the organizational level of development and operations. In traditional enterprises development and operations are separate; in the cloud they move to continuous integration and DevOps. At bottom, continuous integration, DevOps, and the ever-popular agile development all come down to merging development and operations, and that involves substantial adjustment of the organizational structure. Adjusting people and organization is different in kind from adjusting IT architecture; it is a very complex change.
Rebuilding a new generation of enterprise IT on cloud computing is therefore not only a change of technology but a change of organization, including how development and operations collaborate, how departments integrate, and how responsibilities are divided. At Google, development headcount is probably around twenty thousand, while operations number only one or two thousand, a very small group; yet that small group manages an enormous fleet, with millions of servers all controlled by operations. Google's operations department does different work from operations staff in the financial industry: it focuses on resource planning and on defining the development process, and it hands many tasks that traditional operations teams perform over to developers. Bringing a service online, for example, is something Google's operations staff leave entirely to development.
Monitoring, management, control
Agile development is definitely not a matter of form; it brings deep changes to organizational structure and responsibilities. This slide shows how to understand a cloud-based IT architecture from a traditional enterprise IT perspective. It has three parts, monitoring, management, and control, connected through a CMDB (configuration management database) in the middle. This picture is easy for people from traditional enterprise IT to understand.
Centralized system monitoring has many levels, including machine-room monitoring, topology monitoring, and so on. The automated operations platform covers task launches, permission management, and the like, with the machine room, the network, and the other systems operated beneath it. These two modules will look familiar to anyone working in a financial-industry data center; their daily work lives inside them. Monitoring plus automated operations together form the control part; above them sits the management part, which is more about process: how data center operations handle scheduling, changes, and releases, and how quotas are allocated and managed. The management layer is the process extension of the whole data center.
So how does the container cloud cooperate with existing data center operations? Shurenyun cuts in from the control side, through automation. At the management level financial firms must stay within the compliance red line, and management processes will not change in the short term. What Shurenyun cares about is landing: how to use the new technology of containers to help financial customers quickly. We therefore enter at the control layer, because at that layer we do not disturb the customer's existing management processes. Many operations become very convenient on a container cloud: rapid application deployment, fast launches, task management, and quota and resource management, all part of automated control. But fast launches and flexible deployment alone are not enough; production also needs extensive monitoring, so we integrate the container cloud with the customer's monitoring platform, letting monitoring, logs, and alerts flow through the customer's existing processes. By cutting in at this point, Shurenyun makes data center operations more automated and reduces their complexity.
The most important thing is to not destroy, to not change the upper-level management processes; that is the angle from which Shurenyun cuts in. But as mentioned above, if in the future an enterprise truly wants to go fully cloud-based, with agile development and DevOps, then adjustments to its organizational structure and management are certainly unavoidable. As a container-technology vendor we consider landing mainly from the technical point of view, so we cut in at the control layer through automation.
Next, a brief introduction to the scenarios where the container cloud has landed in the financial industry.
The first scenario is elastic scaling, for cases such as flash sales and red packets, which need the capacity to scale out on demand. Giving applications the ability to scale out and back in greatly improves data center resource utilization: when a business is compute-intensive, its applications scale out and take more computing resources, and when its load falls they shrink back, releasing computing resources to other applications. Elasticity in the application is the answer to changes in business capacity.
Elastic scaling of containers is very convenient with Docker. You monitor, say, network latency or other business-level indicators such as the responsiveness of a business interface. When a service's network latency rises, or its request count reaches a certain threshold, the autoscaling logic kicks in. For Docker, automatic scaling is very convenient because it simply means increasing the number of application instances: each web instance is encapsulated in a Docker container, and when expansion is needed the scheduling platform starts more container instances, scaling the application out quickly. At the resource level, if the enterprise runs a private IaaS layer underneath, the container cloud can call the IaaS API, scheduling OpenStack or VMware to create more virtual machines and request more computing resources, and then allocate and schedule containers onto the new capacity. Elastic scaling is easy to understand: it is scheduling more instances.
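The trigger logic described above can be sketched in a few lines. This is an illustrative model only; the metric names, thresholds, and doubling policy are assumptions for the sketch, not the actual Shurenyun scheduler API.

```python
# Threshold-based autoscaling decision (sketch; thresholds are hypothetical).
def desired_instances(current: int,
                      p99_latency_ms: float,
                      qps_per_instance: float,
                      latency_limit_ms: float = 200.0,
                      qps_limit: float = 500.0,
                      max_instances: int = 50) -> int:
    """Return how many container instances the web tier should run."""
    if p99_latency_ms > latency_limit_ms or qps_per_instance > qps_limit:
        return min(current * 2, max_instances)   # scale out under pressure
    if p99_latency_ms < latency_limit_ms / 4 and qps_per_instance < qps_limit / 4:
        return max(current // 2, 1)              # scale back in when idle
    return current                               # otherwise hold steady
```

A monitoring loop would evaluate this periodically and ask the scheduling platform to start or stop instances to match the returned count.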
The second scenario, which is more complex, corresponds to the new speed: taking business applications from code to production with continuous integration and continuous delivery. Where is the complexity? First, the different environments must be bridged with Docker, which is what Docker is very good at. Development and test environments are relatively easy to connect, since they are reachable over the network. Test and production are much harder to connect; the network between them is generally unreachable, so whatever is handed across must be highly standardized. From test to production, the best thing to hand over is therefore the Docker image of the application.
The development process itself is unchanged; Docker does not speed up coding. How you wrote code before, and how you did code review, has little to do with Docker. But once code enters the repository, Docker takes over: a new build can be produced automatically from the repository, for example building the JAR for a Java program and then building the image. Images are pushed automatically from the development environment to the image registry, and from the registry to the test environment, so those two environments connect easily. Some images in the registry will fail testing; those go back to the developers for another business iteration and a new Docker image, until the tests pass completely, at which point the image is saved to the registry and tagged as the latest fully tested build of the application. When operations staff deploy to production, many links are involved and the physical network in between may be unreachable, so they carry the tested Docker image across whatever channel has been opened between test and production.
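The promotion flow above, commit, build the artifact, bake the image, test, and only then mark it for production, can be sketched roughly as follows. The stage names and the `ci_pipeline` helper are hypothetical, not the API of any real CI system.

```python
# Sketch of the image-promotion flow: commit -> JAR -> image -> test -> tag.
def ci_pipeline(commit: str, run_tests) -> dict:
    jar = f"app-{commit}.jar"             # build the Java artifact from the commit
    image = f"registry/app:{commit}"      # bake the artifact into a Docker image
    # Only images that pass the full test suite are marked ready for
    # production; failures go back to development for another iteration.
    status = "tested" if run_tests(image) else "rejected"
    return {"artifact": jar, "image": image, "status": status}

ok = ci_pipeline("a1b2c3", lambda image: True)    # simulated passing test run
bad = ci_pipeline("d4e5f6", lambda image: False)  # simulated failing test run
```

In production only images with the "tested" mark would be eligible to cross the gap from the test registry to the production environment.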
One more point: a Docker image bundles the application together with the environment it depends on. Suppose the Docker application has WebLogic installed inside and runs a WAR package written in Java; then the container also needs WebLogic's base environment, say an Ubuntu Linux, plus various XML-based configuration files. There are different ways to handle the WAR package and the configuration files at delivery time. The way most convenient for development and test is to pack everything, program and configuration together, into the image; the image is then fully self-contained. But there is an annoying side effect: change one line of the program and you rebuild the WAR and repackage the whole Docker image; change one setting in a configuration file and the image must be repackaged too. Enterprise applications have many dependencies, so a full packaging run does not necessarily finish in seconds. The relatively stable parts here are Ubuntu and WebLogic, the dependency side, so they can go into the Docker container as a base image. The WAR package changes most often with each release, but the program can be separated from the base image: on every launch the base image stays the same, the new application reuses the existing base image, and only the WAR package is replaced. You still get Docker's isolation, resource limits, and other lightweight deployment benefits.
The other issue is configuration management, since so far the configuration still lives inside the Docker image. Configuration files are generally small, and they change less often than the program, but they do change. Must the whole Docker image be rebuilt every time a configuration file is modified? Not necessarily: configuration files can be managed separately. That is not easy in the financial industry, because the environments are isolated; a configuration server generates the configuration for each environment. When the program runs, there are two modes. One is pull: at startup the container fetches its current configuration from the configuration center, with no code changes required. The other is push: configuration updates are pushed to specific containers in real time, which requires using an SDK. In pull mode the configuration is loaded statically once per program start, so a configuration change does not affect a running program; pull mode is relatively easy to implement.
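Pull mode can be sketched as follows. The in-memory `CONFIG_CENTER` dict stands in for a real configuration service reached over HTTP, and the service and environment names are invented for the example.

```python
# Pull-mode configuration at container start (sketch; the config-center
# lookup is faked with a dict instead of a real configuration service).
CONFIG_CENTER = {
    ("payments", "test"):       {"db_url": "jdbc:oracle:thin:@test-db:1521/XE"},
    ("payments", "production"): {"db_url": "jdbc:oracle:thin:@prod-db:1521/XE"},
}

def pull_config(service: str, env: str) -> dict:
    """Fetch this environment's configuration once, at startup.
    In pull mode a later change takes effect only on restart."""
    return dict(CONFIG_CENTER[(service, env)])

cfg = pull_config("payments", "test")
```

The same image can thus run in every environment, with the environment-specific settings injected at startup instead of baked into the image.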
The last scenario comes from the new efficiency: raising the operational efficiency of the whole data center and driving down the complexity of operations. With a container cloud, 80% of repeatable operations work can be automated. Humans need not take part in deployment: operations staff only trigger it and set the launch time, and the concrete launch logic is rapid container-based deployment. Human intervention is basically needed only when new physical servers come online or virtual machines are added to the resource pool; the clusters under the container cloud are built automatically on container technology, CPU and memory are allocated and reclaimed automatically, and horizontal scaling and fault-tolerant recovery of applications happen automatically as well. In this way 80% of repetitive operations become automated, which is undoubtedly a big lift to data center operational efficiency.
Shurenyun case studies
In simple terms, a container-based data center operating system is a lightweight PaaS platform for private or hybrid clouds. The idea behind this PaaS platform is very simple: applications of every kind, Internet-style applications, applications on traditional architectures, distributed open-source components, message queues, are all uniformly abstracted as container applications. For applications at this granularity, the PaaS platform provides a standard container runtime environment, including application deployment, continuous integration, elasticity, service discovery, logging, and permissions, plus integration with persistence and networking. This is the standard PaaS platform. Thanks to container technology the application layer is standardized: everything on the platform is a containerized application, with no further distinction between business applications, component-level applications, and big-data workloads. They are all container applications, and the PaaS platform only has to manage what a container application needs in order to run.
Below the PaaS platform sit all kinds of computing resources, public cloud, private cloud, or physical machines, under unified management; Shurenyun focuses more on the private-cloud scenario. Through the lightweight PaaS platform, physical machines and virtual machines come under one management platform, and everything from rapid application release to overall resource utilization to large-scale deployment runs as one integrated process, all of which the Shurenyun PaaS platform supports.
Take a flash-sale example from one of our customers, whose campaigns run around ten o'clock at night, because they worried that too many people would join during the day. That really was the customer's dilemma: their IT architecture could not scale elastically, so they were forced to hold flash sales in the evening. We ran a pressure test at the scale of a million concurrent requests, one million requests arriving every second. As the pressure rose we began elastic scaling, which the Docker container makes very convenient, with automatic expansion triggered by monitoring.
The second example is cross-city disaster recovery. Out of compliance requirements the financial industry must implement "two sites, three centers", which is not so easy to achieve. A container cloud can realize the three centers: the container management nodes span networks, are highly available, and back each other up, and beneath them sit the various clusters, production clusters, backup clusters, development clusters, and so on. These clusters of machines spanning physical sites are managed through the Shurenyun management nodes. When a cluster goes down, the applications on it can be migrated over automatically and quickly; for example, when production goes down, the standby takes over immediately, which containers make easy to achieve.
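The failover behavior described here can be sketched as a simple re-placement function. The cluster names and the health map are illustrative assumptions, not the actual product interface.

```python
# Sketch of cross-cluster failover: move every application whose cluster
# is down onto that cluster's designated backup cluster.
def failover(apps: dict, healthy: dict, backup_of: dict) -> dict:
    """Return a new app -> cluster placement after applying failover."""
    placement = {}
    for app, cluster in apps.items():
        if healthy.get(cluster, False):
            placement[app] = cluster          # cluster is up: leave in place
        else:
            placement[app] = backup_of[cluster]  # cluster is down: fail over
    return placement

apps = {"core-banking": "prod-a", "channel": "prod-a", "report": "prod-b"}
healthy = {"prod-a": False, "prod-b": True, "backup-a": True}
new_placement = failover(apps, healthy,
                         {"prod-a": "backup-a", "prod-b": "backup-b"})
```

In the real system the management nodes would detect the outage through health checks and the scheduler would restart the affected containers on the backup cluster.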
One more simple example, mentioned just now: once a big-data system is containerized, there is no need to distinguish big-data applications from other business applications; they are all containerized applications. So everything the big-data system runs in containers is packaged and containerized, including Kafka, ZooKeeper, and Redis. After containerization the PaaS platform does not care what each application is: everything is container-based, and managing the containers is enough, allocating CPU to those that need CPU, memory to those that need memory, network to those that need network, and isolation where isolation is needed. The whole big-data platform becomes easy to maintain, and application systems and data systems alike can be operated through one PaaS platform.