Efficient development workflow using Git submodules and Docker Compose

This article describes how to use Git submodules and Docker Compose to build an efficient development workflow, so that programmers can set up a development environment easily and put their energy into the application they actually need to develop.

Problem

Ever since we hired our first remote developer at Continuous Software, we have understood the importance of streamlining the development workflow. When newly recruited programmers take over a complex project made up of many applications, we want to spare them the following problems:

  • Missing stack components: Node.js, PHP, PostgreSQL, and so on
  • No clear overall picture of the project's components/applications
  • Local configuration conflicts: listening ports, database configuration, and so on

Besides, from my own experience, we programmers get lost far too easily. I once spent my entire first day at a new company setting up the development environment, trying to understand how everything fit together, without ever getting a direct picture of how the company's applications actually worked.

Solution

Before explaining in detail how we solve the problems above, let me introduce the development workflow of one of our projects.

Each of our projects has its own Team on Bitbucket (the equivalent of an Organization on GitHub). Each application gets its own repository under the Team (for example, api, dashboard, cpanel). On top of these submodules we create a repository called development. At the same level as the submodules sit two files, README.md and docker-compose.yml.

kytwb@continuous:~/path/to/<project>/$ ls -la
total 40
drwxrwxr-x 11 kytwb amine 4096 Mar 14 16:30 .
drwxr-xr-x  4 kytwb amine 4096 Nov  1 20:17 ..
drwxr-xr-x 20 kytwb amine 4096 Mar 11 14:24 api
drwxr-xr-x 11 kytwb amine 4096 Mar  3 13:21 cpanel
drwxr-xr-x 10 kytwb amine 4096 Mar 12 11:37 dashboard
-rw-rw-r--  1 kytwb amine 2302 Mar  2 15:28 docker-compose.yml
drwxrwxr-x  9 kytwb amine 4096 Mar 14 16:30 .git
-rw-rw-r--  1 kytwb amine  648 Dec 22 17:20 .gitmodules
-rw-rw-r--  1 kytwb amine 1706 Dec 17 16:41 README.md

When a new programmer joins a project, they simply browse the development repository on Bitbucket and set up the environment by following the steps in README.md:

$ git --version
$ docker -v
$ docker-compose -v
$ git clone git@bitbucket.org:<project>/development.git <project> && cd <project>
$ git submodule init && git submodule update
$ git submodule foreach npm install
$ docker-compose up -d

At this point, everything is built and running on the local machine.

Implementation

This section describes how we implemented the workflow above.

Prerequisites

$ git --version
$ docker -v
$ docker-compose -v

Since our development stack is based entirely on Docker, programmers first need to install Docker. At this stage they do not need to be particularly familiar with it; they only need to use Docker during development. This indirectly introduces them to the container world, and later serves as a bridge for explaining how Docker enables continuous integration, continuous delivery, and so on. The README.md does not elaborate on how to install Docker, because the installation is very simple.
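
For reference, on a typical Linux machine the installation can be as simple as running Docker's convenience script (a sketch only; the exact steps for your platform are in the official Docker documentation):

# Download and run Docker's official convenience install script
$ curl -fsSL https://get.docker.com | sh
# Optionally allow the current user to run docker without sudo
$ sudo usermod -aG docker $USER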

Back when docker-compose was still called Fig, we were already using it to orchestrate the containers in our development stack. Docker later acquired Fig and renamed it Docker Compose. Merging Docker Compose into the Docker codebase has been suggested, but for various reasons that has not happened, so Docker Compose still needs to be installed separately.

As before, this article does not detail the installation of Docker Compose, because it is very simple.
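
Again for reference only, one common installation method is pip (a sketch; other supported methods are listed in the Compose documentation):

# Install Docker Compose via pip, then verify the installation
$ sudo pip install docker-compose
$ docker-compose --version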

Repository

As mentioned earlier, you need to create a development repository plus one repository per application; here we created api, dashboard, and cpanel. Once these repositories exist, start with the development repository:

$ git clone git@bitbucket.org:<project>/development.git <project> && cd <project>

Now add each application's repository as a submodule of the development repository:

$ git submodule add git@bitbucket.org:<project>/api.git
$ git submodule add git@bitbucket.org:<project>/dashboard.git
$ git submodule add git@bitbucket.org:<project>/cpanel.git

This creates a .gitmodules file in the root directory of development. After cloning the development repository, a programmer can then fetch all the applications in one go:

$ git submodule init && git submodule update
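
For reference, the generated .gitmodules file should look roughly like this (a sketch derived from the submodule commands above):

[submodule "api"]
    path = api
    url = git@bitbucket.org:<project>/api.git
[submodule "dashboard"]
    path = dashboard
    url = git@bitbucket.org:<project>/dashboard.git
[submodule "cpanel"]
    path = cpanel
    url = git@bitbucket.org:<project>/cpanel.git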

For more information on submodules, see Git's official documentation.
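
One day-to-day detail worth knowing: submodules are pinned to specific commits. To pull in the latest changes of every application and record the new versions, something like the following works (a sketch; adapt the branch name to your setup):

# Fetch and check out the latest commit of each submodule's master branch
$ git submodule foreach git pull origin master
# Record the updated submodule pointers in the development repository
$ git add api dashboard cpanel
$ git commit -m "Bump submodules"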

Dockerize everything

At this point the development repository is in place, and you can cd into each of the applications. Next we use the orchestrator mentioned earlier, Docker Compose, to containerize all the applications and their configuration.

Start with the api application. Open docker-compose.yml, declare a container for the API, and pick a base image for it. The application in this example is based on Node.js, so we choose a Node.js image:

api:
  image: dockerfile/nodejs

At this point, running docker-compose up -d creates a container named <project>_api_1 that does nothing (it exits immediately after starting). Running docker-compose ps shows information about all the containers defined in docker-compose.yml.
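
If you want to check this for yourself, the round trip looks like this (a sketch; as described above, the api container will show up as exited for now):

# Start everything declared in docker-compose.yml in the background
$ docker-compose up -d
# List the containers and their state
$ docker-compose ps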

Next, configure the api container so it actually does something. To achieve this, we need to:

  • Mount the source code into the container
  • Declare the command that runs the application
  • Expose the appropriate port so the application can be reached

The resulting configuration looks like this:

api:
  image: dockerfile/nodejs
  volumes:
    - ./api/:/app/
  working_dir: /app/
  command: npm start
  ports:
    - "8000:8000"

Now run docker-compose up -d again to start the api application, which should be reachable at http://localhost:8000. The application may well crash; you can inspect the container logs with docker-compose logs api.

Here, I suspect the api crashes because it cannot reach a database. So let's add a database container and link the api container to it.

api:
  image: dockerfile/nodejs
  volumes:
    - ./api/:/app/
  working_dir: /app/
  command: npm start
  ports:
    - "8000:8000"
  links:
    - database
database:
  image: postgres
  ports:
    - "5432:5432"

By creating the database container and linking it to the api container, the database becomes discoverable from inside the api container. If you print the API's environment (for example, with console.log(process.env)), you will see variables such as DATABASE_1_PORT_5432_TCP_ADDR and DATABASE_1_PORT_5432_TCP_PORT. These are the variables we use in the API's database configuration instead of hard-coded values.
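
To see these variables for yourself, you can run a one-off command in the api container (a sketch, assuming the service names used above):

# Print the environment variables injected by the database link
$ docker-compose run --rm api env | grep DATABASE_1_PORT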

With the links directive, the database container is treated as a dependency of the api container, which means Docker Compose starts the database container before the api container.

Now we describe the other applications in the same way. The dashboard and cpanel applications can reach the api through the environment variables API_1_PORT_8000_TCP_ADDR and API_1_PORT_8000_TCP_PORT.

api:
  image: dockerfile/nodejs
  volumes:
    - ./api/:/app/
  working_dir: /app/
  command: npm start
  ports:
    - "8000:8000"
  links:
    - database
database:
  image: postgres
  ports:
    - "5432:5432"
dashboard:
  image: dockerfile/nodejs
  volumes:
    - ./dashboard/:/app/
  working_dir: /app/
  command: npm start
  ports:
    - "8001:8001"
  links:
    - api
cpanel:
  image: dockerfile/nodejs
  volumes:
    - ./cpanel/:/app/
  working_dir: /app/
  command: npm start
  ports:
    - "8002:8002"
  links:
    - api

Just as you modified the API's configuration file for the database, use similar environment variables in the dashboard and cpanel applications to avoid hard-coding the API's address.
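
For example, a quick way to verify that the dashboard container can reach the api through those variables is a one-off shell (a sketch; it assumes curl is available in the image, which may not be true for every base image):

# From inside the dashboard container, call the api using the link variables
$ docker-compose run --rm dashboard sh -c \
    'curl -s "http://$API_1_PORT_8000_TCP_ADDR:$API_1_PORT_8000_TCP_PORT/"'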

Now run docker-compose up -d again, followed by docker-compose ps:

kytwb@continuous:~/path/to/<project>$ docker-compose up -d
Recreating <project>_database_1...
Recreating <project>_api_1...
Creating <project>_dashboard_1...
Creating <project>_cpanel_1...
kytwb@continuous:~/path/to/<project>$ docker-compose ps
        Name                 Command        State          Ports
----------------------------------------------------------------------------
<project>_api_1         npm start           Up     0.0.0.0:8000->8000/tcp
<project>_dashboard_1   npm start           Up     0.0.0.0:8001->8001/tcp
<project>_cpanel_1      npm start           Up     0.0.0.0:8002->8002/tcp
<project>_database_1    /usr/local/bin/run  Up     0.0.0.0:5432->5432/tcp

All the applications should now be up and running:

  • api at http://localhost:8000
  • dashboard at http://localhost:8001
  • cpanel at http://localhost:8002
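
A quick smoke test from the host might look like this (a sketch; the endpoints and status codes depend on the applications themselves):

# Check that each application answers on its published port
$ curl -I http://localhost:8000   # api
$ curl -I http://localhost:8001   # dashboard
$ curl -I http://localhost:8002   # cpanel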

Going further

Local routing

After starting all the containers with docker-compose up -d, you can reach each application at http://localhost:<application_port>. Building on the current configuration, we can easily add local routing with jwilder/nginx-proxy, so that local applications are reachable through the same URLs as in production. For example, the local version of http://api.domain.com becomes http://api.domain.local.

The jwilder/nginx-proxy image makes this simple. Just add a new container called nginx to docker-compose.yml and configure it according to the jwilder/nginx-proxy README (mount the Docker daemon socket and expose port 80). Then add the environment variables VIRTUAL_HOST and VIRTUAL_PORT to the existing containers, as follows:

api:
  image: dockerfile/nodejs
  volumes:
    - ./api/:/app/
  working_dir: /app/
  command: npm start
  environment:
    - VIRTUAL_HOST=api.domain.local
    - VIRTUAL_PORT=8000
  ports:
    - "8000:8000"
  links:
    - database
database:
  image: postgres
  ports:
    - "5432:5432"
dashboard:
  image: dockerfile/nodejs
  volumes:
    - ./dashboard/:/app/
  working_dir: /app/
  command: npm start
  environment:
    - VIRTUAL_HOST=dashboard.domain.local
    - VIRTUAL_PORT=8001
  ports:
    - "8001:8001"
  links:
    - api
cpanel:
  image: dockerfile/nodejs
  volumes:
    - ./cpanel/:/app/
  working_dir: /app/
  command: npm start
  environment:
    - VIRTUAL_HOST=cpanel.domain.local
    - VIRTUAL_PORT=8002
  ports:
    - "8002:8002"
  links:
    - api
nginx:
  image: jwilder/nginx-proxy
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  ports:
    - "80:80"

The nginx container watches all containers running on the Docker daemon (through the mounted docker.sock) and generates an appropriate nginx configuration for every container that sets the VIRTUAL_HOST environment variable.
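
If you want to see what the proxy generated, you can inspect the configuration inside the nginx container (a sketch; the container name follows the <project>_<service>_1 convention used above):

# Dump the nginx configuration generated by jwilder/nginx-proxy
$ docker exec <project>_nginx_1 cat /etc/nginx/conf.d/default.conf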

To complete the local routing setup, all the VIRTUAL_HOST entries also have to be added to /etc/hosts. I currently use the Node.js package hostile for this, but I suspect it could be automated, much like jwilder/nginx-proxy dynamically rewrites the nginx configuration as containers come and go. This still needs some investigation.
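
For reference, hostile ships with a small CLI, so the manual version of this step looks roughly like the following (a sketch, assuming a global npm install):

# Install the hostile CLI for editing /etc/hosts
$ npm install -g hostile
# Point the production-style local hostnames at the nginx proxy
$ sudo hostile set 127.0.0.1 api.domain.local
$ sudo hostile set 127.0.0.1 dashboard.domain.local
$ sudo hostile set 127.0.0.1 cpanel.domain.local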

Now you can run docker-compose up -d again and access the applications through the same URLs as in production, just with the .local TLD instead of the .com TLD.

Suggestions

This article was originally published on AirPair. If you have suggestions for extending it, feel free to fork and modify it. If you find any errors, please help correct them.

Translated by Cui Jingwen. The translator works at VMware as a senior software engineer, responsible for quality assurance of desktop virtualization products. She previously worked for years on IBM WebSphere Business Process Management software, and has a strong interest in virtualization and middleware technologies.
