My broken thoughts: Docker Getting Started Guide

【Editor's Note】 We have translated many Docker introductory articles before. The reason for translating this one is that Anders has a very distinctive point of view and well-organized thinking. You can also watch the author's talk "Docker, the Future of DevOps". This article introduces some of Docker's basic concepts, its attractive features, how Docker works, basic day-to-day management operations, and solutions to some common Docker problems.
docker.png

What is Docker, and what should you know about it?

Compared with the many other explanations out there, I believe "Docker is a lightweight virtual machine" is the easiest to understand. Another explanation is: Docker is chroot for the operating system. If you do not know what chroot is, that second explanation probably will not help you understand what Docker is.

Chroot is an operation that changes the apparent root directory for the current running process and its children. A program run in such a modified environment cannot access files and commands outside that environment's directory tree; this environment is called the "chroot jail".

– From the Arch Linux wiki entry on chroot
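
As a minimal illustration (the rootfs path is a placeholder), this is what the chroot operation looks like from a shell:

  # Enter a chroot jail; requires root and a prepared root file system
$ sudo chroot /path/to/rootfs /bin/sh
# Inside the jail, / now refers to /path/to/rootfs,
# and files outside that directory tree are unreachable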

Virtual Machine vs. Docker

The following figure depicts the difference between virtual machines and Docker. With VMs, a hypervisor runs on the host OS and the guest operating systems run on top of it; Docker instead uses the Docker engine and containers. So how do the Docker engine and the hypervisor actually differ? You can list the processes running on the host OS to see the difference.
vm-vs-docker.png
The simple process trees below show the difference. Although a virtual machine runs many processes internally, it shows up as just one process on the host.

  # Processes running on the host for a VM
$ pstree vm

-+= /VirtualBox.app
 |--= coreos-vagrant

With the Docker engine running on the host, all the processes are visible. The container processes run directly on the host OS! They can be inspected and managed with ordinary commands such as ps and kill.

  # Processes on the host for Docker
$ pstree docker

-+= /docker
 |--= /bin/sh
 |--= node server.js
 |--= go run app
 |--= ruby server.rb
 ...
 |--= /bin/bash

Everything is transparent. What does that mean? It means that Docker containers are smaller, faster, and easier to integrate with other things than virtual machines, as the table below shows.
vm-vs-docker-table.png
A small virtual machine with CoreOS installed is 1.2GB, while a small container with busybox is only 2.5MB. The fastest virtual machines take minutes to boot, while containers usually start in under a second. Running several virtual machines on the same host requires setting up the network correctly, while running several Docker containers is trivially simple.

So containers are lightweight, fast, and easy to integrate, but that is not all!

Docker is a contract

Docker is also a "contract" between developers and operations. Developers and operations usually have very different attitudes when choosing tools and environments. Developers want to use shiny new things such as Node.js, Rust, Go, microservices, Cassandra, Hadoop, and so on, while operations tend to stick with the tools they have used before, because those old tools have proven to work.

But this is precisely where Docker shines. Operations people like it because Docker lets them care about only one thing: deploying containers. Developers are happy too: they just write the code, fling it into a container, and leave the rest to operations.
devs-loves-ops.png
And that is not the end of it. Operations can also help developers by building optimized containers for local development.

Better resource utilization

Many years ago, before virtualization, when we needed to stand up a new service we had to requisition actual physical hardware. That could take months, depending on the company's processes. Once the server was in place we created the service, and often it did not turn out as we had hoped, because the server's CPU usage sat at only 5%. Very wasteful.

Then virtualization came along. A machine could be brought up in minutes, and multiple virtual machines could run on the same hardware, so resource utilization was no longer stuck at 5%. However, we still had to assign one virtual machine per service, so we still could not use the machine fully.

Containerization is the next step in this evolution. Containers can be created in seconds and can be deployed at a much finer granularity than virtual machines.

Dependencies

matrix-from-hell.jpg
Docker sounds really cool so far. But why don't we simply deploy all our services on the same machine? The reason is simple: dependency problems. Installing multiple independent services on one machine, whether physical or virtual, is a recipe for disaster. Docker's name for this is: the dependency matrix from hell.

Docker solves the matrix-from-hell problem by packaging each service's dependencies inside its container.

Speed

roadrunner.gif
Fast is good, but 100 times faster is incredible. That kind of speed makes many things possible and opens up new possibilities. Need to quickly switch from a Clojure development environment to Go? Start a container! Need a production-like database for integration and performance testing? Start a container! Need to switch the entire production environment from Apache to Nginx? Start a container!

How does Docker work?

Docker has a client-server architecture. The Docker daemon runs on the host, and clients connect to it over a socket; the client and the daemon can run on the same host, but that is not required. The Docker command-line client works the same way, though it usually connects over a Unix domain socket rather than a TCP socket.

The daemon accepts commands from the client and manages the containers that run on the host.
client-server.png
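
To make the architecture visible (a minimal sketch; the output shown and the remote address are illustrative), you can ask both halves for their version, or point the client at a daemon on another machine:

  # The client and the daemon report their versions separately
$ docker version
Client version: 1.5.0
...
Server version: 1.5.0
...

# Point the client at a remote daemon over TCP (address is illustrative)
$ DOCKER_HOST=tcp://192.168.59.103:2375 docker ps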

Docker concepts and interactions

  • Host – the machine that runs the containers.
  • Image – a file hierarchy, plus metadata describing how to run a container.
  • Container – a process started from an image, together with the program running inside it.
  • Registry – a repository of images.
  • Volume – storage outside the container.
  • Dockerfile – a script for creating an image.
docker-interactions.png
We can build an image from a Dockerfile, or commit a running container to create an image. An image can be tagged, and it can be pushed to a registry or pulled down from one. A container is started from an image with create or run; it can be stopped with stop and removed with rm.
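
Put together, one pass through these interactions looks roughly like this (a sketch; the image and container names are hypothetical):

  # Dockerfile -> image -> container -> new image
$ docker build -t myimage .           # build an image from a Dockerfile
$ docker run -d --name myapp myimage  # start a container from the image
$ docker stop myapp                   # stop the container
$ docker commit myapp myimage:v2      # commit the container as a new image
$ docker rm myapp                     # remove the container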

Images

An image is a file structure with metadata describing how to run a container. Each command in a Dockerfile creates a new layer in the file system, and the image is built up from these layers by a union file system.
docker-image.png
When a container is started, all the image layers are merged into the single file system the process sees. When files in the union file system are deleted, they are merely marked as deleted; they still exist in the layers below.

  # Commands for interacting with images
$ docker images   # List all images
$ docker import   # Create an image from a tarball
$ docker build    # Create an image from a Dockerfile
$ docker commit   # Create an image from a container
$ docker rmi      # Remove an image
$ docker history  # List the change history of an image

Image sizes

Here is some data about frequently used images:

  • scratch – the base image: 0 files, size 0
  • busybox – a minimal Unix system: 2.5MB, 10000 files
  • debian:jessie – the latest Debian release: 122MB, 18000 files
  • ubuntu:14.04 – 188MB, 23000 files

Creating images

You can create an image with docker commit container-id, docker import url-to-tar, or docker build -f Dockerfile .
Let's look at the commit approach first:

  # Create an image with commit
$ docker run -it debian:jessie bash
root@e6c7d21960:/# apt-get update
root@e6c7d21960:/# apt-get install postgresql
root@e6c7d21960:/# apt-get install node
root@e6c7d21960:/# node --version
root@e6c7d21960:/# curl https://iojs.org/dist/v1.2.0/iojs-v1.2.0-linux-x64.tar.gz -o iojs.tgz
root@e6c7d21960:/# tar xzf iojs.tgz
root@e6c7d21960:/# ls
root@e6c7d21960:/# cd iojs-v1.2.0-linux-x64/
root@e6c7d21960:/# ls
root@e6c7d21960:/# cp -r * /usr/local/
root@e6c7d21960:/# iojs --version
1.2.0
root@e6c7d21960:/# exit
$ docker ps -l -q
e6c7d21960
$ docker commit e6c7d21960 postgres-iojs
daeb0b76283eac2e0c7f7504bdde2d49c721a1b03a50f750ea9982464cfccb1e

As shown above, we can create an image with docker commit, but that approach is messy and hard to reproduce. The better way is to build the image from a Dockerfile, because it is explicit and repeatable:

  FROM debian:jessie
# Dockerfile for postgres-iojs

RUN apt-get update
RUN apt-get install postgresql
RUN curl https://iojs.org/dist/iojs-v1.2.0.tgz -o iojs.tgz
RUN tar xzf iojs.tgz
RUN cp -r iojs-v1.2.0-linux-x64/* /usr/local

Then build it with the following command:

  $ docker build --tag postgres-iojs .

Since each command in the Dockerfile creates a new layer, it is common to group related commands together by combining them with && and line-continuation backslashes:

  FROM debian:jessie
# Dockerfile for postgres-iojs

RUN apt-get update && \
    apt-get install postgresql && \
    curl https://iojs.org/dist/iojs-v1.2.0.tgz -o iojs.tgz && \
    tar xzf iojs.tgz && \
    cp -r iojs-v1.2.0-linux-x64/* /usr/local

The order of the lines matters, because Docker caches layers to speed up image builds. When organizing a Dockerfile, put the lines that change frequently at the bottom of the file. When a line (or a file it depends on) changes, the cache for that line and for every line below it is invalidated, and those commands are re-run even if the lines themselves did not change.
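
Here is a sketch of cache-friendly ordering for a hypothetical Node.js image: the dependency list changes rarely, so it is installed first; the application source changes on every commit, so it is copied last:

  # Hypothetical Dockerfile ordered for good layer caching
FROM dockerfile/nodejs:latest

# Changes rarely: these layers stay cached until package.json changes
COPY package.json /srv/app/package.json
WORKDIR /srv/app
RUN npm install

# Changes often: only the layers from here down are rebuilt
COPY . /srv/app
CMD node app.js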

Dockerfile commands

The Dockerfile supports 13 commands. Some of them are used when building the image, and some when running a container from the image. Here is a table of when each command is used:
dockerfile-commands.png

BUILD commands:

  • FROM – which image the new image is based on
  • MAINTAINER – the name and email address of the image maintainer
  • COPY – copies files and directories into the image
  • ADD – same as COPY, but also handles URLs and unpacks tarball archives
  • RUN – runs a command inside the container, for example: apt-get install
  • ONBUILD – runs the command later, when another Dockerfile inherits from this image
  • .dockerignore – not a command, but it controls which files are added to the build context; the image should not contain .git and other unneeded files.

RUN commands:

  • CMD – the default command when the container runs; it can be overridden by command-line arguments
  • ENV – sets environment variables inside the container
  • EXPOSE – exposes ports from the container; they must still be explicitly published with -p or -P in the docker run command on the host
  • VOLUME – designates a directory as storage outside the union file system. If it is not mapped via docker run -v, it is created under /var/lib/docker/volumes
  • ENTRYPOINT – specifies a command that is not overridden by the arguments of docker run image cmd. It is commonly used to provide a default executable and treat the arguments as its parameters (a sketch follows the lists below).

Commands used for both BUILD and RUN:

  • USER – sets the user for the RUN, CMD, and ENTRYPOINT commands
  • WORKDIR – sets the working directory for the RUN, CMD, ENTRYPOINT, ADD, and COPY commands
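To make the ENTRYPOINT/CMD distinction concrete, here is a minimal sketch (a toy image, not from the original article): ENTRYPOINT fixes the executable, and CMD supplies its default arguments:

  # ENTRYPOINT is always run; CMD is just the default arguments
FROM debian:jessie
ENTRYPOINT ["ls"]
CMD ["-l", "/"]

Running this image with no arguments executes ls -l /, while docker run image -a /tmp replaces only the CMD part and executes ls -a /tmp.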

Running containers

When a container is started, the process gets a new writable layer on top of the union file system, and that is where it runs.

Starting with version 1.5, the top layer can also be made read-only, forcing us to use volumes for everything that must be written (such as logs and temporary files).
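
For example (a sketch, assuming Docker 1.5 or later):

  # Run with a read-only root file system; writable paths such as the
# log directory must then be provided as volumes
$ docker run --read-only -v /var/log/nginx nginx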

  # Commands for interacting with containers
$ docker create   # Create a container, but do not start it
$ docker run      # Create and start a container
$ docker stop     # Stop the container
$ docker start    # Start the container
$ docker restart  # Restart the container
$ docker rm       # Remove the container
$ docker kill     # Send a kill signal to the container
$ docker attach   # Attach to a running container
$ docker wait     # Block until the container stops
$ docker exec     # Execute a command in a running container

Docker run

As mentioned above, docker run is the command used to start new containers. Here are some of the most common ways to run a container:
container.png

  # Run a container interactively
$ docker run -it --rm ubuntu

This is the way to run a container as if it were an ordinary terminal program. If you want to pipe data into the container, you should not use the -t option (see the sketch after the option list below).

  • --interactive (-i) – sends standard input on to the process
  • --tty (-t) – tells the process that a terminal is attached. This affects how programs format their output and how they handle signals such as Ctrl-C.
  • --rm – removes the container when it exits.
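
A minimal piping example (illustrative):

  # Pipe into a container: use -i but not -t, since a TTY would mangle the stream
$ echo hello | docker run -i --rm ubuntu wc -c
6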
  # Run a container in the background
$ docker run -d hadoop

Docker run --env

  # Run a named container and pass it some environment variables
$ docker run \
  --name mydb \
  --env MYSQL_USER=db-user \
  -e MYSQL_PASSWORD=secret \
  --env-file ./mysql.env \
  mysql
  • --name – names the container; otherwise it gets a random name
  • --env (-e) – sets an environment variable in the container
  • --env-file – reads in all the environment variables from env-file (like source env-file on Linux)
  • mysql – specifies the image, short for mysql:latest

Docker run --publish

  # Publish container port 80 on a random port on the host
$ docker run -p 80 nginx

# Publish container port 80 on port 8080 on the host
$ docker run -p 8080:80 nginx

# Publish container port 80 on port 8080 of 127.0.0.1 on the host
$ docker run -p 127.0.0.1:8080:80 nginx

# Publish all ports exposed by the container on random ports on the host
$ docker run -P nginx

The nginx image, for example, exposes ports 80 and 443:

  FROM debian:wheezy
MAINTAINER NGINX "docker-maint@nginx.com"

EXPOSE 80 443

Docker run --link

  # Start a postgres container named mydb
$ docker run --name mydb postgres

# Link mydb into myapp under the name db
$ docker run --link mydb:db myapp

Linking containers sets up networking from the linked container into the linking one. Two things happen (both illustrated below):

  • /etc/hosts is updated with the link name of the container, db in the example above, so the linked container can conveniently be reached by the name db.
  • Environment variables are set for the exposed ports. This is less useful in practice, since you can just reach the ports in the form hostname:port.
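
A quick way to see what the link created, using the container names from the example above (the addresses and ports shown are illustrative):

  # /etc/hosts gains an entry for the link name
$ docker exec myapp cat /etc/hosts
172.17.0.9  db
...

# Environment variables describe the exposed ports
$ docker exec myapp env
DB_PORT=tcp://172.17.0.9:5432
...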

Docker run limits

You can also limit how much of the host's resources a container may use:

  # Limit the amount of memory
$ docker run -m 256m yourapp

# Limit the number of CPU shares the process may use (total CPU shares are 1024)
$ docker run --cpu-shares 512 myapp

# Change the user of the process to www instead of root (good for security)
$ docker run -u=www nginx

Getting 512 of 1024 shares does not mean the container may use half of the CPU. It means that, relative to a container with no limit, it gets half as many shares. For example, if we have two containers with 1024 shares and one with 512 (1024:1024:512), the 512-share container gets only 1/5 of the total CPU under contention.
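
A small sketch of that arithmetic (the image name cpu-hog is hypothetical):

  # Three CPU-bound containers competing on the same host
$ docker run -d --cpu-shares 1024 cpu-hog
$ docker run -d --cpu-shares 1024 cpu-hog
$ docker run -d --cpu-shares 512 cpu-hog
# Under full contention the third one gets 512 / (1024 + 1024 + 512) = 1/5 of the CPU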

Docker exec

docker exec lets us run commands inside a running container, which is very useful when debugging.

  # Run a shell inside the container with id 6f2c42c0
$ docker exec -it 6f2c42c0 sh

Volumes

volumes.png
A volume provides persistent storage outside the container's union file system. This means the volume's data is not included if you commit the container to a new image.

  # Start a new nginx container with /var/log as a volume
$ docker run -v /var/log nginx

If the directory is not mapped to a host directory, it is automatically created under a path like: /var/lib/docker/volumes/ec3c543bc..535

The actual directory name can be found with the command docker inspect container-id.

  # Start a new nginx container with the host's /tmp mapped to /var/log as a volume
$ docker run -v /tmp:/var/log nginx

You can also mount the volumes of another container with the --volumes-from option.

  # Start the db container
$ docker run -v /var/lib/postgresql/data --name mydb postgres

# Start a backup container that mounts the volumes from the mydb container
$ docker run --volumes-from mydb backup
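
A common use of --volumes-from is a throwaway backup container; here is a sketch reusing mydb from above (the archive path is illustrative):

  # Archive mydb's data volume to the current host directory
$ docker run --rm --volumes-from mydb -v $(pwd):/backup debian:jessie \
    tar czf /backup/pgdata.tgz /var/lib/postgresql/data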

Docker Registry

Docker Hub is Docker's official image registry. It supports both private and public repositories. A repository can be marked as official, meaning it is curated by the maintainers of the project (or someone connected to it).

Docker Hub also supports automated builds of projects hosted on GitHub and Bitbucket. If automated builds are enabled, the image is rebuilt every time you push code to the repository.

Even if you do not use automated builds, you can still docker push images directly to Docker Hub and docker pull them down. docker run with an image that does not exist locally triggers a docker pull automatically.
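
The round trip looks like this (the repository name anders/postgres-iojs is hypothetical):

  # Tag a local image under a Docker Hub namespace, then push it
$ docker tag postgres-iojs anders/postgres-iojs
$ docker push anders/postgres-iojs

# On another machine: pull it, or just run it and let the pull happen
$ docker pull anders/postgres-iojs
$ docker run anders/postgres-iojs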

You can also host images anywhere else. Docker's official open source registry project is Registry, but it still has many bugs.

In addition, Quay, Tutum, and Google also offer private image hosting services.

Inspecting containers

There are many commands for inspecting containers:

  $ docker ps       # Show running containers
$ docker inspect  # Show container details (including the IP address)
$ docker logs     # Get the logs from the container
$ docker events   # Get container events
$ docker port     # Show the container's public ports
$ docker top      # Show the processes running in the container
$ docker diff     # Show the changed files in the container's file system
$ docker stats    # Show metrics such as memory, CPU, and file system usage

Below is more detail on docker ps and docker inspect, the two most commonly used.

  # List all containers, including stopped ones.
$ docker ps --all
CONTAINER ID  IMAGE           COMMAND  NAMES
9923ad197b65  busybox:latest  "sh"     romantic_fermat
fe7f682cf546  debian:jessie   "bash"   silly_bartik
09c707e2ec07  scratch:latest  "ls"     suspicious_perlman
fbe1f24d7df8  busybox:latest  "true"   db_data


# Inspect the container named silly_bartik
# Output is shortened for brevity.
$ docker inspect silly_bartik
[{
    "Args": [
        "-c",
        "/usr/local/bin/confd-watch.sh"
    ],
    "Config": {
        "Hostname": "3c012df7bab9",
        "Image": "andersjanmyr/nginx-confd:development"
    },
    "Id": "3c012df7bab977a194199f1",
    "Image": "d3bd1f07cae1bd624e2e",
    "NetworkSettings": {
        "IPAddress": "",
        "Ports": null
    },
    "Volumes": {},
}]

Tips and tricks

Getting a container's id is useful when writing scripts.

  # Get the id (-q) of the last (-l) run container
$ docker ps -l -q
c8044ab1a3d0

docker inspect can take a format string, a Go template, as a parameter, describing exactly the data you want. Also useful when writing scripts.

  $ docker inspect -f '{{ .NetworkSettings.IPAddress }}' 6f2c42c05500
172.17.0.11

Use docker exec to interact with a running container.

  # Get the environment variables of the container
$ docker exec -it 6f2c42c05500 env

PATH=/usr/local/sbin:/usr...
HOSTNAME=6f2c42c05500
REDIS_1_PORT=tcp://172.17.0.9:6379
REDIS_1_PORT_6379_TCP=tcp://172.17.0.9:6379
...

Use volumes to avoid rebuilding the image on every run. The following Dockerfile copies the current directory into the container on every build:

  FROM dockerfile/nodejs:latest

MAINTAINER Anders Janmyr "anders@janmyr.com"
RUN apt-get update && \
    apt-get install zlib1g-dev && \
    npm install -g pm2 && \
    mkdir -p /srv/app

WORKDIR /srv/app
COPY . /srv/app

CMD pm2 start app.js -x -i 1 && pm2 logs

Build and run the image:

  $ docker build -t myapp .
$ docker run -it --rm myapp

To avoid rebuilding, build the image once and mount your local working directory over it at run time instead.
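
A sketch of that workflow, assuming the Dockerfile above (with /srv/app as the working directory):

  # Build once, then mount the current directory over /srv/app on each run
$ docker build -t myapp .
$ docker run -it --rm -v $(pwd):/srv/app myapp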

Security

security.jpg
You may have heard that Docker is not very secure. That is not untrue, but it is not necessarily a problem either.

Currently Docker has the following security issues:

  • Image signatures are not verified correctly.
  • If you have root privileges in the container, you effectively have root privileges on the host.

Security mitigations:

  • Use trusted images from your own private registry
  • Avoid running processes in the container as root
  • Treat root inside the container as if it were root outside the container

If all the containers on a server are your own, you do not have to worry about dangerous interactions between them.

"Select" container

I put "choosing" in quotation marks because there is not really much choice yet, but many contenders want in on the container game, such as Ubuntu's LXD, Microsoft's Drawbridge, and Rocket.

Rocket is developed by CoreOS, a large container platform. The reasons they give for developing Rocket are that Docker has become bloated and that Docker's business now conflicts with CoreOS's.

With this new container runtime they try to shed the flaws Docker carries for historical reasons, providing simple, composable containers via socket activation, with security built in from the start.
container-options.png

Orchestration

When we split an application across several different containers, some new problems appear. How do the different parts communicate? How do we handle the containers on a single host? And across multiple hosts?

On a single host, Docker solves the communication problem with links, as described above.

To simplify linking containers, Docker provides a tool called docker-compose. (It was formerly called fig and was developed by another company, which Docker recently acquired.)

Docker-compose

fig.png
docker-compose declares the information about multiple containers in a single docker-compose.yml file. Here is an example configuration managing two containers, web and redis:

  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    links:
      - redis
  redis:
    image: redis

To start the containers above, use the docker-compose up command:

  $ docker-compose up
Pulling image orchardup/redis...
Building web...
Starting figtest_redis_1...
Starting figtest_web_1...
redis_1 | [8] 02 Jan 18:43:35.576 # Server
          started, Redis version 2.8.3
web_1   | * Running on http://0.0.0.0:5000/

You can also start in detached mode with docker-compose up -d, and then use docker-compose ps to see which containers are running:

  $ docker-compose up -d
Starting figtest_redis_1...
Starting figtest_web_1...
$ docker-compose ps
Name             Command                   State  Ports
--------------------------------------------------------------
figtest_redis_1  /usr/local/bin/run        Up
figtest_web_1    /bin/sh -c python app.py  Up     5000->5000

You can also run a command against one container, or against several containers at the same time.

  # Get the environment variables from the web container
$ docker-compose run web env

# Scale to multiple containers
$ docker-compose scale web=3 redis=2

# Get the log output from all the containers
$ docker-compose logs

As the commands above show, scaling is easy, but the application must be written to take advantage of multiple containers. Load balancing outside the containers is not supported out of the box.

Docker hosting

Many companies want in on the business of hosting Docker in the cloud, as shown below.
docker-hosting-providers.png
These providers try to solve different problems, ranging from simple hosting to full-blown "cloud operating systems". Two of them are promising:

CoreOS

CoreOS is a collection of services for hosting containers in a CoreOS cluster, as shown below:
core-os.png

  • The CoreOS Linux distribution is a stripped-down Linux that uses 114MB of RAM on startup and has no package manager; every program runs via Docker or its own Rocket.
  • CoreOS uses Docker (or Rocket) to install applications on a host.
  • It uses systemd as its init system; systemd performs very well, handles startup dependencies nicely, has a strong logging system, and supports socket activation.
  • etcd is a distributed, consistent key-value store used for configuration sharing and service discovery (see the sketch after this list).
  • fleet is a cluster manager, an extension of systemd that works across multiple machines; it uses etcd to manage configuration and runs on every CoreOS server.
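
As a small illustration of the etcd part (the key and value are hypothetical), configuration shared through etcd can be read and written from any node in the cluster:

  # Write a shared config value on one node, read it from any other
$ etcdctl set /services/web/ip 172.17.0.11
172.17.0.11
$ etcdctl get /services/web/ip
172.17.0.11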

AWS

There are two ways to host Docker containers on Amazon:

  • Elastic Beanstalk can deploy Docker containers. It works well, but it is too slow: a new deployment takes several minutes, which feels at odds with the seconds-fast startup of ordinary containers.
  • ECS, the EC2 Container Service, is Amazon's upcoming container cluster solution. It is still in preview, but it looks promising, and as with other Amazon services you interact with it through simple web service calls.

Summary

  • Docker is here to stay
  • It solves the dependency problems
  • Everything about containers is fast
  • Cluster solutions exist, but they are not yet seamless

Original link: A Not Very Short Introduction to Docker (Translator: He Lin Chong; Proofreader: Song Yu)

=============================

Translator introduction: He Lin Chong currently works at Tencent Computer Systems Limited, responsible for the architecture design and development of a game automation operations system. He loves open source technology and hopes to contribute to the community by translating technical articles.
