A Look Back at Deep Learning in 2016

In 2016, artificial intelligence developed at a blistering pace, and deep learning, as one of its most important branches, drew more and more attention. 2016 was a year of rapid growth for deep learning; industry, academia, and the general public were all swept up in it. In industry, Google, Facebook, Baidu, Alibaba, and a series of other large companies at home and abroad publicly announced that artificial intelligence would be their next strategic focus. On the talent front, after deep learning pioneer Andrew Ng joined Baidu and Yann LeCun joined Facebook, the major IT companies began competing to recruit leading academics: Fei-Fei Li, a professor at Stanford University, joined Google in November of this year, and Alex Smola, a professor at Carnegie Mellon University, joined Amazon in June. On the tooling side, Google, Facebook, Baidu, Microsoft, Amazon, and other companies have open-sourced their own deep learning frameworks; the race to lead the trend in artificial intelligence will be the IT companies' next battlefield.

In academia, deep learning continued to push forward the state of the art in image recognition, video analysis, speech recognition, speech synthesis, machine translation, natural language processing, human-machine game playing, and other fields. In 2016, the concept of deep learning was no longer confined to university laboratories or top IT companies: AlphaGo beat the world champion Li Shishi, more self-driving cars took to the road, Prisma launched an image style transfer application based on deep learning, automatic writing robots appeared, and the public could increasingly feel the changes brought by artificial intelligence. In the sections that follow, the author will take you through the noteworthy deep learning events of 2016.

March: AlphaGo defeats Li Shishi

On the afternoon of March 15, 2016, AlphaGo, the Go-playing deep learning system developed by Google, defeated the South Korean player Li Shishi with a total score of 4:1, becoming the first intelligent system to beat a human world champion on the full 19×19 Go board. AlphaGo's victory over Li Shishi brought the concept of deep learning from the academic community to the general public and ignited broad enthusiasm for artificial intelligence. AlphaGo was not the first system to beat a human world champion at a board game, but its victory is undoubtedly a milestone in the history of artificial intelligence.

Unlike IBM's Deep Blue, which beat the chess world champion Kasparov in 1997, it is almost impossible to defeat a human on the 19×19 Go board by relying on raw computing speed alone. AlphaGo had to beat the human world champion on a full Go board in a more intelligent way, and deep learning made this possible. AlphaGo's core components, the value network and the policy network, both use deep learning; they are the real brains behind AlphaGo.
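To make the division of labor between these two components concrete, here is a minimal NumPy sketch: the policy head produces a probability distribution over the 361 board points (which move to consider next), and the value head produces a single score estimating who is likely to win from the current position. This is not AlphaGo's actual architecture, which uses deep convolutional networks trained on expert games and self-play; the layer sizes and the flat board encoding below are invented purely for illustration.

```python
import numpy as np

BOARD_POINTS = 19 * 19  # 361 possible moves on a full Go board

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class TwoHeadedNet:
    """Toy network: a shared hidden layer feeding a policy head and a value head."""

    def __init__(self, input_dim=BOARD_POINTS, hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.01, size=(input_dim, hidden))
        self.Wp = rng.normal(scale=0.01, size=(hidden, BOARD_POINTS))  # policy head
        self.Wv = rng.normal(scale=0.01, size=(hidden, 1))             # value head

    def forward(self, board_features):
        h = np.tanh(board_features @ self.W1)   # shared representation of the position
        policy = softmax(h @ self.Wp)           # probability distribution over moves
        value = np.tanh(h @ self.Wv)[0]         # scalar in [-1, 1]: expected game outcome
        return policy, value

net = TwoHeadedNet()
features = np.zeros(BOARD_POINTS)   # an empty board, crudely encoded as a flat vector
policy, value = net.forward(features)
print(policy.argmax(), value)
```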

Although AlphaGo's defeat of Li Shishi lifted artificial intelligence to a new height, its abilities should not be exaggerated, and we should not conclude that the era of artificial intelligence surpassing humanity is imminent. AlphaGo can only solve problems defined within a particular, closed environment; applying artificial intelligence systems to open environments still requires much more effort from researchers, and this will be a major direction for future AI development.

April: TensorFlow releases a distributed version

Although TensorFlow was officially open-sourced as early as November of the previous year, it drew plenty of criticism in its early days. The biggest problem with single-machine TensorFlow was that it could not effectively exploit massive amounts of data, which is one of deep learning's biggest advantages. Take Google's Inception-v3 model as an example: the model can reach about 95% accuracy on the ImageNet dataset, but training it on a single machine to 78% accuracy takes close to six months, and training it all the way to 95% would take years. This is simply unacceptable in a production environment.

[Figure: speedup of distributed TensorFlow training as the number of GPUs increases]

To address this problem, TensorFlow released version 0.8.0 in April of this year, which introduced support for distributed model training. Distributed TensorFlow can greatly accelerate the training of neural networks; the figure above shows the speedup it achieves. With 100 GPUs running in parallel, training of the Inception-v3 model can be accelerated by a factor of 65, so a training run that originally took six months can produce results in under three days. This marked TensorFlow's transformation from a toy into a real tool. Shortly after TensorFlow 0.8.0 was released, DeepMind announced that all of its systems would be developed on top of TensorFlow.
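As a rough illustration of what distributed training looked like in that era, the sketch below uses the classic TensorFlow 1.x parameter-server pattern (tf.train.ClusterSpec, tf.train.Server, and tf.train.replica_device_setter): variables live on a parameter server while each worker runs its own copy of the graph. The host names, ports, and the trivial model are placeholders of my own, not a production configuration or Google's setup.

```python
import tensorflow as tf  # the pre-2.0, graph-based API is assumed here

# Describe the cluster: one parameter server and two workers (placeholder addresses).
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process starts a server for its own role; this process plays worker 0.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Variables are placed on the parameter server, computation on this worker.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(x, w) + b
    loss = tf.reduce_mean(tf.square(logits - y))  # toy loss for illustration only
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Every worker runs its own training loop against the shared parameters.
with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(train_op, feed_dict={x: ..., y: ...})  # feed real batches here
```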

Although TensorFlow now supported distributed training, if we draw an analogy between TensorFlow and the Hadoop ecosystem, TensorFlow at this point covered only the part corresponding to the MapReduce computing framework. There was still a real barrier to applying TensorFlow in production environments. To address this, Caicloud combined Kubernetes with TensorFlow, using Kubernetes to monitor, schedule, and manage TensorFlow jobs and thereby lowering the barrier to using TensorFlow.

June: Prisma launches its image style transfer app

Prisma is a mobile app that restyles images using deep learning. After it went live, it was downloaded more than seven million times in just one week and gained more than one million active users. Its launch showed that deep learning is not only a science; it can also be applied to art. The figure below shows an image after Prisma processing. Prisma brought deep learning from advanced academic research into the public's daily life and made the technology far more widely known. After it, more software for image and video style transfer (such as Facebook's Caffe2Go), automatic music composition, and similar applications was launched.

[Figure: an example image after Prisma's style transfer]
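The technique behind apps like Prisma descends from neural style transfer in the spirit of Gatys et al.: a content loss keeps the output close to the photograph's features, while a style loss matches Gram matrices of features extracted from the artwork. The NumPy sketch below computes those two losses on made-up feature maps; it only shows the shape of the computation, not Prisma's actual (and much faster) implementation, which in practice relies on features from a pretrained convolutional network.

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a feature map of shape (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def content_loss(output_feat, content_feat):
    # Keep the generated image's features close to the photo's features.
    return np.mean((output_feat - content_feat) ** 2)

def style_loss(output_feat, style_feat):
    # Match second-order feature statistics (Gram matrices) of the artwork.
    return np.mean((gram_matrix(output_feat) - gram_matrix(style_feat)) ** 2)

# Random stand-ins for CNN feature maps of the photo, the painting, and the output image.
rng = np.random.default_rng(0)
photo, painting, output = (rng.normal(size=(64, 32, 32)) for _ in range(3))

total = content_loss(output, photo) + 1e-2 * style_loss(output, painting)
print("total loss:", total)
```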

July: Google Smart Data Center

Following AlphaGo, Google's DeepMind team applied deep learning to intelligent data center management. Using reinforcement learning, the new system can better coordinate the fans and air conditioning in the data center, ensuring that all machines stay cool while energy consumption is kept to a minimum. By controlling more than 120 different devices in the data center, the intelligent system can save about 15 percent of the data center's energy costs each year, saving Google millions of dollars. And this is only the beginning of deep learning's application to intelligent data centers; the DeepMind team is still working on installing more sensors and controllers so that energy efficiency can be improved further.

August: SyntaxNet releases parsing models for 40 languages

In May of this year, Google released SyntaxNet, a deep learning-based natural language understanding (NLU) toolkit, together with a pre-trained English parser, Parsey McParseface. On a randomly selected news portion of the Penn Treebank, the parser achieves more than 94% accuracy. This exceeds all previous algorithms and is already close to the roughly 96%-97% rate at which different human linguists agree with one another. Different linguists may analyze the same sentence differently, and their mutual agreement rate is generally taken as the theoretical ceiling a computer can reach. This result, however, is on a news dataset with very standard grammar; on the Web Treebank dataset drawn from web pages, Parsey McParseface reaches only about 90% accuracy.

[Figure: dependency parse of a Chinese sentence produced by SyntaxNet's Chinese model]

Following Parsey McParseface, in August Google open-sourced parsing models for 40 other languages, with support for text segmentation and morphological analysis. The open-sourced SyntaxNet models can now analyze the native languages of more than half of the world's population, and for most of these languages their parsing accuracy is currently the best in the world. The figure above shows the dependency parse of a Chinese sentence produced by SyntaxNet's Chinese model. Deep learning has pushed syntactic parsing, one of the most fundamental problems in natural language processing, a big step forward, and the open-sourcing of these models will greatly accelerate research progress in the field.
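For readers unfamiliar with what a dependency parser actually produces, the sketch below shows one common way to represent its output: each token is assigned a head (the index of the word it depends on, with 0 for the root) and a relation label. The sentence and labels are a hand-made illustration in the spirit of such parsers' output, not output generated by SyntaxNet itself.

```python
# A hand-written dependency parse for the sentence "Google released SyntaxNet in May",
# stored as (index, token, head_index, relation) tuples; head 0 denotes the root.
parse = [
    (1, "Google",    2, "nsubj"),  # subject of "released"
    (2, "released",  0, "root"),   # main verb
    (3, "SyntaxNet", 2, "dobj"),   # direct object
    (4, "in",        2, "prep"),   # preposition attached to the verb
    (5, "May",       4, "pobj"),   # object of the preposition
]

for idx, token, head, rel in parse:
    governor = "ROOT" if head == 0 else parse[head - 1][1]
    print(f"{token:>10} --{rel}--> {governor}")
```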

September: Google launches deep learning-based machine translation

In September of this year, Google officially released its neural network-based machine translation system (Google Neural Machine Translation, GNMT). Built on deep learning, the system greatly improves translation accuracy. Compared with traditional phrase-based machine translation, a deep learning-based algorithm translates whole sentences directly, which greatly simplifies the design of the translation system and makes more efficient use of massive training data. According to Google's experiments, for major language pairs the deep learning-based algorithm improves translation quality by 55% to 85%. The table below compares how different algorithms translate the same sentence; from this single sentence one can see directly the improvement in quality that deep learning brings.

A comparison of how different translation algorithms render the same sentence:

[Figure: sample translations of the same sentence by different algorithms]

Starting in September of this year, all translation requests from Chinese to English have been handled by Google's neural translation system. Google's deep learning-based translation system is implemented entirely on top of its open-source framework TensorFlow and currently handles nearly 20 million translation requests per day. Chinese to English is only one of the language pairs supported by Google Translate; Google will next apply the deep learning-based translation algorithm to more language pairs.
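To give a sense of what "translating the whole sentence directly" means, the toy sketch below encodes a source sentence into a single vector and then greedily decodes target words from it. Everything here (the two-word vocabularies, the random weights, the absence of attention and recurrence) is invented for illustration; GNMT itself is a deep recurrent encoder-decoder with attention, trained on enormous parallel corpora.

```python
import numpy as np

src_vocab = {"你好": 0, "世界": 1}            # toy source vocabulary
tgt_vocab = ["<eos>", "hello", "world"]       # toy target vocabulary

rng = np.random.default_rng(0)
DIM = 16
src_embed = rng.normal(size=(len(src_vocab), DIM))   # source word embeddings
out_proj  = rng.normal(size=(DIM, len(tgt_vocab)))   # projects state to target-word scores
step_proj = rng.normal(size=(len(tgt_vocab), DIM))   # feeds the chosen word back into the state

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def translate(src_tokens, max_len=5):
    # "Encode" the whole sentence at once (here: just average its embeddings).
    state = np.mean([src_embed[src_vocab[t]] for t in src_tokens], axis=0)
    output = []
    for _ in range(max_len):
        probs = softmax(state @ out_proj)    # distribution over target words
        word_id = int(probs.argmax())        # greedy decoding
        if tgt_vocab[word_id] == "<eos>":
            break
        output.append(tgt_vocab[word_id])
        state = state + step_proj[word_id]   # crude feedback of the chosen word
    return output

print(translate(["你好", "世界"]))
```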

November: DeepMind and Blizzard began collaborating on StarCraft 2

When the DeepMind team's AlphaGo beat the human Go world champion in March of this year, that was not the end of man-machine competition; on the contrary, it was only the beginning. In November, DeepMind officially announced a collaboration with the game company Blizzard, setting its next goal on the real-time strategy game StarCraft 2. Compared with Go, StarCraft 2 is a far more open environment, which raises the difficulty of designing a deep learning system by orders of magnitude. First, although a 19×19 Go board admits an astronomical number of states, the number of possible states in StarCraft 2 is effectively unlimited, and the game's real-time requirements place even higher demands on the learning system. Second, StarCraft 2 is a game of asymmetric information: each player can see only part of the map, so the deep learning system must judge the overall situation under uncertainty.

At BlizzCon 2016, Blizzard announced that it would develop an API friendlier to deep learning systems, formally opening the partnership with the DeepMind team. The right side of the figure shows the normal view of StarCraft 2, while the left shows the view provided to the learning system, which makes it easier for the system to extract information. I believe that in the near future deep learning will be applied more and more in open environments; while such systems beat humans in more competitive settings, they will also free people from repetitive work in more areas.

December: DeepMind Lab is open-sourced

To allow deep learning systems to learn to solve complex problems, DeepMind open-sourced DeepMind Lab in December of this year, following OpenAI's open-sourcing of its Universe project. DeepMind Lab is a first-person 3D game platform designed specifically for artificial intelligence research. On this platform, an agent must complete tasks such as collecting fruit, navigating mazes, crossing precarious passages along cliffs, and using launch pads to move through space. DeepMind Lab has already become a major research platform inside DeepMind.

Looking ahead to 2017

In 2017, I believe deep learning will achieve qualitative breakthroughs in the following areas:

Deep learning will move from university laboratories and top IT companies toward the general public, and more companies will use deep learning to solve practical problems. As deep learning tools and techniques mature, more and more individuals and businesses will enjoy the benefits that deep learning brings.

Deep learning will cover more fields. Since 2012, when it broke through the bottleneck of traditional image recognition techniques and won the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) competition, deep learning has been applied to more and more areas. In 2017, I believe deep learning will continue to break through the bottlenecks of traditional techniques and will be applied to genomics, personalized medicine, self-publishing media, public safety, art, finance, and other fields.

With AlphaGo's defeat of Li Shishi, deep learning systems have made breakthrough progress in closed environments. In 2017, I believe deep learning systems will be applied more often in open environments. Self-driving cars, intelligent StarCraft 2 players, and DeepMind Lab are all attempts to bring deep learning into the open environment.

About the author: Zheng Zeyu is a co-founder and chief data scientist of Caicloud.io. His team developed the world's first mature TensorFlow deep learning platform (TensorFlow as a Service), addressing the difficulties of deploying, managing, monitoring, and serving distributed TensorFlow. On top of this platform, the Caicloud data team provides targeted artificial intelligence solutions for the security, e-commerce, finance, logistics, and other industries. Before returning to China to start the company, Zheng Zeyu was a senior engineer at Google in the United States. After joining Google in 2013, he participated in and led several big data projects as a key technical contributor. The product clustering project he proposed and led links Google Shopping and Knowledge Graph data so that knowledge cards could replace traditional product list ads, opening a new era for Google Shopping ads on search pages. He received a master's degree in Computer Science from Carnegie Mellon University (CMU) in May 2013, has published several papers at top international academic conferences, and is a recipient of a Siebel Scholarship.
