1956-1974: The First Wave of AI
In 1956, John McCarthy (later of Stanford University), Marvin Minsky of the Massachusetts Institute of Technology, Herbert Simon and Allen Newell of Carnegie Mellon University (all four later Turing Award winners), Claude Shannon of Bell Labs (the founder of information theory), Nathaniel Rochester of IBM, and other scholars established the concept of "artificial intelligence" for the first time at Dartmouth College in the United States: allowing machines to understand, think, and learn like humans, that is, using computers to simulate human intelligence.
The Dartmouth Conference of 1956 determined the name and mission of AI; at the same time, the first achievements and the first generation of researchers appeared. This event is therefore widely recognized as the birth of AI.
The Dartmouth Conference triggered the first wave of AI worldwide. During this period, symbolism, machine theorem proving, AI logic languages, the Bellman equation, and the perceptron model developed rapidly, and a large number of successful AI programs and new research directions emerged. Researchers believed that machines with complete intelligence would appear within twenty years. The U.S. government invested heavily in the emerging field, granting millions of dollars each year to four research institutions, the Massachusetts Institute of Technology, Carnegie Mellon University, the University of Edinburgh, and Stanford University, and allowed researchers to pursue any direction they were interested in.
The pioneers of artificial neural network research, McCulloch and Pitts, proposed in 1943 the idea of a "mind-like machine" that could be built by interconnecting models based on the characteristics of biological neurons. This was the origin of the concept of neural networks.
Major achievements at the time:
In 1949, the neuropsychologist Donald Hebb published "The Organization of Behavior". In the book, Hebb proposed what is now called the Hebbian learning rule: when two neurons repeatedly fire together, the connection between them is strengthened.
In 1957, Frank Rosenblatt, an experimental psychologist at Cornell University, implemented the "perceptron" neural network model in simulation. The model could complete some simple visual processing tasks.
In the 1950s, Richard Bellman proposed the Bellman equation, the core of dynamic programming, which is the prototype of reinforcement learning.
Arthur Samuel proposed a theory of machine learning and, based on it, wrote a checkers program that could play against humans; it defeated an American checkers master in 1962.
The Stanford Research Institute (SRI) developed the Shakey intelligent robot from 1966 to 1972. Shakey could plan its own actions through logical reasoning and can be regarded as the world's first intelligent robot.
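Rosenblatt's perceptron, mentioned above, is simple enough to sketch in a few lines. The following is a minimal illustrative implementation (not the original 1957 model, which was partly realized in hardware), learning the linearly separable AND function:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single-layer perceptron on (features, label) pairs.

    Labels are 0/1; weights follow the classic error-driven update rule."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # +1, 0, or -1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: a linearly separable task the perceptron can learn
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in data]
```

As Minsky and Papert later showed, a single perceptron cannot learn functions that are not linearly separable, such as XOR, a limitation that contributed to the disillusionment described in the next section.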
1974-1980: The First Winter of AI
In the 1970s, AI began to face criticism and financial difficulties. Because AI researchers had underestimated the difficulty of their projects, the cooperation plan with the US Defense Advanced Research Projects Agency failed, casting a shadow over the prospects of AI. At the same time, public opinion gradually turned against AI, and much research funding was transferred to other projects. In 1973, the United Kingdom published the Lighthill report by James Lighthill. The report evaluated basic AI research on automata, robotics, and the central nervous system, and concluded: "The research on automata and the central nervous system is valuable, but the progress is disappointing. The research on robots is not valuable, and the progress is very disappointing. It is recommended to cancel the research on robots." AI then entered its first winter.
The technical bottleneck faced at that time:
First, the limited memory and processing speed of computers at the time were insufficient to solve any practical AI problem. Second, many problems are inherently complex: their computational cost grows exponentially with problem size. Third, data was severely lacking; no database was large enough to support programs in learning deep knowledge, so machines could not read enough data to become intelligent.
1980-1987: The Second Wave of AI
In 1980, Carnegie Mellon University designed an "expert system" called XCON for Digital Equipment Corporation (DEC). An expert system is an AI program that can be simply understood as the combination of a knowledge base and an inference engine. XCON was a computerized intelligence system with complete professional knowledge and experience, and it saved the company more than forty million dollars a year up to 1986. Around this business model, hardware companies such as Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion emerged. Japan's Ministry of International Trade and Industry allocated 850 million US dollars to support the fifth-generation computer project. AI thus entered its second wave of development, during which mathematical models such as multilayer neural networks and the backpropagation algorithm were proposed.
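The "knowledge base plus inference engine" architecture can be illustrated with a toy forward-chaining engine. This is only a sketch of the idea; the rule and fact names below are hypothetical, and a real system like XCON held thousands of rules:

```python
def forward_chain(facts, rules):
    """Toy inference engine: repeatedly fire any rule whose conditions
    are all known facts, until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # derive a new fact and keep going
                changed = True
    return facts

# Hypothetical configuration rules in the spirit of XCON (the knowledge base)
rules = [
    ({"cpu is vax-11/780"}, "needs unibus adapter"),
    ({"needs unibus adapter", "order includes disk drive"}, "add disk controller"),
]
derived = forward_chain({"cpu is vax-11/780", "order includes disk drive"}, rules)
```

The engine is generic; all domain expertise lives in the rules, which is exactly why expert systems were powerful in narrow domains yet costly to maintain as the knowledge base grew.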
In 1982, Hopfield, a professor of biophysics at the California Institute of Technology, proposed a new neural network that can solve a large class of pattern recognition problems and can also give an approximate solution to a class of combinatorial optimization problems. This neural network model came to be known as the Hopfield network.
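The Hopfield network can be sketched as an associative memory: patterns of ±1 values are stored in Hebbian weights, and repeated updates drive a noisy probe toward the nearest stored pattern. This toy version uses synchronous updates for brevity, whereas Hopfield's original analysis assumed asynchronous updates:

```python
def hopfield_recall(patterns, probe, steps=5):
    """Store ±1 patterns in Hebbian weights, then iterate the network
    from `probe` so the state settles into the nearest stored attractor."""
    n = len(probe)
    # Hebbian outer-product learning, with no self-connections
    w = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
          for j in range(n)] for i in range(n)]
    state = list(probe)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [[1, 1, -1, -1, 1, -1]]
noisy = [1, -1, -1, -1, 1, -1]   # the stored pattern with one bit flipped
recalled = hopfield_recall(stored, noisy)
```

Recovering a stored pattern from a corrupted probe is exactly the "pattern recognition" capability mentioned above, and the same energy-minimizing dynamics give approximate solutions to some combinatorial optimization problems.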
1987-1993: The Second Winter of AI
Scientists later discovered that although expert systems were very useful, their application areas were too narrow and their maintenance costs were high. By 1987, the performance of desktop computers produced by Apple and IBM had surpassed that of the specialized Lisp machines produced by manufacturers such as Symbolics, at a much lower cost than the expert-system workstations. Expert systems then lost their luster, and AI fell into a trough again.
1993-Present: The Third Wave of AI
AI technologies such as machine learning and image recognition have been applied in everyday life, and mathematical models including graphical models, graph optimization, and deep neural networks have been re-proposed and studied. Meanwhile, with more powerful computing capabilities applied to AI research, research efficiency has improved significantly. Owing to this series of breakthroughs, AI entered its third research boom.
Major achievements at the time:
On May 11, 1997, IBM's computer system "Deep Blue" defeated the world chess champion Garry Kasparov, once again triggering widespread public discussion of AI. This was an important milestone in the development of AI.
In 1996, Vapnik proposed the support vector machine (SVM) based on statistical learning theory. Over the following 15 years, SVM attracted much attention and achieved a great deal in face detection, verification and recognition, speaker and speech recognition, text and handwriting recognition, image processing, and so on.
In 2006, Hinton made a breakthrough in deep learning for neural networks, and people once again saw the hope of machines catching up with humans; it was a landmark technological advance.
In 2009, several Stanford scholars showed the world that using GPUs can train deep neural networks in a reasonable time. This directly triggered a wave of GPU general-purpose computing.
In 2016, Google's AlphaGo defeated the South Korean Go player Lee Sedol, which once again sparked an AI boom.
In 2017, Apple released iPhone X, which supports Face ID authentication.
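Of the results above, Vapnik's support vector machine is the easiest to sketch in its linear form. The following toy implementation trains a maximum-margin-style linear classifier by sub-gradient descent on the regularized hinge loss; it illustrates the idea only, not Vapnik's original formulation (which solves a quadratic program and supports kernels):

```python
def train_linear_svm(samples, epochs=200, lr=0.01, lam=0.01):
    """Linear SVM via sub-gradient descent on the objective
    lam*||w||^2 + mean(max(0, 1 - y*(w.x + b))), labels y in {-1, +1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:   # point violates the margin: hinge term is active
                w = [wi - lr * (2 * lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:            # safely classified: only the regularizer acts
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

# Two linearly separable point clouds labeled -1 and +1
data = [([-2.0, -2.0], -1), ([-1.0, -3.0], -1), ([2.0, 2.0], 1), ([1.0, 3.0], 1)]
w, b = train_linear_svm(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
         for x, _ in data]
```

The regularization weight `lam` trades margin width against training errors, which is the core idea that made SVMs robust on the recognition tasks listed above.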
After more than 60 years of development, AI has achieved successful applications and breakthroughs in specific fields, that is, narrow AI, but the research and application of general AI still has a long way to go.
The AI 2.0 Era
With the popularization of the Internet, the penetration of sensor networks, the emergence of big data, the rise of information communities, as well as the cross-fusion and interaction of data and information in human society, physical space and information space, the information environment and data foundation for the development of AI have undergone profound changes. The goals and concepts of AI are facing important adjustments. The scientific foundation and realization carrier of AI are also facing new breakthroughs. AI is entering a new stage, the era of AI 2.0.
AI 2.0 is a new generation of AI based on a profoundly changed information environment and new development goals. The new information environment refers to the popularization of the Internet and mobile terminals, the penetration of sensor networks, the emergence of big data, and the rise of online communities. The new goals refer to the new needs, from the macro to the micro scale, of smart cities, the smart economy, smart manufacturing, smart medical care, smart homes, smart driving, and so on.
Big Data Intelligence
Big data intelligence is best exemplified by DeepMind's AlphaGo. Unlike traditional hand-crafted game knowledge, AlphaGo uses deep reinforcement learning to acquire "intuitive perception" (where to play next), "game reasoning" (the overall chance of winning), and "novel moves" (playing what humans would not think of), combining a memory of human games with positions accumulated through self-play. In addition, DeepMind's software controls some 120 variables in Google's data centers, such as the cooling system, fans, and windows, improving power efficiency by 15% and saving hundreds of millions of dollars in electricity bills over a few years. Deep learning is central to this field, but its shortcomings are that it is neither explainable nor general enough; solving such problems will drive the development of big data intelligence.
Internet Swarm Intelligence
In 2016, the article "The Power of Crowds" in Science divided crowd intelligence computing into three types by difficulty: crowdsourcing, which implements simple task allocation; complex workflows, which support more elaborate workflow modes; and problem-solving ecosystems, the most complex model of collaborative problem solving. The Connectome project at Princeton University developed the EyeWire game, in which players color individual cells and their neural connections in microscopic images according to their functions. More than 165,000 participants from 145 countries took part, producing the first detailed description of the structure-function relationship by which the neural tissue of the mammalian retina detects motion. Swarm intelligence computing can greatly improve the intelligence level of human society and has wide and important uses, but its theory and technology are still at an early stage.
Cross-media Intelligence
The world-famous "Pokemon GO" game uses cross-media augmented reality technology to organically combine 3D graphics with real-time mobile phone video. In recent years, with the continuous development of computer networks, multimedia, and mobile terminals, multimedia data worldwide has grown explosively. Cross-media intelligence is the basic capability that enables machines to recognize the external environment, and the semantic links among language, vision, graphics, and hearing are the key to realizing intelligent behaviors such as association, design, generalization, and creation.
Human-machine Hybrid Enhanced Intelligence
At present, various wearable devices, intelligent driving systems, exoskeleton devices, and human-machine collaborative surgery have appeared one after another, indicating that human-machine collaborative enhanced intelligent systems have broad development prospects.
Autonomous Intelligent System
Since the birth of AI, robots have been included in its target field, and bionics has naturally become an important development direction. At this stage, the development of unmanned aircraft and unmanned vehicles has been far more rapid than that of robots.
The Historic Events of AI
In 1956, the Dartmouth Conference marked the birth of AI
In 1957, the neural network Perceptron was invented by Rosenblatt
In 1970, limited by computing power, the first AI winter came
In 1980, the XCON expert system appeared, saving $40 million per year
From 1990 to 1991, DARPA's AI computing program failed, which led to reduced government investment; the second AI winter came
In 1997, IBM's Deep Blue defeated the world chess champion
In 2006, Hinton proposed a "deep learning" neural network
In 2011, Apple's voice assistant Siri was released and has been continually improved since
In 2012, Google's self-driving car hit the road (announced in 2009)
In 2013, deep learning algorithms made major breakthroughs in speech and visual recognition, with recognition rates exceeding 99% and 95% respectively
In 2016, Google released the machine learning chip TPU; the DeepMind team's AlphaGo defeated the world Go (Weiqi) champion
In 2017, the graph convolutional neural network was proposed; Apple released the iPhone X with support for Face ID authentication
In 2018, Microsoft released a machine translation system, comparable to human translation
After 2018, the new era of AI 2.0 began