Progress and hype in AI research
It is quite possible that people in the future will wonder why so many back in 2019 thought that playing Go and other fixed games in simulated environments, after long training, had anything to do with intelligence.
Intelligence is more about adapting and transferring old knowledge to a new task (playing Quake Arena reasonably well with no training after mastering Doom) than about compressing experience into heuristics that predict outcomes and selecting the action that maximizes the predicted outcome value in a given state (playing Quake Arena reasonably well after a million games once Doom is mastered).
Human intelligence is the ability to adapt to the physical and social world. Playing Go is one particular adaptation performed by human intelligence; developing an algorithm that learns to play Go is a more performant adaptation; developing a mathematical theory of Go might be more performant still.
It makes more sense to compare a human and an AI not by the effectiveness and efficiency of the end product of adaptation (games played between human and agent) but by the effectiveness and efficiency of the process of adaptation (games played between a human-coded agent and a machine-learned agent after limited practice).
Dota 2, StarCraft 2, Civilization 5 and probably even GTA 5 might be solved in the not-so-distant future, but the ability to play any new game at human level with no prior training would be far more significant.
The second biggest issue with AI is the lack of robustness in the long tail of uncommon cases (including critical ones in medicine, self-driving vehicles and finance), which presently can't be handled with accuracy even close to acceptable.
Complex models exploit any patterns that relate input to output variables, but some patterns do not hold for cases poorly covered by training data [10a]. More than 99% of healthcare applications use simple models such as logistic regression with heavily engineered features, for better robustness on outliers [10b][10c][10d]. The clear trend in popular web services is more feature engineering (converting domain knowledge into code that computes statistics, i.e. obtaining better performance from more relevant knowledge), not more complex Deep Learning models.
For an agent in a simulated environment such as Go or Quake, the true model of the environment is either known or available, so the agent can generate any amount of training data. Finding correlations in that data isn't intelligent; for real-world problems, discovering the true model is the key.
For an organism, the real world is not a fixed game with known environment and rules such as Go or Quake but a game whose environment and rules are largely unknown and always changing. It has to adapt to unexpected changes of environment and rules, including changes caused by adversaries. It has to be capable of broad autonomy, as opposed to the mere automation sufficient to play some fixed game.
It might turn out to be impossible to have self-driving vehicles and humanoid robots operating alongside humans without training them to human-level adaptability to the real world. It might turn out to be impossible to have personal assistants substituting for humans in key aspects of their lives without training them to human-level adaptability to the social world.
knowledge vs intelligence
Knowledge is information, such as observations or experiences, compressed and represented in some computable form, such as text in a natural language, a mathematical theory in a semi-formal language, a program in a formal language, the weights of a neural network or the synapses of a brain.
Knowledge is a tool (a theory, an algorithm, a physical process) for solving a problem. Intelligence is about applying (transferring) and creating (learning) knowledge. There is knowledge of how to solve a problem (an algorithm for a computer, instructions for a human); then there is the process of applying knowledge (a computer executing a program, a human interpreting and executing instructions); and then there is the process of creating knowledge (inductive inference/learning from observations and experiments, deductive reasoning from inferred theories and learned models).
Alpha(Go)Zero is much closer to knowledge of how to solve a particular class of problems than to an intelligent agent capable of applying and creating knowledge. It is a search algorithm, like IBM Deep Blue, whose heuristics are not hardcoded but tuned during game sessions. It can't apply learned knowledge to other problems, not even playing on a smaller Go board. It can't create abstract knowledge useful to humans, not even a simple insight on Go tactics.
TD-Gammon, from 1992, is considered by many to be the biggest breakthrough in AI [13a]. Note that TD-Gammon didn't use Q-learning: it used TD(λ) with online on-policy updates. TD-Gammon's author used a variation of it to learn IBM Watson's wagering strategy [13b].
Alpha(Go)Zero is also roughly a variation of TD(λ). TD-Gammon used a neural network trained by Temporal Difference learning, with target values calculated by tree search of depth no more than three, using outcomes of games played to the end as estimates of leaf values. Alpha(Go)Zero used a deep neural network trained by Temporal Difference learning, with target values calculated by Monte-Carlo Tree Search of much greater depth, using estimates of leaf values and a tree policy calculated by the network without playing games to the end.
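The core TD(λ) update behind TD-Gammon can be sketched in a few lines. Below is a minimal tabular version with accumulating eligibility traces; TD-Gammon used a neural network as the value function, and the state names and toy episode here are invented for illustration.

```python
# Minimal tabular TD(lambda) sketch. TD-Gammon used a neural network as
# the value function; this toy value table and episode are illustrative.
def td_lambda_episode(V, episode, alpha=0.1, gamma=1.0, lam=0.7):
    """Update value table V in place from one episode of (state, reward)
    pairs, where reward is received on entering that state."""
    e = {s: 0.0 for s in V}                     # eligibility traces
    for (s, _), (s_next, r) in zip(episode, episode[1:]):
        delta = r + gamma * V[s_next] - V[s]    # TD error
        e[s] += 1.0                             # accumulate trace for s
        for state in V:                         # credit all visited states
            V[state] += alpha * delta * e[state]
            e[state] *= gamma * lam             # decay traces
    return V

V = {"start": 0.0, "mid": 0.0, "win": 0.0}
episode = [("start", 0.0), ("mid", 0.0), ("win", 1.0)]
td_lambda_episode(V, episode)
```

Note how the trace propagates credit for the final win back to "start" within a single episode, which is what lets TD(λ) learn from delayed outcomes.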
The qualitative differences between Backgammon and Go as problems, and between TD-Gammon and Alpha(Go)Zero as solutions (the scale of the neural network and the number of played games being the major differences), are not nearly as big as the qualitative differences between perfect information games such as Go and imperfect information games such as Poker (Alpha(Go)Zero can't be used for Poker, nor Libratus for Go).
IBM Watson, by far the most advanced question answering system in 2011, is not an intelligent agent. It is knowledge represented as hundreds of thousands of lines of hand-crafted logic for searching and manipulating sequences of words and for generating hypotheses and supporting arguments, plus a few hundred parameters tuned with linear regression to weigh different pieces of knowledge for each supported type of question and answer. Conceptually it is not that different from database engines, which use statistics of the data and hard-coded threshold values to construct a plan for executing a given query by selecting and pipelining a subset of implemented algorithms for manipulating data.
Google Duplex must be a heavily engineered solution for very narrow domains and tasks, involving a huge amount of human labor to write rules and label data. Google was reported to employ 100 PhD linguists working just on rules and data for question answering in Google Search and Assistant.
IBM Debater is a heavily engineered solution for finding and summarizing texts relevant to a given topic. It can't answer an opponent's arbitrary questions about its own arguments, because it neither has nor learns any model of the domain it argues about.
what is intelligence
Biologists define intelligence as the ability to find non-standard solutions for non-standard problems, distinguishing it from reflexes and instincts, which are standard solutions for standard problems. Playing Go can't be considered a non-standard problem for AlphaGo after it has played millions of games. Detecting new malware can be considered a non-standard problem, with no human-level solution so far.
The necessity to adapt and survive provides optimization objectives for organisms, guiding self-organization and learning/evolution. Some organisms can set high-level objectives for themselves after being evolved/trained to satisfy low-level objectives.
Most AI researchers focus on a top-down approach to intelligence, i.e. defining an objective for a high-level problem (such as maximizing the expected probability of a win for Alpha(Go)Zero) and expecting the agent to learn good solutions for low-level subproblems. This approach works for relatively simple problems such as games in simulated environments, but it requires an enormous number of training episodes (several orders of magnitude more than an agent can experience in the real world) and leads to solutions incapable of generalization (AlphaGo(Zero) trained on a 19x19 board is useless for a 9x9 board without retraining from scratch). The hardest high-level problems solvable by humans are open-ended: humans don't search a fixed space of possible solutions, unlike AlphaGo. Informed and guided by observations and experiments in the real world, humans come up with good subproblems, e.g. special and general relativity.
A few AI researchers [section "possible directions"] focus on a bottom-up approach, i.e. starting with low-level objectives (such as maximizing the predictability of the environment, or of the agent's effect on the environment), then adding higher-level objectives for intrinsic motivation (such as maximizing learning progress or available future options), and only then adding high-level objectives for problems of interest (such as maximizing the game score). This approach is expected to lead to more generalizable and more robust solutions for high-level problems, because learning with low-level objectives also teaches the agent self-directing and self-correcting behavior helpful in non-standard or dangerous situations about which a high-level objective effectively provides zero information. It is quite possible that some set of universal low-level objectives might be derived from a few equations governing the flow of energy and information, so that optimizing those objectives might lead to the intelligence of computers analogously to how the evolution of the Universe, governed by the laws of physics, leads to the intelligence of organisms.
While solving high-level problems in simulated environments such as Go has had successes, solving low-level problems such as vision and robotics is yet to have them. Humans can't learn to play Go without first learning to discern the board and move the stones. A computer can solve some high-level problems without the ability to solve low-level ones only when humans have abstracted the high-level problems away from their low-level subproblems. It is the low-level problems that are more computationally complex for both humans and computers, although not necessarily more complex as mathematical or engineering problems. And it is the low-level problems that are prerequisites for commonsense reasoning, i.e. estimating the plausibility of a given hypothesis from given observations and previously acquired knowledge, which is necessary for a machine to adapt to an arbitrary environment and solve an arbitrary high-level problem in that environment.
The biggest obstacle to applications in real-world environments, as opposed to simulated environments, seems to be underconstrained optimization objectives. Any sufficiently complex model trained with an insufficiently constrained objective will exploit any pattern found in the training data that relates input to target variables, but spurious correlations won't generalize to testing data. Even a billion examples don't constrain optimization sufficiently and don't lead to major performance gains in image recognition. An agent finds surprising ways to exploit a simulated environment to maximize an objective not constrained enough to prevent exploits.
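The failure mode can be shown with a toy example: a model that latches onto a spurious feature which happens to track the label in the training set fits that set perfectly and collapses to chance on test data where the correlation breaks. All data and the trivial "model" below are synthetic, for illustration only.

```python
# Toy demonstration (synthetic data): a spurious feature perfectly
# predicts the label in training data but is uninformative at test time.
import random
random.seed(0)

def make_data(n, spurious_works):
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        causal = y if random.random() < 0.75 else 1 - y   # weak real signal
        spurious = y if spurious_works else random.randint(0, 1)
        data.append(((causal, spurious), y))
    return data

train = make_data(1000, spurious_works=True)
test = make_data(1000, spurious_works=False)

def predict(x):
    return x[1]          # "model" relies on the spurious feature alone

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(accuracy(train), accuracy(test))   # perfect on train, ~chance on test
```

An optimizer free to choose either feature will prefer the spurious one here, since it has lower training error than the weak causal signal, which is exactly the underconstrained-objective problem.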
One way to constrain optimization sufficiently, in order to avoid non-generalizable and non-robust solutions, is more informative training data, such as using the physics of the real world or the dynamics of the social world as sources of signal, as opposed to simulated environments that are not nearly as complex and not representative of corner cases in the real/social world. Another way is a more complex optimization objective: learning to predict not only statistics of interest, such as future cumulative rewards conditional on the agent's next actions, but also dynamics, i.e. arbitrary future properties of the environment conditional on arbitrary hypothetical future events, including the agent's next actions [28a]. States and rewards correspond to the agent's summaries of its interactions with the environment, while dynamics corresponds to the agent's knowledge about how the environment works [28b]. Progress in learning to predict the dynamics of the environment might be the strongest form of intrinsic motivation for an agent and is yet another way to constrain optimization [28c][28d].
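A minimal version of learning-progress intrinsic motivation can be sketched as follows: the intrinsic reward is the decrease in the prediction error of a learned dynamics model, so the agent is rewarded for experiences that improve its model, not merely for experiences it predicts poorly. The one-parameter dynamics model and the toy environment below are invented stand-ins.

```python
# Toy sketch of learning-progress intrinsic reward: the bonus is the
# improvement in a dynamics model's prediction error, not the error itself.
class ScalarDynamicsModel:
    """Predicts next_state = w * state; fitted online by gradient steps."""
    def __init__(self):
        self.w = 0.0
    def error(self, s, s_next):
        return (s_next - self.w * s) ** 2
    def update(self, s, s_next, lr=0.05):
        self.w += lr * 2 * (s_next - self.w * s) * s

def intrinsic_reward(model, s, s_next):
    before = model.error(s, s_next)
    model.update(s, s_next)
    after = model.error(s, s_next)
    return before - after            # learning progress on this transition

model = ScalarDynamicsModel()
# Hidden true dynamics: s_next = 0.5 * s. Early transitions yield large
# progress; once the model has converged, the bonus vanishes, so the
# agent loses interest in already-understood parts of the environment.
rewards = [intrinsic_reward(model, s, 0.5 * s) for s in [1.0] * 50]
```

The vanishing bonus is the point: pure prediction-error bonuses get stuck on unlearnable noise, whereas progress-based bonuses do not reward what cannot be compressed further.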
There is an enormous gap between the complexity of simulated environments available to present computers and the complexity of real-world environments available to present robots, so an agent trained in a simulated environment can't be transferred to a robot in a real-world environment with acceptable performance and robustness. The Boston Dynamics team never used machine learning to control their robots: they use real-time solvers of differential equations to calculate dynamics and optimal control for models of robots and environments that are specified manually, not learned from data [30a][30b]. MIT researchers didn't use machine learning to control their robot in the DARPA Robotics Challenge 2015, and theirs was the only robot that didn't fall or need physical assistance from humans.
Long tails might not be learnable statistically and might require reasoning at test time. It might be impossible to encode all patterns into the parameters of a statistical model. A phenomenon might not admit separating its data by any number of decision hyperplanes. Not only the statistics but the dynamics of a phenomenon might have to be calculated by the model. The model might have to be programmed and/or trained to simulate the dynamics of the phenomenon.
It's quite possible that the only way to train/evolve an intelligent agent for hard problems in the real world (such as robotics) and in the social world (such as natural language understanding) might turn out to be:
(1) to train/evolve the agent in an environment which provides as many constraints for optimization as the real and social world (i.e. the agent has to be a robot operating in the real world alongside humans);
(2) to train/evolve the agent on problems which provide as many constraints for optimization as the hardest problems solved by organisms in the real world (i.e. the robot has to learn to survive without any assistance from humans) and by humans in the social world (i.e. the agent has to learn to reach goals in the real world using conversations with humans as its only tool).
Arguably, during the Deep Learning renaissance there hasn't been progress in real-world problems such as robotics and language understanding nearly as significant as in fixed games running in simulated environments.
Opinions on the progress of AI research from some of the most realistic researchers:
Deep Learning methods are very non-robust in image understanding tasks.
Deep Learning methods haven't come even close to replacing radiologists.
Deep Learning methods are very non-robust in text understanding tasks.
Deep Learning methods can't solve questions from school science tests significantly better than text search based methods.
Deep Learning methods can't pass the first levels of the hardest Atari game, which requires abstraction and reasoning from the agent.
"Approximating CNNs with Bag-of-local-Features Models Works Surprisingly Well on ImageNet"
"Measuring the Tendency of CNNs to Learn Surface Statistical Regularities"
"Excessive Invariance Causes Adversarial Vulnerability"
"Do ImageNet Classifiers Generalize to ImageNet?"
"Do CIFAR-10 Classifiers Generalize to CIFAR-10?"
"Confounding Variables Can Degrade Generalization Performance of Radiological Deep Learning Models"
"One Pixel Attack for Fooling Deep Neural Networks"
"A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations"
"Semantic Adversarial Examples"
"Why Do Deep Convolutional Networks Generalize so Poorly to Small Image Transformations?"
"The Elephant in the Room"
"Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects"
"Excessive Invariance Causes Adversarial Vulnerability"
"Semantically Equivalent Adversarial Rules for Debugging NLP models"
"On GANs and GMMs"
"Do Deep Generative Models Know What They Don't Know?"
"Are Generative Deep Models for Novelty Detection Truly Better?"
"Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations"
"Simple Random Search Provides a Competitive Approach to Reinforcement Learning"
"Data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and artificial systems."
"Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes"
"Formal Theory of Creativity, Fun, and Intrinsic Motivation"
"By solving a more general problem of physical prediction (to distinguish it from statistical prediction), the input and label get completely balanced and the problem of human selection disappears altogether. The label in such case is just a time shifted version of the raw input signal. More data means more signal, means better approximation of the actual data manifold. And since that manifold originated in the physical reality (no, it has not been sampled from a set of independent and identically distributed gaussians), it is no wonder that using physics as the training paradigm may help to unravel it correctly. Moreover, adding parameters should be balanced out by adding more constraints (more training signal). That way, we should be able to build a very complex system with billions of parameters (memories) yet operating on a very simple and powerful principle. The complexity of the real signal and wealth of high dimensional training data may prevent it from ever finding "cheap", spurious solutions. But the cost we have to pay, is that we will need to solve a more general and complex task, which may not easily and directly translate to anything of practical importance, not instantly at least."
"Rebooting AI — Postulates"
"Intelligence Confuses The Intelligent"
"Intelligence Is Real"
"AI And The Ludic Fallacy"
"The Peculiar Perception Of The Problem Of Perception"
"Statistics And Dynamics"
"Reactive Vs Predictive AI"
"Learning Physics Is The Way To Go"
"Predictive Vision In A Nutshell"
"Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network"
"Fundamental principles of cortical computation: unsupervised learning with prediction, compression and feedback"
"The primary problem in computing today is that computers cannot organize themselves: trillions of degrees of freedom doing the same stuff over and over, narrowly focused rudimentary AI capabilities. Our mechanistic approach to the AI problem is ill-suited to complex real-world problems: machines are the sum of their parts and disconnected from the world except through us, the world is not a machine. Thermodynamics drives the evolution of everything. Thermodynamic evolution is the missing, unifying concept in computing systems. Thermodynamic evolution supposes that all organization spontaneously emerges in order to use sources of free energy in the universe and that there is competition for this energy. Thermodynamic evolution is second law of thermodynamics, except that it adds the idea that in order for entropy to increase an organization must emerge that makes it possible to access free energy. The first law of thermodynamics implies that there is competition for energy."
"The free energy principle seems like an attempt to unify perception, cognition, homeostasis, and action. Free energy is a mathematical concept that represents the failure of some things to match other things they’re supposed to be predicting. The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models. Perception and cognition are both attempts to create accurate models that match the world, thus minimizing free energy. Homeostasis and action are both attempts to make reality match mental models. Action tries to get the organism’s external state to match a mental model. Homeostasis tries to get the organism’s internal state to match a mental model. Since even bacteria are doing something homeostasis-like, all life shares the principle of being free energy minimizers. So life isn’t doing four things – perceiving, thinking, acting, and maintaining homeostasis. It’s really just doing one thing – minimizing free energy – in four different ways – with the particular way it implements this in any given situation depending on which free energy minimization opportunities are most convenient."
"The Free-Energy Principle: A Unified Brain Theory?"
"Action and Behavior: a Free-energy Formulation"
"Computational Mechanisms of Curiosity and Goal-directed Exploration"
"Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop"
"Intelligent system needs to optimize future causal entropy, or to put it in plain language, maximize the available future choices. Which in turn means minimizing all the unpleasant situations with very few choices. This makes sense from evolutionary point of view as it is consistent with the ability to survive, it is consistent with what we see among humans (collecting wealth and hedging on multiple outcomes of unpredictable things) and generates reasonable behavior in several simple game situations."
"All systems perform computations by means of responding to their environment. In particular, living systems compute, on a variety of length- and time-scales, future expectations based on their prior experience. Most biological computation is fundamentally a nonequilibrium process, because a preponderance of biological machinery in its natural operation is driven far from thermodynamic equilibrium. Physical systems evolve via a sequence of input stimuli that drive the system out of equilibrium and followed by relaxation to a thermal bath."
Solving many problems in science and engineering might not need computer intelligence as defined above, if computers are still programmed by humans to solve non-standard problems, as is done today. But some very important (and most hyped) problems, such as robotics (truly unconstrained self-driving) and natural language understanding (a truly personal assistant), might not admit sufficient progress without such intelligence.