AI emerged in the mid-1950s as a sister project to Cognitive Science. Since then, it has developed independently, with occasional convergences and divergences. Sixty years later, the two fields have made tremendous progress - the former by developing powerful statistical learning methods, and the latter through a profusion of detailed empirical tools and results.
Now is the time to reunite these two strands, in the spirit of the early Macy conferences, but with new tools and methods. Here are four reasons why AI needs cognition:
1. Problem formulation: Terms like 'information', 'computability', 'deduction', and 'inference' have received formal definitions, enabling a fruitful mathematical treatment. The term 'intelligence' has not. Why? One reason might be that intelligence evolves in organisms within a particular ecology, which makes it not a uniform property but rather a collection of tricks, each tied to particular environmental and biological problems. If that is so, scientists who study the intelligence embedded in various organisms (humans, animals) are critically important for defining what it is that AI is trying to replicate.
2. Cognitively / neuro-inspired algorithms: Machine learning has replaced hand-crafted AI systems with generic ones whose many parameters are optimized over large datasets. Algorithms constructed on this basis tend to be tightly tied to the data on which they were trained. Human intelligence, in contrast, tends to be much more flexible, data-efficient, and resilient to noise. Reverse-engineering how the human brain achieves this can lead to novel and more usable algorithms.
3. Evaluation & explanation: Current Deep Learning systems solve problems in ways that can be challenging to understand. Cognitive scientists have devised ingenious methods to study the components of intelligence in various organisms, providing the equivalent of unit tests for these components. These can be used as diagnostic tests, or even as objective functions, to improve Deep Learning architectures.
4. Humans in the loop: Many AI applications are meant to be used by humans. If they are developed without humans and society in the loop, we can end up with dysfunctional technologies with potentially damaging consequences. Cognitive Science can help define metrics to measure the cognitive impact of AI applications and how they are perceived, understood, and valued across cultures and contexts.
Conversely, AI can bring to Cognitive Science quantitative models that can be viewed as implemented theories of various cognitive capacities, such as language, vision, and decision making. These models can help refine our understanding of how the mind and brain work.
It is widely assumed that humans handle linguistic productivity by means of algebraic compositional rules: are deep networks similarly compositional? After reviewing the main innovations characterizing current deep language processing networks, I discuss a set of studies suggesting that deep networks are capable of subtle grammar-dependent generalizations, but also that they do not rely on systematic compositional rules.
The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning.
Tomas Mikolov, Armand Joulin, Marco Baroni