Connectionism


Connectionism [kuh-nek-shuh-niz-uhm] is the theory that connections (such as those between brain cells) mediate thought and govern behavior. It is a set of approaches in the fields of artificial intelligence and cognitive science that model mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use artificial neural network models.

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses. Neural networks can learn from experience, in contrast with conventional computer programs, which do only what they are explicitly programmed to do.
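To make the idea of a unit concrete, here is a minimal sketch in Python. The weighted-sum-plus-sigmoid form, the names, and the numbers are illustrative assumptions, not the definition used by any particular connectionist model.

import math

def unit_activation(inputs, weights, bias=0.0):
    """One simple unit: squash the weighted sum of its inputs into (0, 1).
    The entries of `weights` play the role of the connections (synapses);
    the sigmoid squashing function is just one common, illustrative choice."""
    net_input = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net_input))

# A unit receiving signals from three other units over weighted connections.
print(unit_activation([0.9, 0.2, 0.7], [0.5, -0.3, 0.8]))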

In most connectionist models, networks change over time. A closely related and very common aspect of connectionist models is activation. At any time, a unit in the network has an activation, which is a numerical value intended to represent some aspect of the unit. For example, if the units in the model are neurons, the activation could represent the probability that the neuron would generate an action potential spike. Over time, a unit’s activation spreads to all the other units connected to it. Spreading activation is always a feature of neural network models, and it is very common in connectionist models used by cognitive psychologists.
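As a rough illustration of spreading activation, the toy network below passes activation along weighted connections for a few time steps. The graph, the decay factor, and the number of steps are illustrative assumptions rather than parameters of any published model.

# Toy spreading-activation network; structure and parameters are illustrative.
weights = {                      # weights[src][dst]: strength of src -> dst
    'dog':    {'animal': 0.8, 'bark': 0.6},
    'animal': {'dog': 0.8, 'cat': 0.5},
    'bark':   {},
    'cat':    {},
}
activation = {unit: 0.0 for unit in weights}
activation['dog'] = 1.0          # an external stimulus activates one unit
decay = 0.5                      # how much of a unit's own activation persists
for step in range(3):
    new_activation = {}
    for unit in weights:
        incoming = sum(w * activation[src]
                       for src, outgoing in weights.items()
                       for dst, w in outgoing.items() if dst == unit)
        new_activation[unit] = decay * activation[unit] + incoming
    activation = new_activation
    print(step, {u: round(a, 2) for u, a in activation.items()})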

The neural network branch of connectionism suggests that the study of mental activity is really the study of neural systems. This links connectionism to neuroscience, and models involve varying degrees of biological realism. Connectionist work in general need not be biologically realistic, but some neural network researchers, known as computational neuroscientists, try to model the biological aspects of natural neural systems very closely in so-called ‘neuromorphic networks.’ Many authors find the clear link between neural activity and cognition to be an appealing aspect of connectionism, although this has also been criticized as reductionist.

The prevailing connectionist approach today was originally known as parallel distributed processing (PDP). It was an artificial neural network approach that stressed the parallel nature of neural processing and the distributed nature of neural representations. It provided a general mathematical framework for researchers to operate in. A perceived limitation of PDP is that it is reductionistic: that is, it holds that all cognitive processes can be explained in terms of neural firing and communication. Much of the research that led to the development of PDP was done in the 1970s, but PDP became popular in the 1980s with the release of the two-volume work ‘Parallel Distributed Processing: Explorations in the Microstructure of Cognition’ by James L. McClelland, David E. Rumelhart, and the PDP Research Group. The volumes are now considered seminal connectionist works, and it is now common to fully equate PDP and connectionism, although the term ‘connectionism’ is not used in the books.

PDP’s direct roots were the perceptron theories of researchers such as Frank Rosenblatt from the 1950s and 1960s. But perceptron models fell out of favor after the book ‘Perceptrons’ by Marvin Minsky and Seymour Papert, published in 1969, which demonstrated the limits on the sorts of functions that single-layered perceptrons can compute. Many earlier researchers advocated connectionist-style models, for example, in the 1940s and 1950s, Warren McCulloch, Walter Pitts, and Donald Olding Hebb. McCulloch and Pitts showed how neural systems could implement first-order logic. They were influenced by the important work of Nicolas Rashevsky in the 1930s. Hebb contributed greatly to speculations about neural functioning and proposed a learning principle, Hebbian learning, that is still used today.
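As a rough sketch of the Hebbian principle (often summarized as ‘units that fire together, wire together’), the connection weight below grows in proportion to the product of the two units’ activations. The learning rate and the toy activation patterns are illustrative assumptions.

# Hebbian learning sketch: strengthen a connection when both units are active.
def hebbian_update(weight, pre_activation, post_activation, learning_rate=0.1):
    return weight + learning_rate * pre_activation * post_activation

weight = 0.0
patterns = [(1.0, 1.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # (pre, post) pairs
for pre, post in patterns:
    weight = hebbian_update(weight, pre, post)
print(round(weight, 2))  # grew only on the steps where both units were active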

Many connectionist principles can be traced to early work in psychology, such as that of William James. Psychological theories based on knowledge about the human brain were fashionable in the late 19th century. As early as 1869, the neurologist John Hughlings Jackson argued for multi-level, distributed systems. Following this lead, Herbert Spencer’s ‘Principles of Psychology’ and Sigmund Freud’s ‘Project for a Scientific Psychology’ propounded connectionist or proto-connectionist theories. These tended to be speculative theories. But by the early 20th century, Edward Thorndike was conducting experiments on learning that posited a connectionist-type network. In the 1950s, Friedrich Hayek proposed that spontaneous order in the brain arose out of decentralized networks of simple units. Hayek’s work was rarely cited in the PDP literature until recently.

As connectionism became increasingly popular in the late 1980s, there was a reaction to it by some researchers, including Jerry Fodor, Steven Pinker, and others. They argued that connectionism, as it was being developed, was in danger of obliterating what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach of computationalism (the view that the human mind and/or human brain is an information processing system and that thinking is a form of computing). Computationalism is a specific form of cognitivism that argues that mental activity is computational, that is, that the mind operates by performing purely formal operations on symbols, like a Turing machine (an abstract device that manipulates symbols on a strip of tape according to a table of rules). Some researchers argued that the trend in connectionism was a reversion toward associationism (a view of mental processes that bears little resemblance to modern neurophysiology) and the abandonment of the idea of a language of thought, something they felt was mistaken. However, it was those very tendencies that made connectionism attractive for other researchers.

Connectionism and computationalism need not be at odds, but the debate in the late 1980s and early 1990s led to opposition between the two approaches. Throughout the debate, some researchers have argued that connectionism and computationalism are fully compatible, though full consensus on this issue has not been reached. The differences between the two approaches that are usually cited are the following: Computationalists posit symbolic models that do not resemble underlying brain structure at all, whereas connectionists engage in ‘low-level’ modeling, trying to ensure that their models resemble neurological structures. Computationalists in general focus on the structure of explicit symbols (mental models) and syntactical rules for their internal manipulation, whereas connectionists focus on learning from environmental stimuli and storing this information in the form of connections between neurons. Computationalists believe that internal mental activity consists of the manipulation of explicit symbols, whereas connectionists believe that the manipulation of explicit symbols is a poor model of mental activity. Computationalists often posit domain-specific symbolic sub-systems designed to support learning in specific areas of cognition (e.g., language, intentionality, number), whereas connectionists posit one or a small set of very general learning mechanisms.

But, despite these differences, some theorists have proposed that the connectionist architecture is simply the manner in which the symbol manipulation system happens to be implemented in the organic brain. This is logically possible, as it is well known that connectionist models can implement symbol manipulation systems of the kind used in computationalist models, as indeed they must be able to if they are to explain the human ability to perform symbol manipulation tasks. But the debate rests on whether this symbol manipulation forms the foundation of cognition in general, so this is not a potential vindication of computationalism. Nonetheless, computational descriptions may be helpful high-level descriptions of some aspects of cognition, such as logical reasoning.

The debate largely centred on logical arguments about whether connectionist networks were capable of producing the syntactic structure observed in this sort of reasoning. This was later achieved, although using processes unlikely to be possible in the brain, and thus the debate persisted. Today, progress in neurophysiology and general advances in the understanding of neural networks have led to the successful modelling of a great many of these early problems, and the debate about fundamental cognition has thus largely been decided among neuroscientists in favor of connectionism. However, these fairly recent developments have yet to reach consensus acceptance among those working in other fields, such as psychology or philosophy of mind.

Part of the appeal of computational descriptions is that they are relatively easy to interpret, and thus may be seen as contributing to our understanding of particular mental processes, whereas connectionist models are in general more opaque, to the extent that they may be describable only in very general terms (such as specifying the learning algorithm, the number of units, etc.) or in unhelpfully low-level terms. In this sense connectionist models may instantiate, and thereby provide evidence for, a broad theory of cognition (i.e., connectionism) without representing a helpful theory of the particular process that is being modelled. The debate might therefore be seen as reflecting, to some extent, a mere difference in the level of analysis at which particular theories are framed.

The recent popularity of dynamical systems in philosophy of mind has added a new perspective on the debate; some authors now argue that any split between connectionism and computationalism is better characterized as a split between computationalism and dynamical systems: a concept in mathematics in which a fixed rule describes the time dependence of a point in a geometrical space (e.g., the motion of a pendulum or the flow of water in a pipe). The recently proposed hierarchical temporal memory (HTM) model may help to resolve this dispute, at least to some degree, given that it explains how the neocortex extracts high-level (symbolic) information from low-level sensory input. HTM is a machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex. It is a biomimetic model based on the memory-prediction theory of brain function described by Hawkins in his book ‘On Intelligence.’ HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.
