Embodied cognition is a theory stating that the nature of the human mind is largely determined by the form of the human body. Philosophers, psychologists, cognitive scientists, and artificial intelligence researchers who study the ‘embodied mind’ argue that all aspects of cognition are shaped by the body, including high-level mental constructs (e.g. concepts and categories) and performance on various intellectual tasks (e.g. reasoning and judgment). These mental processes are constrained by physical ones, such as the motor and perceptual systems, the body’s interactions with the environment (situatedness), and the ontological assumptions about the world that are built into the body and the brain.
In social psychology, embodiment is relevant to studies of social interaction and decision-making. According to embodied cognition, the motor system influences our cognition, just as the mind influences bodily actions. For example, when participants hold a pencil in their teeth engaging the muscles of a smile, they comprehend pleasant sentences faster than unpleasant ones, while holding a pencil between their nose and upper lip to engage the muscles of a frown has the reverse effect.
The embodied mind thesis is opposed to other theories of cognition such as cognitivism (information-processing psychology), computationalism (thinking as a form of computing), and Cartesian dualism (the mind as a nonphysical substance). The idea has roots in the work of German philosopher Immanuel Kant and in 20th-century continental philosophy (such as French phenomenologist Maurice Merleau-Ponty). The modern version depends on insights drawn from psychology, linguistics, cognitive science, dynamical systems, artificial intelligence, robotics, and neurobiology.
Cognitive scientist and linguist George Lakoff and his collaborators (including Mark Johnson) have written a series of books promoting and expanding the thesis based on discoveries in cognitive science, such as conceptual metaphor (the understanding of one idea, or conceptual domain, in terms of another, e.g. quantity and direction as in ‘prices are rising’) and image schema (mental patterns that provide understanding of various experiences by acting as a source domain to provide an understanding of yet other experiences).
Robotics researchers such as Rodney Brooks, Hans Moravec, and Rolf Pfeifer have argued that true artificial intelligence can only be achieved by machines that have sensory and motor skills and are connected to the world through a body. Neuroscientists Gerald Edelman, António Damásio and others have outlined the connection between the body, individual structures in the brain and aspects of the mind such as consciousness, emotion, self-awareness, and will. Biology also inspired English social scientist and linguist Gregory Bateson to develop a closely related version of the idea, which he and his colleagues called ‘enactivism’ (the theory that cognition depends on a dynamic interaction between a cognitive organism and its environment). The motor theory of speech perception proposed by Yale psychologist Alvin Liberman and colleagues at the Haskins Laboratories for research on spoken and written language argues that the identification of words is embodied in perception of the bodily movements by which spoken words are made.
In his pre-critical period, philosopher Immanuel Kant advocated a similar embodied view of the mind-body problem as part of his ‘Universal Natural History and Theory of the Heavens’ (1755). George Santayana, Martin Heidegger, and others in the broadly existential tradition have proposed philosophies of mind that influenced the development of the modern ‘embodiment’ thesis. The embodiment movement in AI has, in turn, fueled the embodiment argument in philosophy, and it has given emotions a new status in philosophy of mind as an indispensable constituent of, not a non-essential addition to, rational intellectual thought. In philosophy of mind, the idea that cognition is embodied is sympathetic to other views of cognition such as situated cognition and externalism; taken together, these represent a radical move toward relocating mental processes beyond the purely neural domain.
One embodied cognition study shows that action intention can affect processing in visual search, with more orientation errors for pointing than for grasping. Participants either pointed to or grasped target objects of two colors and two orientations (45° and 135°). Randomized numbers of distractors, which differed from the target in color, orientation, or both, were also present. A tone informed participants which target orientation to find. Participants kept their eyes on a fixation point until it turned from red to the target color. The screen then lit up and the participants searched for the target, either pointing to it or grasping it (depending on the block). Results show that accuracy decreased as the number of distractors increased. Overall, participants made more orientation errors than color errors. Because orientation is important in grasping an object, these results fit the researchers’ hypothesis that the plan to grasp an object aids orientation accuracy.
Internal states can affect distance perception, which relates to embodied cognition. Researchers randomly assigned college student participants to high-choice, low-choice, and control conditions. The high-choice condition signed a ‘freedom of choice’ consent form indicating their decision to wear a Carmen Miranda costume and walk across a busy area of campus. Low-choice participants signed an ‘experimenter choice’ consent form, indicating the experimenter assigned the participant to wear the costume. A control group walked across campus but did not wear a costume. At the conclusion of the experiment, each participant completed a survey which asked them to estimate the distance they walked. The high-choice participants perceived the distance walked as significantly shorter than participants in the low-choice and control groups, even though they walked the same distance. The manipulation caused high-choice participants to feel responsible for the choice to walk in the embarrassing costume. This created cognitive dissonance, which refers to discomfort stemming from a discrepancy between attitudes and behaviors. High-choice participants reconciled their thoughts and actions by perceiving the distance as shorter.
Some researchers extend embodied cognition to include language. They describe language as a tool that aids in broadening our sense of body. For instance, when asked to identify ‘this’ object, participants most often choose an object near to them; conversely, when asked to identify ‘that’ object, they choose an object farther away. Language allows us to distinguish between distances in more complex ways than the simple perceptual difference between near and far objects.
A series of experiments demonstrated the interrelation between motor experience and high-level reasoning. For example, although most individuals recruit visual processes when presented with spatial problems such as mental rotation tasks, motor experts favor motor processes to perform the same tasks, with higher overall performance. A related study showed that motor experts use similar processes for the mental rotation of body parts and polygons, whereas non-experts treated these stimuli differently. These results were not due to underlying confounds, as demonstrated by a training study showing mental rotation improvements after one year of motor training, compared with controls. Similar patterns were found in working memory tasks: the ability to remember movements was greatly disrupted by a secondary verbal task in controls and by a motor task in motor experts, suggesting that different processes are used to store movements depending on motor experience, namely verbal processes for controls and motor processes for experts.
Some social psychologists examined embodied cognition and hypothesized that it would be supported by ‘embodied rapport’ (thoughts and actions in sync with another person). Embodied rapport would be demonstrated by pairs of same-sex strangers using Aron’s paradigm, which instructs participants to alternate asking certain questions and to progressively self-disclose. The researchers predicted that participants would mimic each other’s movements, reflecting embodied cognition. Half the participants completed a control task of reading and editing a scientific article, while the other half completed a shortened version of Aron’s self-disclosure paradigm. There was a significant correlation between self-disclosure and positive emotions towards the other participant. Participants randomly assigned to the self-disclosure task displayed more behavioral synchrony (rated by independent judges watching tapes of each condition on mute) and reported more positive emotions than the control group. Since bodily movements influence the psychological experience of the task, the relationship between self-disclosure and positive feelings towards one’s partner may be an example of embodied cognition.
Embodied cognition may also be defined from the perspective of evolutionary psychologists, who view emotion as a motivator towards goal-relevant action and self-regulation; emotion therefore helps drive adaptive behavior. The evolutionary perspective cites language, both spoken and written, as a type of embodied cognition. Pacing and non-verbal communication reflect embodied cognition in spoken language. Technical aspects of written language, such as italics, all caps, and emoticons, promote an inner voice and thereby a sense of feeling, rather than merely thinking, about a written message.
George Lakoff and his collaborators have developed several lines of evidence suggesting that people use their understanding of familiar physical objects, actions, and situations (such as containers, spaces, or trajectories) to understand more complex domains (such as mathematics, relationships, or death). Lakoff and Mark Johnson showed that humans use metaphor ubiquitously, that metaphors operate at a conceptual level (i.e., they map one conceptual domain onto another), that they involve an unlimited number of individual expressions, and that the same metaphor is used conventionally throughout a culture. Lakoff and his collaborators have collected thousands of examples of conceptual metaphors in many domains. For example, people typically use the language of journeys to discuss the history and status of a love affair, a metaphor Lakoff and Johnson call ‘LOVE IS A JOURNEY.’ It is used in such expressions as ‘we arrived at a crossroads,’ ‘we parted ways,’ ‘we hit the rocks’ (as in a sea journey), ‘she’s in the driver’s seat,’ or, simply, ‘we’re together.’ In cases like these, something complex (a love affair) is described in terms of something that can be done with a body (traveling through space).
According to Lakoff, ‘prototypes’ are typical members of a category: a robin, for example, is a prototypical bird, while a penguin is not. The role of prototypes in human cognition was first identified and studied by UC Berkeley psychologist Eleanor Rosch in the 1970s. She showed that prototypical objects are more easily categorized than non-prototypical objects, and that people answer questions about a category as a whole by reasoning about a prototype. She also identified basic-level categories: categories whose prototypes are easily visualized (such as a chair) and are associated with basic physical motions (such as sitting). Prototypes of basic-level categories are used to reason about more general categories. Prototype theory has been used to explain human performance on many different cognitive tasks and in a large variety of domains. Lakoff argues that it shows the categories people use are based on the experience of having a body and bear no resemblance to logical classes or types, which means that traditional objectivist accounts of truth cannot be correct.
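Prototype-based categorization can be illustrated with a minimal nearest-prototype classifier: a category is represented by a single prototype vector, and new items join the category whose prototype they most resemble. This is only a sketch; the feature vectors, categories, and distance measure below are hypothetical illustrations, not Rosch’s actual stimuli or model.

```python
import math

# Hypothetical feature vectors (wingspan_norm, flies, sings);
# the numbers and categories are invented for this sketch.
prototypes = {
    "bird": (0.3, 1.0, 1.0),   # robin-like prototype
    "fish": (0.0, 0.0, 0.0),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def categorize(item):
    """Assign an item to the category with the nearest prototype."""
    return min(prototypes, key=lambda c: distance(item, prototypes[c]))

robin = (0.35, 1.0, 1.0)    # very close to the bird prototype
penguin = (0.6, 0.0, 0.0)   # flightless, songless: far from the prototype

print(categorize(robin))                      # bird
print(distance(robin, prototypes["bird"]))    # small: easy, fast categorization
print(distance(penguin, prototypes["bird"]))  # large: atypical, harder case
```

In this toy model the penguin lies far from the ‘bird’ prototype (and in fact nearer the ‘fish’ one), mirroring Rosch’s finding that non-prototypical members are categorized more slowly and less reliably.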
The experience of AI research provides another line of evidence supporting the embodied mind thesis. In the early history of AI, successes in programming high-level reasoning tasks such as chess-playing led to unfounded optimism that all AI problems would be solved relatively quickly. These programs simulated intelligence using logic and high-level abstract symbols (an approach called ‘good old-fashioned AI’). This ‘disembodied’ methodology ran into serious difficulties in the 1970s and 80s, as researchers discovered that abstract reasoning was highly inefficient and could not achieve human levels of competence on many simple tasks. Funding agencies (such as DARPA) withdrew support because the field had failed to achieve its stated objectives, leading to a difficult period now known as the ‘AI winter.’ Many AI researchers began to doubt that high-level symbolic reasoning could ever perform well enough to solve simple problems.
MIT roboticist Rodney Brooks argued in the mid-1980s that these symbolic approaches were failing because researchers did not appreciate the importance of sensorimotor skills to intelligence in general. He applied these principles to robotics in an approach he called ‘Nouvelle AI.’ Another successful new direction was neural networks: programs based on the actual neural structures in the human body that give rise to intelligence and learning. In the 1990s, statistical AI achieved high levels of success in industry without using any symbolic reasoning, instead using probabilistic techniques to make ‘guesses’ and improve them incrementally. This process is similar to the way human beings make fast, intuitive choices without stopping to reason symbolically.
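The ‘guess and improve incrementally’ style can be sketched with a minimal online learner. The example below is a plain perceptron, offered as a hedged illustration rather than any specific historical system; the training data, learning rate, and epoch count are arbitrary choices made for the sketch.

```python
# Minimal online learner: start with an initial "guess" (zero weights)
# and nudge the weights after every wrong prediction, with no symbolic
# rule ever encoded. All parameters here are illustrative.

def predict(weights, bias, x):
    """Threshold unit: fire (1) if the weighted sum exceeds zero."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0          # the initial guess
    for _ in range(epochs):
        for x, label in samples:
            error = label - predict(weights, bias, x)
            # Incremental correction: adjust only when the guess was wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND purely from examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])   # [0, 0, 0, 1]
```

The learner never represents the rule ‘both inputs must be 1’ explicitly; the correct behavior emerges from repeated small corrections, which is the sense in which statistical approaches ‘guess’ and improve.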
‘Moravec’s paradox’ is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, while low-level sensorimotor skills require enormous computational resources. The principle was articulated at Carnegie Mellon by robotics pioneer Hans Moravec (whence the name) and others in the 1980s. Moravec wrote: ‘Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.’
Many artificial intelligence researchers have argued that a machine may need a human-like body to think and speak as well as a human being. As early as 1950, Alan Turing wrote: ‘It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc.’ Rodney Brooks took the first real steps toward integrating embodiment theory and AI. He showed in the 1980s that robots could be more effective if they ‘thought’ (planned or processed) and perceived as little as possible: the robot’s intelligence should be geared toward handling only the minimal amount of information necessary to make its behavior appropriate, or as desired by its creator.
Psychologist Margaret Wilson has boiled embodied cognition down into six main points:
1) ‘Cognition is situated. Cognitive activity takes place in the context of a real-world environment, and inherently involves perception and action.’ One example of this is moving around a room while, at the same time, trying to decide where the furniture should go.
2) ‘Cognition is time-pressured. We are ‘mind on the hoof’ (Clark, 1997), and cognition must be understood in terms of how it functions under the pressure of real-time interaction with the environment.’ When you’re under pressure to make a decision, the choice that is made emerges from the confluence of pressures that you’re under. In the absence of pressure, a decision may be made differently.
3) ‘We off-load cognitive work onto the environment. Because of limits on our information-processing abilities (e.g., limits on attention and working memory), we exploit the environment to reduce the cognitive workload. We make the environment hold or even manipulate information for us, and we harvest that information only on a need-to-know basis.’ This is seen when people have calendars, agendas, PDAs, or anything to help them with everyday functions. We write things down so we can use the information when we need it, instead of taking the time to memorize or encode it into our minds.
4) ‘The environment is part of the cognitive system. The information flow between mind and world is so dense and continuous that, for scientists studying the nature of cognitive activity, the mind alone is not a meaningful unit of analysis.’ This statement means that the production of cognitive activity does not come from the mind alone, but rather is a mixture of the mind and the environmental situation that we are in. These interactions become part of our cognitive systems. Our thinking, decision-making, and future are all impacted by our environmental situations.
5) ‘Cognition is for action. The function of the mind is to guide action and things such as perception and memory must be understood in terms of their contribution to situation-appropriate behavior.’ This claim concerns the purpose of perception and cognition. For example, visual information is processed to extract identity, location, and affordances (ways that we might interact with objects). A prominent anatomical distinction is drawn between the ‘what’ (ventral) and ‘where’ (dorsal) pathways in visual processing. However, the commonly labeled ‘where’ pathway is also the ‘how’ pathway, at least partially dedicated to action.
6) ‘Off-line cognition is body-based. Even when decoupled from the environment, the activity of the mind is grounded in mechanisms that evolved for interaction with the environment, that is, mechanisms of sensory processing and motor control.’ This is best illustrated by infants and toddlers, who use the skills and abilities they were born with, such as sucking, grasping, and listening, to learn more about the environment. These skills fall into five main categories of sensorimotor functions that combine sensory and motor abilities: mental imagery (visualizing something that is not currently present in the environment), working memory (short-term memory), episodic memory (long-term memory), implicit memory (learned skills that have become all but automatic), and reasoning and problem-solving.