Decision Awareness
In the early 1980s, Benjamin Libet, a neuroscientist at the University of California, San Francisco, conducted a series of pioneering experiments designed to study properties of conscious awareness. By this time it was already known that signals from the motor cortex region of the brain travel through the nervous system to muscles in the body to trigger movement, and that there was a lag between when neurons in the motor cortex fire and when the movement occurs, reflecting the time it takes signals to travel through the nervous system. Libet was interested in what happens before neurons in the motor cortex fire. It was hypothesized that prior to a “voluntary” movement, the decision to initiate the movement occurs elsewhere in the brain, and this eventually culminates in a signal sent to the motor cortex to actually initiate the movement.
Insight into where in the brain conscious decisions are actually made is limited even today. Decision making appears to spark activity all over the brain, suggesting that there is no single “decision-making centre” in the brain. Nonetheless, it would be interesting to measure the lag between when a conscious decision is made and when the motor cortex fires. This timing might give us a better idea of how quickly a conscious decision can be executed, and then we can compare this timing to say, the speed at which non-voluntary responses can happen. There may also be differences in how fast different types of conscious decisions can be made. Such timings may lead to insights into how and where in the brain decisions are made, how much processing is required, and maybe even the steps involved.
The only problem is that while it’s relatively easy to measure the timing of a movement of the body, and straightforward – though admittedly more involved – to measure the firing of neurons in the motor cortex, how can we measure when a conscious decision is made? Libet’s genius was in coming up with a potential solution to this problem: Ask. Now granted, asking subjects when they made a conscious decision may introduce a very large margin of error, but it’s a start. Even a rough idea of the process involved would be better than nothing.
So the experiment was conducted as follows: Participants were asked to do something simple to measure the timing of, such as pressing a button. Electrodes connected to their scalp measured the motor cortex activity that precedes the action. Finally, there was a timing device – subjects were instructed to pay attention to the clock and record when they became consciously aware of having made the decision to perform the action. In order to study conscious decisions, Libet set up the experiment so as to give subjects full freedom to make the decision on their own: There were no time limits, subjects were not prompted by any outside signals, there was no pressure to press the button any given number of times. Subjects simply pressed the button “whenever they felt like it”, of their own volition. This was essentially a “free-will” conscious decision.
The resulting dataset for each run of the experiment, then, contained 3 timings: The time at which the subject reported becoming aware of making the decision to move, the time at which the activity in the motor cortex was detected, and the time at which the button was pressed. The lag between the motor cortex activity and the resulting movement was already known from prior experiments: approximately 500ms. What we’re interested in, and what the experiment was designed to measure, is the lag between when the decision is made and when the motor cortex fires. The results were somewhat surprising.
Variations of this experiment have been carried out many times since, and in many different labs. The results are consistent with Libet’s findings. Modern experiments can add another timing to the dataset that Libet did not have access to: With the help of fMRI, it is possible to measure the timing of activity all over the brain, and a computer program can then be used to identify brain activity that consistently predicts the decision – ie, this effectively measures when and where the actual decision is made in the brain without having to rely on subjective reports.
In the dataset, we have timings corresponding to the measured events in the decision execution process. Suppose that the run starts at point 0ms and the button is pressed at point 1000ms. We already know that motor cortex activity begins at point 500ms, and the decision is made some amount of time before this. What we’re missing is when the subject reports becoming aware of the decision being made. The results consistently place this final point after the motor cortex activity had already begun – around point 800ms.
What does this mean? With this result, the process of decision making appears to be as follows: First, the decision is made, then the motor cortex is signalled to initiate the movement, then the subject becomes aware of the decision being made, and finally the action takes place. One notable consequence of this result is that the conscious mind does not appear to be involved in the decision-making process – it learns of the decision made by the unconscious mind some time after the fact. The fMRI experiments mentioned above have in fact demonstrated that some decisions are made in the brain several seconds (up to 7 seconds in some experiments, and even 10 seconds in others) before subjects become aware of them. This is strange though, as most people believe that their conscious mind does initiate decision making, and not the other way around. There is something profoundly disturbing about the idea that the radiologist watching your fMRI scan can become consciously aware of your decision several seconds before even you become aware of it!
Puppet and Puppeteer
A couple of theories emerge from these results to describe the role of conscious awareness in the decision-making process. I use a corresponding pair of simple analogies to describe them. The first one is called the “computer model” of the brain. In this model, the decision-making part of the brain is represented by the processor of the computer (the stuff inside the computer box), conscious awareness is represented by the monitor (the screen attached to the computer), and the action is represented by the printer (also attached to the computer). In this model, the decision is made in the computer by the processor. After the decision is made, signals are sent both to the screen, and to the printer. The screen receives its signals first, and shows the decision-making criteria and process. Note that although the screen shows the full process leading to the decision being made, the actual decision is not made in the monitor – it is made by the computer, and then reflected on the screen some (short) period of time later. The printer receives its signals last, and executes the decision to print the result of the calculation, completing the decision execution process.
In this model, conscious awareness is a direct, accurate, and complete – though slightly delayed – representation of the unconscious decision-making process. So while our conscious awareness may not actually initiate decisions, it is fully representative of the part of the mind that does.
The alternative model is called the “puppet model” of the brain. In this scenario, consider a puppet (marionette) on a stage. Above the stage, in the balcony, is the puppeteer, holding the strings that manipulate the puppet. The puppet is part of an elaborate play, interacting with other puppets on the stage. Puppets represent characters, with distinct and consistent personalities, and their interactions and dialogue are guided by a storyline. Our puppet is consciously aware of itself and its surroundings – it can see and hear itself and the other puppets around it.
As the puppet moves and talks, it sees itself moving and talking. Accordingly, it identifies itself with the puppet’s physical attributes – “that’s me!” – and with its actions – “I’m the one moving, I’m the one talking”. As its personality becomes apparent by its actions and interactions, the puppet comes to identify its personality – “I’m this type of person, I have this personality”. As its actions are consistent with its personality, the puppet eventually internalizes its actions as its own volition.
If you ask the puppet “who is making the decisions that guide your actions?”, the puppet replies, “clearly, I am the one making these decisions, and initiating these actions.” However, anyone in the audience of the play can see that the one in charge is actually the puppeteer, and the puppet is merely under an illusion of volition brought about by the well-executed play and an over-active imagination. In this model, we can see that the decision-making process and action execution are both handled by the puppeteer, and the puppet, representing conscious awareness, is not directly connected to this process at all, but rather learns of it after the fact – much like the audience – and then retroactively rationalizes that it must have initiated its actions on its own, through some decision-making process that it internalizes as having performed itself.
The “puppet model” suggests that not only does conscious awareness not initiate decision making, but that it is also completely disconnected from the decision-making process. Under this model, our conscious mind has no more insight into our own decision-making process than any other person (audience member). Any insight we appear to have is a retroactive reconstruction of the decision-making process that we assume must have taken place. In the Libet experiment: “I see that I pressed the button, therefore I must have made the decision to press the button, and that decision must have been made some time before the button was pressed.”
These theories are both interesting interpretations of the results of the experiments conducted by Benjamin Libet and many others since. But how to test them? What sort of experiment can we conduct to determine which theory might be correct? In 2009, an experiment by William Banks and Eve Isham provided the answer.
In their variation of the original Libet experiment, Banks and Isham added a tone played some time after the button was pressed. The timing of the tone varied in different runs of the experiment, up to 60ms after the button was pressed. So now the sequence is: a decision is made, motor cortex activity follows, the subject reports the time of awareness of the decision being made, the button is pressed, and then a tone is played. The results were, again, somewhat surprising.
In order to understand how to interpret the results, it is important to remember the following: Events that happen in the future cannot affect those that happened in the past. That is to say, the timing of events that happen later in the process cannot affect the timing of events that occurred before them. The results were that the later the tone was played, the later the subjects reported their awareness of the decision being made.
Again, events that happen in the future cannot affect those that happened in the past! So how is it possible that a tone played after the button was pressed affected the subject’s awareness of the decision being made? The answer is that the subject’s awareness of the decision must itself arise after the tone – that is, after the action has already been executed. In other words, the “puppet model” is correct: “I see that I pressed the button, therefore I must have made the decision to press the button, and that decision must have been made some time before the button was pressed.”
Neural Networks
In the Libet experiment, it appears that subjects only become consciously aware of a decision after the action has already been executed. However, how does this finding apply to real-life decision making outside of the laboratory? For example, if I plan a trip somewhere, don’t I know where I’m going to go before going there? Rather than implying that all decision making is completely opaque to us, the “puppet model” simply suggests that there are limits to our insight into our own decision-making process. It would be interesting to learn more about these limits, why they exist, and how they affect the validity of introspection.
When studying a complex phenomenon in science, it is often useful to have a model for simulating it. For example, when studying the behaviour of large buildings during earthquakes, it’s much cheaper and easier to set up models of the buildings, and simulate earthquakes using hydraulics. If the simulation behaves the same as the real thing, then we know we have a good model to work with. Studying the way a living brain physically works to make decisions is very difficult to do in a manner that doesn’t risk injury. To some degree, we can study the physical process in animals, but even working with small and simple brains can be very technically challenging and complex. So, starting with the introduction of computers in the 1940s, neuroscientists have been working with computer scientists to develop electronic simulations of brain functions, called artificial neural networks.
Like their biological counterparts, artificial neural networks are composed of a large number of “neurons”, each with inputs and outputs connected to other neurons. Artificial neurons follow rules, expressed as mathematical functions, for how to convert their inputs into outputs. How accurately neural networks model real brain function depends on how they are interconnected, what rules they use to communicate with each other, and the size and organization of each layer. Since artificial networks were modelled on biological ones from the beginning, we expect them to have similar properties, and experience supports this: Somehow, by simply combining artificial neurons, each of which performs a very simple, well defined function, in large numbers to form neural networks, they spontaneously demonstrate common brain functions such as pattern recognition, learning, problem solving, and fault tolerance. Neural networks have been in use for many years for self-learning pattern recognition applications, such as handwriting and speech recognition, identifying images, reading signs, and detecting obstacles.
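To make the idea concrete, here is a minimal sketch of a single artificial neuron in Python – a generic illustration rather than any particular library or brain model, with the inputs, weights, bias, and sigmoid activation all chosen arbitrarily for the example:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A minimal artificial 'neuron': weigh each input by its connection
    strength, sum the results, and squash the total into an output
    between 0 and 1 (its 'firing' strength)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# Three hypothetical input signals from other neurons, with made-up connection strengths:
output = artificial_neuron(inputs=[0.9, 0.1, 0.4], weights=[1.5, -2.0, 0.3], bias=-0.5)
print(round(output, 3))  # an output near 1 would mean the neuron "fires" strongly
```

Each neuron on its own does nothing more interesting than this; the capabilities described above only emerge when huge numbers of them are wired together.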
Although artificial neural networks have a long way to go in terms of size, speed, and more accurately simulating the real thing, their achievements so far already suggest that scientists are heading in the right direction in simulating collections of neurons such as the brain. In fact, there are projects under way to use such neural networks to simulate real brains, including the human one. One thing that neural networks teach us, no matter what type of network or how well it simulates real networks, is that processing in neural networks is an emergent phenomenon. That is, developing biologically accurate neural networks seems to be a matter of correctly designing neurons, and then organizing them effectively. Take a relatively simple “neuron”, multiply it millions or billions of times, tweak the organization, and magically, it behaves like a living brain.
Another common feature of neural networks of all varieties, artificial and real, is how utterly opaque they are to any sort of insight into their functioning. As with many other types of emergent behaviour, understanding the way each component works is practically useless for predicting their behaviour when multiplied by several orders of magnitude. Accordingly, trying to figure out why a neural network makes a particular decision is like trying to understand why a school of fish moves in a certain direction by examining the behaviour of each individual fish. Ask a programmer to explain why their neural network misclassified a training input, and they will respond with a statistical analysis and abstract mathematical model that describes the overall system, not a coherent reason like “it thought this R was an A, so assumed the word was gain instead of grin…” The only way to get such an explanation is to look at the result, and then guess.
And yet, this is exactly the task we expect of our consciousness when we introspect. Somehow, we expect it to sort through the seemingly random firings of billions of neurons, none of which individually represents any sort of abstract concept or step of a decision process, in order to tell us when and why we made a certain decision. Perhaps, in understanding how daunting a task this is, we can see how our insight into our own decision-making process would be limited, and why it is so much easier to infer what happened after the fact.
The Introspection Illusion
The “introspection illusion” is the mistaken belief that people have insight into their own mental processes. We often ask each other questions such as “what would you do?” or “why did you do that?”, “what do you like?” or “why do you like that?”, expecting valid answers. And we usually get answers that the respondents believe to be valid. However, these questions require introspection – insight into internal mental processes that we do not have access to.
Consider a simple experiment suggested by Sam Harris: Think of a movie. Any movie – it doesn’t matter which one, just take a moment to decide which film you want to go with. You have thousands of movies to choose from. Like Libet’s experiment, this is a “free-will” decision – there are no time limits, no external influences, no pressure. Ready?
Now reflect: How did you come up with this particular movie choice? Of all the movies that you know, why did you choose that one? You might have considered a number of possible options, and settled on one of them, perhaps for some reason that you can describe. But what about all the other movies that you know? Did you consider them? You know that The Wizard of Oz is a movie, but did you consider it? If not, then why not? Why did you consider only those few possibilities? Clearly you know of many more movies, but most of them didn’t come up as options. So you might think that you have insight as to why you chose the one particular movie out of a small set of options, but you know that you don’t have insight into how you came up with those limited options out of the entire pool of movies that you could have considered. It just happened.
Another example: In 2008, in a study by Paul Eastwick and Eli Finkel, participants were asked to predict their romantic preferences – what they found attractive in potential mates. The subjects’ speed-dating and other dating behaviour was then followed over the next month to see how their actual choices compared with their predictions. The results showed that both men and women have no useful insight into their own romantic preferences that could be used to predict their actual dating choices. A number of other studies examining a variety of methods for matching dating partners using questionnaires, physical traits, personality characteristics, dating techniques, and other theories of compatibility all failed to find a correlation between people’s expressed preferences and actual dating behaviour. Many online dating sites promote their compatibility calculation tools that are based on self-reported preferences, but research so far suggests that none of them does much better than random.
Many more studies in the field of behavioural economics demonstrate that external factors can significantly influence decision making without the subject’s conscious awareness: Information not relevant to the decision, the way that options are presented, subconscious cues, even the temperature of a drink held just prior to making a decision can influence it. The evidence that we have no insight into decision-making processes in our brain is as strong as the feeling that we do when we explain ourselves.
Without effective insight, it seems reasonable to deduce that not only do we not know when a decision was made, but also that we would have no way to know for certain why it was made. Accordingly, if we only become aware of a decision after it is executed, then we can only infer an explanation for it after execution as well. But without insight, how valid could any such explanation determined after the fact actually be?
Choice Blindness
In 2005, a team of researchers from Lund University in Sweden set out to study the relationship between our decisions and the explanations we have for them. Subjects were presented with two photographs of people, and asked to choose their favourite. The experimenter would then place the photographs face down on the table, and slide the chosen one to the subject for a second, closer look. The subject was then asked to pick up the photograph, take another look at it, and explain their choice. This procedure was repeated several times with several different pairs of photos. But on some of the trials, the experimenter used a sleight of hand to swap the photos before handing the “chosen” one (actually the one not chosen) to the subject for review. In the majority of cases, the subjects did not notice the switch – which is not unusual – but what happened next was more interesting.
The participants were, as before, asked to explain why they chose a photograph that in fact they did not choose. And explain they did: With no apparent hesitation, lack of confidence, or other notable reaction, subjects provided an explanation for their “choice” that was equally pertinent, convincing, and realistic, even when it contradicted their actual decision. For example, a male subject might be given a photograph of a blonde woman, and explain that he selected it because he prefers blondes, when in actual fact, he chose the other photograph – of a brunette. It seems that in taking a closer look at the photograph, subjects infer that they must have chosen that one, and then come up with an explanation for it.
Other research into “choice blindness” confirms these results: Like the Libet experiment, these studies support the conclusion that subjects lack insight into their own decision-making process, that they learn of their decisions only after they are executed, and that they retroactively rationalize that they must have made a decision after the fact. Additionally however, these studies demonstrate that reasoning about the decision-making process also comes about after the fact.
In some ways, these results may be even more disturbing than the Libet experiment. Not only do we not know when we make decisions, or have any insight into the decision-making process, but even our reasoning about our own decision making is confabulated. It seems as though we can make any arbitrary decision (or be made to believe that we did), internalize it as our own, and come up with an explanation for it no matter what it was. However, again, as before, it is not clear how these results extend outside of the laboratory; they simply suggest that there are significant limits to the validity of our reasoning.
Rationalization is a similar phenomenon, in which people confabulate reasons for their actions after the fact. The key characteristic of rationalization is that the reasoning is “defensive” – that is, logical-sounding reasons are provided for irrational or unacceptable behaviour to reduce conflict and stress, without insight into its true explanation. Minimization, intellectualization, and sour grapes are also varieties of this. Here again, the validity of the reasoning is questionable.
It appears then that under a variety of contexts, our conscious mind readily provides post-rationalized explanations for decisions – even ones not actually made – without insight into our internal mental processes, without knowing the true reasons, and yet, without any doubt as to their accuracy.
Confabulation
Post-rationalizing is a process that involves observing one’s behaviour, inferring that it was self-initiated, and then generating an explanation for it. This process is seamless, subconscious, fast, convincing, and apparently indistinguishable to us from true insight. This seamlessness suggests a built-in, integrated mechanism in the human brain for filling in missing details after the fact as if they were extracted directly from internal knowledge.
Long-term memory recall is another function of the brain that seems to make use of this capability. When specific memories such as episodic and visual memories are encoded, the content of those memories is limited by attention – only parts of an overall experience are stored. Later, we may be called upon to retrieve information from memory that was not necessarily paid attention to and specifically retained, or may have been forgotten since. During recall, the stored information is extracted, and missing information is automatically filled in from other sources. Those other sources include information available prior to the formation of the memory such as general knowledge and other memories, and information added after the memory was encoded such as later experiences and new memories. The process of reconstructing memories from a combination of incomplete fragments in storage and other filler information to generate complete, vivid memories is fast and seamless to us.
Consequently, memory recall is subject to the same illusion of accuracy and validity as introspection into decision making. Since missing details are filled in after the fact from sources that are independent of the stored memories in a manner that is indistinguishable to us from actually stored memories, we have no way to tell apart original memories from ones confabulated later. Additionally, those other sources of information may change from the time the original memory was formed, resulting in false memories being recalled. This phenomenon of confabulation or the misinformation effect is a significant concern with eyewitness testimony. It can also be effectively modelled by artificial neural networks. And memory recall is not the only function of our brain that is subject to confabulation.
Matthew Wilson, a neuroscientist at MIT, conducted a landmark experiment, published in 2001, on the relationship between memory and dreams. Special electrodes capable of monitoring the activity of large numbers of neurons were implanted into the brains of rats in areas that are associated with memory formation. The rats were then monitored while exploring a maze, and again later while they slept. The data produced patterns of neuron activity that were recorded in a graphical form as the rats explored each part of the maze. Each pattern graph was so distinct that by looking at it, it was possible to predict what part of the maze the rat was in when it was recorded. But more interestingly, brief flashes of identical patterns were also recorded while the rats dreamt during REM and slow-wave sleep.
These results have been reproduced and support the idea that dreams play a role in memory consolidation, as the rats appear to rehearse their activities during the day in their dreams at night. However, during the night, the patterns appeared in brief flashes, disjointed, fast-forwarded, and sometimes in reverse order. Additionally, only some patterns appeared in the dreams, usually representing the salient parts of the maze – the main intersections, interesting smells, major obstacles – rather than the boring parts such as long hallways, and monotonous regions. This matches the way that memories are based on attention, are not complete reproductions of experience, and are reconstructed from a combination of original memories and other sources.
Research into human dreaming also shows that dream content is heavily influenced by recent activities, that dreams play an important role in memory consolidation, and that dreams often feature disjointed imagery that reflects salient parts of experience. Unlike rats however, humans can also report dream content verbally. Despite the disjointed imagery, subjects usually describe dreams as complete, cohesive (albeit bizarre), storylines. They only become aware of the absurdity of the dream when they wake, implying that confabulation happens at dream time, and not afterwards during recall. Once again, the brain automatically fills in missing information, and invents a convincing narrative to support it.
Choice blindness studies show that we can internalize arbitrary decisions and seamlessly post-rationalize explanations for them; long-term memory recall indicates that we can retrieve portions of experience and reconstruct vivid memories around them; and dream analysis suggests that we can experience randomized sequences of images and automatically connect them into a coherent story. All these processes take bits of disjointed information and seamlessly fill in the gaps with apparently “made-up” but convincing material that we cannot distinguish from original internal knowledge. Our brain appears to be quite good at this.
Self-Knowledge
A major implication of post-rationalization – that explanations follow behaviour rather than the other way around – is that the reasons people give for their decisions carry no more insight than an outside observer could offer. Subjects have no difficulty confabulating elaborate, convincing explanations for their decisions without any need for access to private information. So how well do post-rationalizations demonstrate individuals’ self-knowledge? Let’s take a look at a well-known example.
Polls conducted over several years consistently show 80%-90% support for organ donations amongst Americans. However, actual organ donor registrations are less than 40% in the United States. What could account for this vast discrepancy? Interestingly, there are several European countries with nearly ubiquitous consent rates – are there significant cultural attitude differences between these countries regarding organ donation? In comparing countries with low (<30%) registration rates to countries with high (>98%) registration rates, Eric J. Johnson and Daniel Goldstein propose in their 2003 research paper that a key difference between them is the registration form itself: Low registration countries use an “opt-in” strategy (“check this box if you wish to register as an organ donor”), while high registration countries use an “opt-out” strategy (“check this box if you do not wish to register as an organ donor”). The authors conducted an online survey with some opt-in and some opt-out options, and the results support this hypothesis – even with such an important issue, participants often just go with the default option.
To investigate this phenomenon in more detail, a research team led by Thomas H. Feeley of the University at Buffalo in New York interviewed customers exiting DMV offices to find out whether they had registered as organ donors, and why. The results, published in 2014, were very interesting.
Reported registration rates among participants were less than 40%, in line with the national average. Participants readily provided a variety of reasons for not registering. Remarkably, the “real” reason – that they just left the default status as-is – was not a common explanation. Instead, about 55% of unregistered participants gave reasons such as “I’m not sure”, “I haven’t decided”, or “I didn’t see the box”, suggesting that in fact the default option was a primary decision factor. Many other reasons were cited as well, such as “I don’t believe in it”, “I don’t feel safe”, and “I don’t trust the system”. The authors speculate that some of the uncertain participants may in fact have had reasons that they did not feel comfortable disclosing. However, whether uncertain or not, these findings do not reflect Gallup poll results, where most Americans express an opinion, have given the matter some thought, and generally support organ donation. It seems that the vast majority of participants are not aware of the actual cause of their decision, and instead confabulate a rational-sounding reason for their choice.
The authors also speculate that where unregistered donors are unsure of their actual reasons for not consenting, there may be a kind of “void” that is filled in by reasons derived from other sources, such as news and entertainment media, myths and misconceptions. As already discussed, it is not unusual for subjects in a variety of different experiments to be unaware of many of the factors involved in their actual decision-making processes, and it is also not unusual for them to post-rationalize completely different reasons for their decisions. Additionally, subjects appear to have no difficulty confabulating such reasons from a variety of other sources, so that the explanations given sound logical and convincing.
Taken together, the research reviewed so far suggests that there are two separate processes in the brain: An unconscious decision-making process that happens first, and a conscious post-rationalizing process that happens after the fact. The processes appear independent – the post-rationalizing process may not be aware of or connected to the unconscious process in any direct way. Furthermore, the unconscious decision-making process may not involve the same rational deductions as the confabulated one, and in fact, as suggested by research on neural networks and decision making in general, it may not involve logical reasoning at all.
Pre-rationalization
A bizarre but revelatory phenomenon happens when subjects are asked to reason about their decisions before making them, rather than after. Timothy Wilson, a professor of psychology at the University of Virginia, has led research on the quality of such reasoning in a number of different experiments.
For example, in one study, students were asked to predict how they would behave when meeting an acquaintance, and were then observed when the meeting actually took place. The subjects were divided into 2 groups: The control group simply gave their predictions, while the experimental group was asked to provide reasons for their predictions before making them. The results: The students who provided reasons for their predictions made different predictions about their behaviour, and their predictions were less accurate. In a similar study, students were asked to select a poster to take home, and then followed up a few weeks later to find out how happy they were with their selection. Again the subjects were split into 2 groups: The control group simply chose their preferred poster, and the experimental group was asked to provide reasons first. The results: Again, students who provided reasons for their choice made different choices – preferring a different style of poster – and their satisfaction levels at follow-up were significantly lower. Another example involved students predicting the future of their current romantic relationships. Again, those who provided reasons for their predictions before making them were less accurate about the outcomes when followed up on a few months later.
These results are consistent across a wide variety of scenarios, yet they are quite puzzling… Why would the simple act of providing reasons for a decision change it, and not only that, but make it less accurate? Surely subjects are already reasoning in their head about their predictions before making them, so why would reporting their reasons affect them? The answer is, first of all, that subjects do not reason about their decisions before making them – as we have already seen, they reason about them afterwards. Second, without insight, their reasoning is inaccurate, leading to different conclusions than would be reached without deliberation. For students choosing posters, reasoning leads to poor decisions that do not correctly reflect their personal preferences – selecting the rationally better posters instead of the ones they prefer – resulting in less satisfaction with the selected posters.
The curious part of this is that these subjects are better at predicting their own behaviour and preferences when they don’t think about it. If you’ve ever heard anyone say “go with your gut” or “don’t over-think it”, then now you have some examples where that would be good advice. However, more importantly, these experiments lend support to the idea that reasoning is not normally involved in decision making. Not only that, but the results suggest that logical reasoning may not adequately describe the decision-making process at all. That is, the process by which decisions are made in the brain does not follow any logical rules; rather it is more like a statistical analysis, as suggested by the way that artificial neural networks make decisions.
Using a different approach than the above experiments, Dutch researchers Ap Dijksterhuis and Loran Nordgren investigated the effect of complexity on the quality of decision making. In their experiments, published in 2006, subjects were asked to choose from a list of possible purchases, such as cars or apartments. Participants were given descriptions of several cars, in the form of pros and cons, and asked to select the “best” one. Most of the cars had an equal number of pros and cons or more cons than pros, but one of the cars had 75% pros and 25% cons, making it the correct choice. The subjects were divided into 2 groups: In the first group, they were given a few minutes to decide, and asked to give their decision some thought. In the second group, subjects were given the same amount of time to decide as the previous group, but instead of thinking about their choice, they were given some other task to perform as a distraction – playing a puzzle game for example. The assumption here is that the first group is able to use conscious deliberation to decide, while the second group relies on their unconscious decision-making faculties.
The results add a new dimension to the previous findings: When the total number of pros and cons is relatively large (12), unconscious decisions are significantly better than conscious thought – consistent with the above research. However, when the total number of pros and cons is small (4), conscious thought has the edge. Additional research into the effect of reasoning on the quality of decision making indicates that in some cases taking time to consciously think through a decision improves the quality of the result, in other cases unconscious decision making produces better results, and in yet other cases it makes no difference. These findings mirror results in artificial intelligence research: In some cases, a knowledge systems (rule-based) approach is favoured, while in other cases a neural networks (statistical/heuristics) approach is preferred. For example, logical reasoning works well when a decision involves a manageable set of rules. However, some problems are not amenable to rule-based decision making because the rule set would be impractically large, and here neural networks have performed much better. Of course, computers are much better than humans at rule-based decision making, while currently being vastly inferior in neural network performance. Consequently there are many more instances where humans should prefer non-reasoned decision making, while for the moment, computers are used more frequently in rule-based decision-making tasks.
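To see what “rule-based” means in this contrast, here is a toy knowledge-system-style scorer for a pros-and-cons choice like the car task; the car names and attribute lists are invented purely for illustration:

```python
# Each option is reduced to a list of attributes, each marked as a pro (True) or a con (False).
cars = {
    "Car A": [True, True, True, False],   # 3 pros, 1 con (75% pros)
    "Car B": [True, False, True, False],  # 2 pros, 2 cons
    "Car C": [False, True, False, False], # 1 pro, 3 cons
}

def score(attributes):
    """Rule-based evaluation: simply count pros minus cons."""
    pros = sum(attributes)
    return pros - (len(attributes) - pros)

best = max(cars, key=lambda name: score(cars[name]))
print(best)  # "Car A" wins under the explicit rule
```

With four attributes per option the rule is easy to apply deliberately; with twelve or more it becomes much harder to work through consciously, which is consistent with conscious thought losing its edge in the larger condition.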
Whether conscious deliberation or unconscious analysis is better for decision making in any given instance, the bottom line is that clearly these processes are very different. The consequence of this is that pre-rationalizing potentially produces different results from normal decision making, and post-rationalizing can only at best approximate the true underlying decision-making process.
Machine Learning
Rationalizing, whether before or after decision making, is the process of constructing logical sounding reasons for decisions. As we are under the illusion of being able to introspect, the implication here is that unconscious decisions are made in a manner that may be described logically using a set of rules that can be articulated with natural language. However, as we have seen, the underlying, unconscious decision-making process may not be a rational one at all, not involving any rules that we can articulate, and therefore post-rationalizing cannot accurately describe it. How then are decisions in fact made?
In natural language, we often refer to “entities”, such as objects or living beings. The definitions of terms referring to entities are expected to be rule-based; for example, “a triangle is a three-sided polygon”. Similarly, there ought to be a set of rules that describe a cat for example, as distinct from other entities. The view that word definitions are based on a set of rules is part of essentialism – the idea that entities have attributes, some necessary and some optional, that describe them and differentiate them from other entities.
While triangles are straightforward to define and identify, the rules for identifying more complex entities such as cats are harder to come by. Take a minute to think of the rules that identify cats – especially for differentiating cats from other animals. Cats have whiskers for example, but so do many other animals; cats don’t have wings, but not all animals that don’t have wings are cats. Then again, a cat with wings is still a cat, as is a cat without whiskers, so these rules don’t seem to address the true “essence” of a cat anyhow. While taxonomists may know the technical description of cats, most people do not, yet they are very competent at differentiating cats from other animal species. In fact, people are very good at identifying many complex entities, such as objects, materials, substances, plants, animals, behaviours, and individual humans – but usually without referring to an explicit set of attributes. How can people do this so well without rules?
The problem of identifying complex entities, such as cats, handwritten letters, or obstacles on the road, has been a significant challenge for logic-based systems such as computers. Typically, two general approaches are considered for such complex decision making. The first approach, called a knowledge-based (or expert) system, uses an extensive set of intricate rules to make decisions such as differentiating entities. The second approach, neural networks, uses a statistical or heuristics-based learning mechanism that emulates the way the brain works. Despite being vastly inferior to their biological counterparts, artificial neural networks greatly outperform knowledge systems at discriminating such abstract entities.
Neural networks are composed of large numbers of emulated neurons that are interconnected. Artificial neurons are simple entities that behave in an autonomous manner. They accept input signals, perform some calculations to weigh those inputs, and then decide whether or not to “fire” output signals, passing their outputs as inputs to other neurons that they are connected to. Additionally, neurons have a “memory” or “learning” function: Past input patterns affect future decisions about whether or not to fire, represented by the strengthening or weakening of their connections to one another. The strengths of connections denote how likely it is that, when a set of neurons fires, the neurons they are connected to will fire as well. When a neural network is fed some input signal pattern, a cascade of events takes place where some neurons fire, feeding signals to other neurons, then some of those neurons fire, and so on, eventually culminating in an output signal pattern representing the final decision. As more input signal patterns are fed into the network, the strengths of connections between neurons change, and the pattern of firings changes, based on the “learning” function of the neurons.
It is important to recognize that individual neurons are not specialized in any way, do not have a deliberate role in the decision-making process, and do not attribute any special “meaning” to the signals that they process. Like the cogs and gears of a very large clock, they simply do their small part to contribute to the overall result. The magic of the network of neurons is that in large numbers, a simple algorithm multiplied millions or billions of times, results in a powerful decision-making system.
Consider a neural network that is tasked with identifying photos of cats out of a photo album containing many different animals and other entities. The first step in the process is “training”, which consists of providing the network with examples. The neural network is fed an input signal pattern for each sample photo, representing the digital image (pixel) data of the photo, as well as a feedback signal that indicates whether or not the photo is of a cat. As many photos are fed into the system, the strength of connections between neurons changes accordingly, such that cat photos are more and more likely to fire the output pattern indicating a match, while non-cat photos are increasingly likely to fire the output pattern indicating a mismatch. Once the success rate of the network reaches an acceptable threshold, the second step can be initiated: New photos the network has not been exposed to before are fed to it, this time without a feedback signal, to see what it decides on its own.
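As a deliberately simplified sketch of these two phases – training with a feedback signal, then classifying unseen inputs on its own – here is a single-layer perceptron in Python. Real image classifiers use far larger, multi-layer networks, and the four-“pixel” patterns below are invented stand-ins for photos:

```python
import random

def train(samples, epochs=50, learning_rate=0.1):
    """Training phase: after each example, nudge the connection strengths
    (weights) so that 'cat' inputs drift toward firing the match output
    and non-cat inputs toward the mismatch output."""
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for pixels, is_cat in samples:
            fired = sum(w * x for w, x in zip(weights, pixels)) + bias > 0
            error = (1 if is_cat else 0) - (1 if fired else 0)  # feedback signal
            weights = [w + learning_rate * error * x for w, x in zip(weights, pixels)]
            bias += learning_rate * error
    return weights, bias

def classify(pixels, weights, bias):
    """Testing phase: no feedback signal -- the network decides on its own."""
    return sum(w * x for w, x in zip(weights, pixels)) + bias > 0

# Toy "photos": four-pixel patterns, labelled cat (True) or not (False).
training_photos = [([1, 1, 0, 0], True), ([1, 0, 1, 0], True),
                   ([0, 0, 1, 1], False), ([0, 1, 0, 1], False)]
weights, bias = train(training_photos)
print(classify([1, 1, 1, 0], weights, bias))  # the network's verdict on a photo it has never seen
```

Even in this toy, everything the network “knows” about cats is encoded in a handful of connection strengths – nothing in the code checks for whiskers or wings – which is exactly the point developed next.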
What is interesting about the way that neural networks make decisions is that individual neurons do not implement any sort of explicit rules that we can follow. There is not necessarily a neuron or set of neurons that check for “whiskers” or “wings” or any other attribute that defines a cat or non-cat. Instead, the system is defined by a large matrix of statistical probabilities (strengths of connections between neurons) that determine the output for any given input. That is to say, neural networks do not make decisions such as identifying entities based on an explicit set of the types of deductive or inductive rules that we can readily understand or articulate.
Artificial neural networks provide us with a model of how the unconscious mind operates. Assuming that our brain in fact works in a manner that is analogous to machine learning by neural networks, it follows that the idea of essentialism – that entities are defined by a set of attributes – is incorrect in practice, and similarly, unconscious decision making is also not rule-based as our post-rationalizations imply. In other words, not only is post-rationalization invalid because we lack insight into our decision-making process, but also our post-mortem attempts to guess at it are futile because our decision-making process cannot accurately be described in a rational rule-based way.
Confirmation Bias
Post-rationalized explanations may differ from the actual underlying decision-making process, and therefore lead to poor predictions or necessitate changes in behaviour to compensate. So why is it that we do not normally notice the inconsistency between our decisions and explanations? How do we remain convinced that our explanations reflect genuine private knowledge of our brain’s inner workings despite constant failure? In other words, how is the introspection illusion so effective?
The answer lies in the way that we naturally defend our existing beliefs. Our unconscious mind deploys a host of strategies – called confirmation bias – to protect certain beliefs from scrutiny, especially those about the self. The first line of defence is simply not looking – by preferring reinforcing evidence, and avoiding contradictory information, our views can easily be maintained and even strengthened. When unfavourable information cannot be avoided, it can just be ignored, and only desirable data noticed. And when information cannot be ignored, it can still be forgotten, while confirming evidence is preferentially remembered.
What happens when we are forced to confront evidence that potentially conflicts with our views? When we cannot avoid, ignore, or forget what we don’t want to face, then a second line of defence kicks in: We misinterpret. In examining evidence for and against a complex issue, we naturally pay more attention and give greater weight to details supporting our existing point of view, and downplay opposing views – ironically labelling them as invalid or biased. Finally, even when new information is accepted as valid and results in attitude change, a third line of defence, selective memory, can distort recollections, effectively restoring the original belief.
However beliefs are initially formed, some are held so dogmatically that changing them, even with the help of strong evidence, can be a daunting task. And none are held more dearly than those about the self. For every strategy used in the defence of beliefs, there is a special version for defending self-image. The strategy of avoiding, ignoring, and forgetting unfavourable information has the special term “self-verification” when used to protect self-image. The misinterpretation strategy goes under the name “self-justification”, and selective memory is called “mnemic neglect” when applied to the self.
A series of experiments on self-verification by William Swann and Stephen Read from the University of Texas illustrates how confirmation bias works to defend self-image in real-life situations. In the first experiment, female students were assessed using a questionnaire about their self-image and views on some controversial topics. Later, they were given written feedback on their views, ostensibly from a male student who had read over their questionnaire, and were told either that the male student mildly liked them, or mildly disliked them overall. In actual fact, however, the participants were randomly assigned to the mildly “like” or mildly “dislike” groups, and the “feedback” was a pre-determined even mix of generic neutral, positive, and negative statements, the same for everyone, not reflective of the male student’s supposed overall opinion. The researchers then simply measured the amount of time that each participant spent reading over the feedback. The results were that the students who had rated themselves “likeable” overall in their questionnaire spent significantly more time reading feedback they believed came from a male student who “liked” them than from one who “disliked” them, and students who rated themselves “dislikeable” overall spent more time reading feedback from a male student who “disliked” them than from one who “liked” them. That is, subjects spent more time attending to feedback they believed confirmed their pre-existing view of themselves.
In the second experiment, male students were observed interacting with female students in their first encounter. Prior to the conversation, the male students were told the impression that the female student they were about to interact with had of their (anonymous) questionnaire. Again, in reality, the male students were assigned randomly to favourable, unfavourable, and neutral (control) groups that had nothing to do with the female students’ actual opinions. The conversations were taped and reviewed. Consistent with the confirmation bias, male students who rated themselves “likeable” elicited more compliments from female students they believed to have an unfavourable impression of them than did students who believed their partner to have a favourable impression. Similarly, students who rated themselves “dislikeable” elicited fewer compliments from female students they believed to have a favourable impression of them. In other words, students worked hard to verify their pre-existing self-conceptions. The authors noted one method by which this was accomplished: Male students trying to elicit compliments from females they believed to have an unfavourable opinion paid more compliments in hope of reciprocation. The researchers also asked the male students to rate their partner’s impression of them after the conversation. The ratings reflected both their pre-existing self views, and their pre-existing belief about their partner’s view of them. That is, students who believed themselves to be “likeable” rated their partner’s impression higher than students who rated themselves “dislikeable”, and students in the favourable partner group rated their partner’s impression higher than those in the unfavourable partner group, regardless of their partner’s actual impression, which was in fact the opposite of this! Again, pre-conceived beliefs persisted.
The third and final experiment of this series was almost identical to the first experiment. Again, participants were randomly assigned to “like” and “dislike” groups reflecting the opinion that a male student supposedly had of their questionnaire. However, this time the subjects were given oral feedback in the form of a recording of the male student. As before, the feedback was the same for everyone, and consisted of an even mix of generic neutral, positive, and negative statements. After listening to the “feedback”, the students were given a 5-minute distraction task, and then asked to recall as many of the feedback statements as they could. Once again, in line with the confirmation bias, subjects recalled more statements that were consistent with their self-views – eg, students who rated themselves likeable recalled more positive statements. Additionally, subjects recalled more statements (positive and negative) when they believed that the feedback matched their self-views – eg, students who rated themselves dislikeable recalled more statements when made by someone they believed to have an unfavourable view of them.
A commonly cited example of confirmation bias is in drug addiction, such as cigarette smoking. Smokers express a variety of confirmation bias post-rationalizations to account for their highly irrational behaviour. For example, smokers underestimate their risk of cancer relative to both fellow smokers and non-smokers. People who try to stop smoking but fail start to think that smoking is not as harmful as they previously thought. Smokers may have a variety of post-rationalized reasons for their behaviour that defy evidence, such as social expectations – in spite of the tremendous social disapproval of smoking; mental health – though nicotine is actually shown to increase stress rather than decrease it; and lifestyle balance – even though smoking diminishes the efficacy of exercise. In actual fact, smoking is an addiction, a behaviour resulting from nicotine’s biological effect on the brain. All other explanations, aside from being largely factually incorrect, fail to acknowledge the true cause. That is, smokers, rather than admitting that they have no insight into or control over the actual origin of their decision to smoke, explain it as a rational, intentional, conscious decision, disregarding all evidence to the contrary.
The confirmation bias has been extensively tested, and this phenomenon is found to be remarkably robust across a wide variety of situations. Our unconscious mind appears to be quick to come to the aid of our conscious mind, defending its assertions about its role in decision making, protecting it from discovering its own delusions about itself.
Attitude Change
Stubborn as we are, it is not impossible to change our views, however – even views about ourselves. When confirmation bias fails to protect us from contradictory evidence, our brain deploys one final defence mechanism to protect the introspection illusion: We change our mind.
As already alluded to, self-image is not formed introspectively. Rather, beliefs about the self are initially formed and gradually modified through social interactions and self-perception: We collect feedback from others, and use our own observations of ourselves to infer our self-image. How easily such beliefs are then modified by further evidence depends in large part on their maturity – as supporting evidence is collected, they become more rigid.
Psychologists Lee Ross and Mark Lepper of Stanford University conducted several experiments that demonstrate how attitude formation works. In one example, female students were asked to perform a novel task – something they had never done before – and then given fake (random) feedback about how well they did. Their beliefs regarding their competence were then surveyed and found to be naturally correlated with the feedback they received. This is an example of how an aspect of self-image initially forms. The subjects were then debriefed about the experiment – it was revealed to them that the feedback was entirely fictitious. The students were surveyed again, and although they retracted their beliefs to some degree, in accordance with the confirmation bias, some of the initial belief was retained. This demonstrates how attitudes can become more difficult to change over time.
An interesting aspect of attitude change is how seamlessly it can happen – without any conscious awareness. In a 2001 study by Linda Levine and colleagues, subjects were surveyed as to their reactions to the verdict of the O.J. Simpson trial, 1 week, 2 months, and 1 year after the verdict. Additionally, they were surveyed as to their recollection of their initial reactions. Although the reported emotions of the participants regarding the verdict were found to change significantly over time, they did not appear to notice this change – when recollecting their initial reactions, they believed that their initial reactions were more similar to their current feelings than they actually were. Attitude change had taken place, but went largely unnoticed.
A large body of research further demonstrates the effect of observation of one’s own behaviour on attitude formation and modification, perception of emotion, and social influence. For example, many independent studies on the effects of volunteering on self-image show that compared to control groups (non-volunteers), involvement in community services gradually changes the participants’ self views of their own empathy, concern for others, social responsibility, and self-esteem.
Attitude change can be remarkably seamless – subjects readily adopt a new attitude, and behave as though it had been consistent all along. It is this seamlessness that makes attitude change such an effective last line of defence for the introspection illusion. Without introspection, predictions about the self are inevitably flawed, and confirmation bias can only hide those flaws for so long. Seamless attitude change means that even when discrepancy is blatant, it is still not necessarily recognized as a failure of introspection.
It seems that not only is it easy for us to confabulate convincing-sounding post-rationalizations for our unconscious decisions without insight, but it is also easy for us to ignore or compensate for the sometimes obvious discrepancies between explanation and decision. What is particularly interesting about post-rationalizing and defence mechanisms is that the mind can make all manner of nonconscious decisions, with no conscious input at all, and yet the conscious mind has no trouble confabulating explanations for them as if they were made consciously, defending those explanations against counter-evidence, and, failing all that, simply changing its story to match actual behaviour. Imagine for a moment a self-aware puppet, controlled by a puppeteer of whom it is not aware. The puppet believes that it is the one making decisions and in control of its own behaviour. It uses its purported self-knowledge to explain and predict its behaviour. When predictions fail – when its behaviour is inconsistent with expectations, for example – it ignores this, misinterprets it, forgets that it happened, or seamlessly changes its narrative, and corresponding expectations, thereby maintaining its illusion of insight and control, when in reality it has neither.
The Split Brain
The idea that our brain contains two independent parts – an unconscious mind that makes decisions, and a conscious mechanism that post-rationalizes those decisions – was explored by Michael Gazzaniga, a cognitive neuroscientist at the University of California, Santa Barbara. In case studies conducted throughout the second half of the 20th century, Gazzaniga and others interviewed patients who had undergone a corpus callosotomy – a radical surgical procedure that severs the connection between the brain’s two hemispheres in order to treat epileptic seizures. Although most of the (very rare) patients who underwent this procedure before it was rendered obsolete by less invasive alternatives were only considered for the surgery because they already had significant cognitive impairments, a few were cognitively healthy enough both before and after the operation to be useful research subjects.
Remarkably, split-brain patients generally report no significant post-operative changes in their cognitive function, sense of self, or conscious experience. However, testing under lab conditions demonstrated that subjects do in fact experience significant changes in awareness – effectively neglecting half of their perceptual experience.
In the typical experimental paradigm, patients are presented with an image to their right visual field, processed by the left hemisphere of the brain, and are able to identify the stimulus without difficulty. However, when presented with a stimulus to their left visual field – processed by the right hemisphere – they report not seeing anything. Patients are next instructed to indicate the image presented to them using their left hand – by pointing to it, selecting a matching image from a list, or identifying a physical object that matches it based on tactile sensations. Although they are unable to verbalize their perception and report not seeing the image, they are nonetheless able to identify it correctly with their left hand.
These experiments were instrumental in establishing the lateralization of brain function, especially with respect to language, which is primarily localized to the left side of the brain. The right hemisphere, lacking speech production function, is unable to verbalize what it sees, but is nonetheless able to understand instructions and indicate words and images using the left hand.
Gazzaniga next set up an experiment to create a choice blindness situation: Subjects were presented with different images to each visual field – eg, a chicken to the right side, and a snowy field to the left. They were then instructed to choose an associated word from a list, using their left hand. The left hand, controlled by the right hemisphere – which saw the snowy field – chose the word “shovel”. The left hemisphere, which saw the chicken and has verbal function, was then asked why the patient chose a shovel. As with other choice blindness experiments, patients confabulate a reason based on what their left hemisphere sees: eg, “The shovel is for cleaning out the chicken shed.” In a similar experiment, the patient’s right hemisphere was shown the word “smile”, while the left hemisphere was shown the word “face”, and the patient was asked to draw what they saw. After drawing a smiling face, the patient was asked why, and responded: “What do you want, a sad face? Who wants a sad face around?”
Such results led Gazzaniga to postulate that post-rationalization is associated with language function, and is localized to the left side of the brain. That is, while both hemispheres are capable of unconscious decision making, it is the left brain that is the “interpreter” or “rationalizer”, which – physically disconnected from the right brain and hence incapable of any genuine insight – nonetheless has no trouble confabulating explanations (often incorrect) for its counterpart’s behaviour. Similar results were obtained for confabulation of memory and emotional state – the left hemisphere consistently interprets what the right brain does independently, as though the two hemispheres remained a united whole.
Most results in neuroscience are “fuzzy”, and this finding is no exception: Although language function is lateralized to the left side of the brain in most people, it can be much more evenly distributed between the hemispheres in others. Nonetheless, these case studies support the other evidence discussed: A physical connection is not required between the part of the brain that makes a decision and the “interpreter” part that post-rationalizes an explanation for it. The two parts can and do operate independently.
Adaptive Unconscious
At this point, it is worth restating that although our focus thus far has been on two particular, and seemingly independent, functions of the brain, this is not meant to imply that all decision making is unconscious, nor that the only function of consciousness is to interpret the decisions of the unconscious mind. As already noted, there is a second system for decision making – conscious deliberation (referred to earlier as pre-rationalization) – which is another important function of the conscious mind. Thus, the conscious mind is not, strictly speaking, just a puppet of the unconscious – it is also capable of influencing behaviour. But how important is the role of consciousness in decision making compared to the unconscious?
The idea that decision-making processes are divided into two different systems is a common theme in cognitive psychology, and is given the umbrella term of dual process theory. Many subtly different dual systems theories have been proposed, but what they all have in common is a division between unreasoned (unconscious) decision making and reasoned (conscious) decision making. The unconscious process (system 1) is typically described as fast, automatic, and heuristic, while the conscious process (system 2) is typically described as slow, effortful, and deliberate. To our knowledge, all animals make decisions using nonconscious processes, while humans are unique in having a conscious reasoning system. As noted earlier, the capacity of this reasoning system is limited – a concept known as bounded rationality – and is the reason for the superior performance of unconscious decision making in many complex problems.
Due to the slow, effortful nature and limited capacity of conscious reasoning, most day-to-day decisions are inevitably handled by system 1. Much research has gone into investigating under what circumstances system 2 is used, and what has been found is that even when circumstances permit it – there is enough time, patience, and capacity available – there are many situations in which system 2 is still not used. For one thing, logic and reason appear to be learned – during adolescence – and hence children make almost all of their decisions using system 1 alone.
Earlier, we saw how researchers elicit pre-rationalization from adult subjects simply by asking them to “think”. Those same experiments showed that without this prompt, people tend to make unreasoned decisions by default. To further reduce the likelihood of voluntary reasoning, many experiments add a distraction task – the assertion being that analytical thinking is not possible when distracted. In fact, conscious thought is easily thwarted by distraction, brainwashing, emotion, cognitive biases, priming, pain, confusion, overloading, and many other factors. Voluntary use of reasoning also appears to be influenced by personality (thinking style), prompting (using words, imagery, and even different fonts!), learning / training, and culture (such as religious affiliation). All these factors suggest that conscious reasoning can be a relatively rare phenomenon.
Potentially even more significant, several dual process theories subdivide reasoning (pre-rationalization) into two types: A deliberate conscious process, and an intuitive automatic process that can easily be mistaken for conscious reasoning. For example, hot cognition theory proposes a conscious reasoning process that is influenced by emotion, leading to less rational decision making; unconscious thought theory and mind-wandering suggest a slow, deliberate, unconscious process of reasoning that can lead to better decisions in some circumstances; and fuzzy trace theory argues for an intuition-based reasoning process that develops in adulthood and is used by experts in decision making. These types of reasoning are seamlessly masked (by post-rationalization) to appear as though conscious reasoning has taken place, when in fact it is the adaptive unconscious mind driving decisions. This phenomenon is similar to the examples described in earlier sections, in which decisions believed to be “free-will” conscious decisions turned out to be made by the unconscious mind as well.
The adaptive unconscious, together with the “interpreter” of post-rationalization, form the mechanism that underlies the introspection illusion. This illusion pervades much of our decision making, including much of what we falsely believe to be consciously deliberated. But although our focus has been on decision making, the introspection illusion is hardly limited to this function: It also clouds many of the cognitive functions that factor into truly conscious decision making.
Emotion
As already alluded to, memory, dreaming, preferences, self-image, and other mental processes are also subject to the introspection illusion. Let’s consider another notable example critical in decision making: Emotion.
A particularly well-known experiment was conducted by Donald Dutton and Arthur Aron of the University of British Columbia in 1974, on the Capilano suspension bridge in Vancouver, BC. Male passersby on the suspension bridge, and pedestrians on a nearby solid bridge, were interviewed by an attractive woman after crossing. They were given a written test (the Thematic Apperception Test) that measures arousal, and after they completed the test, the woman told the subjects that she would be available to answer any questions about the study. She then gave the men her name and phone number. The results revealed that men who had just crossed the suspension bridge showed more sexual arousal in their test responses than those who crossed the solid bridge. Additionally, over the next few days, many more of the suspension bridge subjects called the number provided than solid bridge controls. Other experiments were conducted to eliminate alternative explanations. So why the difference in results?
The researchers noted that men who just crossed the suspension bridge were physically aroused – they had a faster heartbeat, shortness of breath, more fatigue, etc – typical symptoms of fear / exhilaration. Though the men crossing the solid bridge should have found the same woman equally attractive, they did not have the same physiological symptoms of arousal. It seems that participants on the suspension bridge misattributed their physical arousal to sexual attraction instead of fear.
These results are in line with most modern theories of emotion: Emotion is context or perspective-based. Instead of men seeing an attractive woman, becoming sexually aroused, and then experiencing physical symptoms of sexual arousal, it seems that the order is reversed: Physiological cues come first. Moreover, the difference between fear and love is context-dependent; the physiological cues are the same, the difference is in the context – what those physiological cues are attributed to – the bridge or the woman. That is to say, emotion is “inferred” or “reconstructed” from internal senses (interoception), and other context information, rather than introspected.
There is a fair bit of variety in modern theories of emotion, which are variously categorized as “cognitive-mediational”, “constructionist”, “conceptual”, etc. However, virtually all emphasize the importance of interpreted context information over the common-sense notion that we “feel” emotions introspectively. Some current categories of emotion models, called “appraisal”, “self-perception”, or “embodiment” theories, go further, suggesting that even interoception is not necessary for emotional content – that is, emotion can be constructed almost entirely from external context such as perception of behaviour, situational factors, information provided by others, and the actions of others, without any “feeling” involved.
In the same year that the Capilano suspension bridge experiment was conducted, James Laird of Clark University in Massachusetts ran another experiment well known in emotion research, in which subjects were prompted to relax or contract various facial muscles so as to surreptitiously induce a smile or a frown, without the subject noticing. The participants’ emotional state was then tested by measuring their reaction to positive, neutral, or negative imagery. Though the imagery was the same across conditions, and in some cases contained no inherent emotional content, emotional state varied in accordance with each subject’s induced facial expression: More positive after smiling, more negative after frowning.
Other similar research conducted since supports the notion that in addition to physiological cues, emotional expression, including facial expression, body posture, and behaviour, all contribute to emotional context, and can actually cause, rather than simply be caused by, emotion. Roughly speaking, something (internally or in the environment) causes a reaction that may have physiological, physical, and/or behavioural components, and then a sense-making process is prompted to explain this reaction. Based on those cues, combined with context information (knowledge of the current situation, memory recall, effects of prior learning, etc), an emotion is inferred to account for the reaction. As the author of the facial expression paper put it: “I am frowning (or smiling), and I don’t have any nonemotional reasons for frowning, so I must be angry.” Since emotion is inferred from the reaction, and not from the initial cause, a misattribution may occur, in which the cause of an emotion is inferred incorrectly, and the resulting emotion is inappropriate for the real cause. Though emotions are mental constructs, cognitively deduced from context information, they are nonetheless “real” – they feel genuine, regardless of any misattribution that may occur. In other words, emotion is also subject to the introspection illusion.
Thus, when we describe (categorize or label) ourselves as having a certain emotion, we are basing that attribution on a potentially large variety of context information. But one thing we do not base that attribution on is our actual cognitive process (to which we have no access), and hence not the actual cause of the emotional reaction. Rather, the description that we provide of the cause of our emotional state is post-rationalized – interpreted or inferred from the contextual situation, just as with decision making. As powerful as the introspection illusion is in decision making, its role in emotion is even more impressive: Not only are we fooled into believing that emotions are introspective, but the illusion takes nothing away from the genuine phenomenological “feeling” of emotions. In fact, according to some emotion models, the act of inference (post-rationalizing) itself is the emotion!
Neural Binding
The confusion involved in determining cause and effect in emotion processes is similar to the confusion of cause and effect in the Libet experiment, and in most decision making. In all cases, we assume that privately accessible mental processes precede externally observable effects, while evidence suggests that the reverse is true. Accordingly, in order for the introspection illusion to be effective, our consciousness must be fooled into believing that effect occurs before cause – that conscious awareness precedes action, that emotion precedes physiological symptoms, and that reasoning precedes decision making. If inferences, interpretations, confabulations, reconstructions, and rationalizations all actually take place after the fact, then how is it that we don’t notice the delay? How are we fooled into believing that it all happens in real-time, fractions of a second before it actually does? The answer to this question is a cognitive mechanism called binding.
Our perception of time is easily warped – we are all familiar with adages such as “time flies when you’re having fun” and “a watched pot never boils”. Our brain does not have a precise clock mechanism, leaving our “sense of time” to estimate the passage of time from unreliable sources. More telling, however, is our false sense of unity of timing. The conscious mind integrates events that happen at different times and syncs them up to generate a unified experience. For example, if you touch your nose and your toes at the same time, the signal from your toes takes much longer to reach your brain, and yet you perceive the two touches as simultaneous. Somehow, the brain hangs on to perceptual information long enough for all signals to arrive, and then integrates them such that we experience a united whole. This syncing process is called “temporal binding”, and is a component of multisensory integration, which is the essence of the unity of our perceptual experience.
It also takes time to process and interpret signals – ie, to identify them as touch, localize them to the nose and toes, determine their cause, and integrate them with everything else the brain is processing at the same time. Amazingly, we do not notice the delay that results from these processes – to us it seems as though we perceive and interpret our environment in real-time, even though our conscious experience of it is delayed to allow syncing and processing to occur. While the mechanism that implements binding is poorly understood, it does demonstrate that the exact timing of events eludes us, and can be manipulated by several hundred milliseconds without us noticing.
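To get a feel for the size of the gap that temporal binding has to paper over, here is a rough back-of-the-envelope sketch in Python. The conduction velocity and path lengths are illustrative assumptions (touch signals travel on the order of tens of metres per second), not measured values:

    # Back-of-the-envelope estimate of the timing gap that temporal binding
    # has to hide when the nose and toes are touched "simultaneously".
    # The path lengths and conduction velocity are illustrative assumptions,
    # not measured values.

    CONDUCTION_VELOCITY_M_PER_S = 50.0   # assumed typical touch-fibre speed
    NOSE_PATH_M = 0.15                   # assumed nose-to-brain path length
    TOE_PATH_M = 1.70                    # assumed toe-to-brain path length

    def arrival_delay_ms(path_m, velocity=CONDUCTION_VELOCITY_M_PER_S):
        """Return the signal travel time in milliseconds for a given path length."""
        return path_m / velocity * 1000.0

    nose_ms = arrival_delay_ms(NOSE_PATH_M)   # ~3 ms
    toe_ms = arrival_delay_ms(TOE_PATH_M)     # ~34 ms

    print(f"Nose signal arrives after ~{nose_ms:.0f} ms")
    print(f"Toe signal arrives after  ~{toe_ms:.0f} ms")
    print(f"Gap the brain must bind:   ~{toe_ms - nose_ms:.0f} ms")

Even with these rough numbers, the toe signal arrives some 30 milliseconds after the nose signal, yet the two touches are experienced as one simultaneous event.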
So does our unconscious mind in fact manipulate the timing of events in order to fool our consciousness? In 2002, a team of scientists from University College London, led by Professor Patrick Haggard, discovered evidence that indeed it does. Bearing some similarity to the Libet experiment, subjects were asked to attend to and record the timings of certain events with the help of a timing device. In each of several trials, subjects were alternately presented with 3 different pairs of cause-and-effect events to time. The accuracy of subjects’ recorded timings of these paired events was compared to independently measured unpaired timings of the same events, to fairly evaluate consistency. In the first pair, subjects were to time the onset of a clicking sound, followed by an audible tone effect. In this baseline scenario, subjects demonstrated great consistency, timing the events to within less than 10ms of perceived onset. In the second pair of events, subjects were instructed to perform a voluntary action – pressing a key – which was also followed by a tone. Finally, in the third pairing, participants recorded the timing of an involuntary action induced by transcranial magnetic stimulation (TMS) – a device placed on the scalp that uses a temporary magnetic field to stimulate the motor cortex and initiate movement. The involuntary twitch was again followed by a tone.
To further ensure that no other factors came into play, the first event in all conditions was accompanied by a clicking sound, all initial (cause) events were followed by a tone (effect) after the same interval, and the TMS device even remained on in all conditions, so that the only difference between the 3 pairs of events was whether the action was voluntary, involuntary, or absent. On the face of it, then, there is no reason why timing accuracy in any of the 3 event pairs should differ from the independently measured unpaired timing accuracies – and in the baseline scenario, it did not. Surprisingly, however, the results of the other two pairs of events did not support these expectations.
Instead, what the researchers found was that when performing a voluntary action, subjects significantly mistime their action as having taken place later than it actually does, while also mistiming the tone onset as having occurred earlier than it does – in other words, the 2 events are judged to be closer in time to each other than they actually are. Additionally, when performing an involuntary action, subjects significantly mistime both the action and the tone in the opposite directions – judging them to have taken place further apart in time than they actually did. Because of the way voluntary action appears to warp time perception, the authors labelled this phenomenon “intentional binding”. Numerous replications and extensions of this experiment conducted since have demonstrated that the effect is robust and not explained by alternative accounts.
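As a toy illustration of how such an effect can be quantified – using invented numbers in the spirit of the reported pattern, not the study’s data – the perceived shift of each event is simply its judged time minus its actual time, and the “binding” is how much the perceived action–tone interval shrinks relative to the real one:

    # Toy illustration of quantifying intentional binding.
    # Perceived shift = judged time - actual time, for both the action and
    # the tone; a positive "binding" value means the two events are judged
    # closer together than they really were. All numbers are invented.

    from dataclasses import dataclass

    @dataclass
    class Trial:
        actual_action_ms: float
        judged_action_ms: float
        actual_tone_ms: float
        judged_tone_ms: float

        def action_shift(self):
            # Positive = action judged later than it actually occurred
            return self.judged_action_ms - self.actual_action_ms

        def tone_shift(self):
            # Negative = tone judged earlier than it actually occurred
            return self.judged_tone_ms - self.actual_tone_ms

        def binding(self):
            # Shrinkage of the perceived action-tone interval, in ms
            return self.action_shift() - self.tone_shift()

    # Voluntary key press: action judged 15 ms late, tone judged 40 ms early
    voluntary = Trial(0, 15, 250, 210)
    # Involuntary TMS twitch: action judged 25 ms early, tone judged 30 ms late
    involuntary = Trial(0, -25, 250, 280)

    print(f"Voluntary:   interval perceived {voluntary.binding():.0f} ms shorter")
    print(f"Involuntary: interval perceived {-involuntary.binding():.0f} ms longer")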
Since time perception is distorted only when a voluntary or involuntary cause-and-effect action is involved, and since the intentional binding effect is consistent in its direction, it appears that something about the nature of the action affects conscious experience. That is, the distorted sense of time can be attributed to an unconscious binding process that manipulates our conscious mind’s experience depending on whether a voluntary or involuntary action has taken place. Moreover, this effect goes unnoticed by subjects. It appears that our unconscious mind has no trouble manipulating time – even reversing the order of events – without our awareness.
Metacognition
Thus far, we have largely focused on theories and evidence in support of the introspection illusion. To fully understand the extent of its effect however, it is important to consider alternate theories and counter-evidence. As an example, another cognitive function important to decision making is metacognition, which is the ability to know and reason about our mental state. As the preceding discussion implies however, the basis of the introspection illusion is our inability to access mental processes. Counter to the introspection illusion, metacognition suggests that we do have some access to such knowledge. Is this evidence of introspective capabilities?
Numerous metacognitive “senses” or experiences have been proposed, including cognitive dissonance (a sense of conflicting mental processes), surprise (a sense of difficulty of mental processing), and tip-of-the-tongue (a sense of pending mental processes). Even if any of these experiences represent actual introspection, having just a “sense” of cognitive functioning provides only very basic information about mental state – certainly not sufficient to account for the elaborate post-rationalized descriptions that we often provide of our inner world. Nonetheless, it would be interesting to explore how people determine the answers to metacognitive questions such as: “How familiar is this to you?” “What is your confidence level?” or “How good is your knowledge about this subject?” – with or without introspection.
A prominent example of metacognitive senses is processing fluency, which is a sense of ease of mental processing, sometimes called “cognitive ease”. Researchers can manipulate how easy or difficult something is to process using a variety of methods, such as repetition, priming, demand, readability, and positive associations. Fluency has been shown to be the metacognitive sense behind judgements of familiarity, truth, confidence, and positive affect. That is, the easier something is to process – for example using high contrast text – the more likely it is to be judged familiar, true, or positive, and the greater the confidence level with which this judgement is made. That such judgements can be so easily manipulated shows how basic this metacognitive sense is. Nonetheless, the question remains: How do subjects determine fluency? Do they have access to information about mental processes to know how easily they are performed, or do they infer this information from some external source as with decision making?
In 2008, a team of researchers led by Ralph Hertwig of the University of Basel in Switzerland sought to find out. In a series of experiments, participants were asked to report their recognition of various items as quickly as they could, and then to report how quickly they determined their answers relative to each other. For example, in one trial, a subject is asked to report, as quickly as they can, whether or not they recognize the names of 2 US cities. They are then asked which of the 2 questions they think they responded to more quickly. In another trial, a subject is asked to choose the city that they think is larger. They are then asked which of the 2 cities they actually recognized.
The results showed that, first, for high-frequency items, recognition rates were commensurate with frequency of mention in the media – that is, people are more likely to report familiarity with well-known cities. Second, the timing of answers (response latency) was also commensurate with actual frequency – that is, subjects respond faster to well-known cities than to less common ones. Third, subjects were able to report with great accuracy which answer came more quickly – that is, they can discriminate between response times as little as 100ms apart. Fourth, subjects typically chose the item that they recognized faster – that is, people guess that the cities they recognize more quickly are probably the larger ones. Finally, and most importantly, subjects were more accurate in choosing the faster response the greater the difference in response times – that is, when response times differ by less than 100ms, the decision is less likely to be based on familiarity.
The research team concluded that subjects’ discrimination of response latency is both necessary and sufficient to judge fluency, which in turn is a good indicator of actual frequency in the media. Notably, response latency does not require any introspective knowledge of the mental process used to determine familiarity – response time alone is sufficient, and can be determined accurately enough to make a judgement of fluency. Put another way, when we think of an answer to a question, our brain provides us with two bits of information: The answer to the question, and how long it took to come up with that answer. This timing information – the response latency – provides us with metacognitive information about how easy it was to come up with the answer, that requires no introspection to obtain.
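A minimal sketch of such a latency-based rule, in the spirit of the mechanism described above (the threshold and example values are illustrative assumptions, not the study’s materials), might look like this:

    # Minimal sketch of a latency-based fluency heuristic: given two items,
    # choose the one recognized faster, but only when the latency difference
    # is large enough to be reliably discriminated. All values illustrative.

    DISCRIMINATION_THRESHOLD_MS = 100.0  # assumed smallest reliably noticed gap

    def fluency_choice(latency_a_ms, latency_b_ms):
        """Return 'A', 'B', or 'guess'. A latency of None means 'not recognized'."""
        # Recognition heuristic: if only one item is recognized, choose it.
        if latency_a_ms is not None and latency_b_ms is None:
            return "A"
        if latency_b_ms is not None and latency_a_ms is None:
            return "B"
        if latency_a_ms is None and latency_b_ms is None:
            return "guess"
        # Fluency heuristic: both recognized, so use the latency difference,
        # but only if it exceeds what can be discriminated.
        if abs(latency_a_ms - latency_b_ms) < DISCRIMINATION_THRESHOLD_MS:
            return "guess"
        return "A" if latency_a_ms < latency_b_ms else "B"

    print(fluency_choice(450, 800))   # "A"     - recognized much faster, judged larger
    print(fluency_choice(450, 500))   # "guess" - difference below the threshold
    print(fluency_choice(None, 500))  # "B"     - only B recognized at all

Note that nothing in this rule requires access to the underlying recognition process itself – the answer and its timing are enough.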
Fluency is thought to account for (or be the same as) several other metacognitive senses, including the availability heuristic, recognition, surprise, coherence, and retrieval heuristics, all of which rely on ease (or difficulty) of processing to inform judgements. As such, the metainformation provided by latency alone – information that requires no introspective capability – can account for much of the phenomenon of metacognition. It informs us of confidence level (faster = more confident), familiarity (faster = more familiar), truth (faster = more likely to be true), and even affect (faster = more likely to be positive).
Metacognition is often described as a skill that we develop, rather than a sense that is built-in – perhaps we simply learn to interpret our behaviour over time. This view is compatible with evidence that metacognition develops in young adulthood. A learned skill suggests that metacognition is less likely to be based on a built-in mechanism, and more likely to be based on self-perception. Thus, metacognition may be yet another example of the introspection illusion, rather than a counterexample.
Self-reflection
As with other varieties of introspection reviewed earlier, there are numerous examples of metacognition failing, suggesting again that it is not an example of introspection. For example, if confidence ratings were based on accurate introspective access, then we would not expect them to be affected by external manipulations – they would be based on the subject’s knowledge of their own mental capabilities alone. However, confidence ratings are known to be susceptible to various biases, including the illusion of explanatory depth, the Dunning-Kruger effect, the illusion of knowledge, and illusory superiority. As well, non-introspective explanations for many metacognitive senses have been proposed, such as an alternative to cognitive dissonance called self-perception theory. This theory has held up to decades of scientific scrutiny and performs equally well, but unlike cognitive dissonance – which relies on an introspective sense of mental state – self-perception theory is compatible with the view that metacognition is illusory, and suggests that it is actually inferred from behaviour external to mental process.
However, research into different types of metacognition (including cognitive dissonance), and intuition, a related cognitive function that also relies on cognitive fluency, has yielded results that complicate matters. Some intuitive judgements appear to be informed by affect rather than latency (presumably latency causes affect but then only affect is used for judgements), suggesting that interoception (internal sense) may convey metacognitive information. Research thus far points to relatively simple, externally accessible, and in the case of affect, externally manipulable communication mechanisms between unconscious processes and our conscious mind. Though affect usually only adds valence (good or bad) and arousal/intensity information, it does remind us that conscious awareness may have access to metacognitive knowledge through more than one channel, potentially offering a much richer variety of communication than previously alluded to.
Furthermore, while there may be a strong suggestion that non-introspective inference processes are involved in metacognition, intuition, and other kinds of private self-knowledge, none of this evidence completely discounts a role for introspection. Before giving up entirely on metacognition as introspective knowledge, we should consider the possibility that non-introspective explanations of self-knowledge may complement, rather than supplant, true introspection. Perhaps at least part of the self-knowledge that we develop as we enter adulthood can be attributed to insight rather than inference. If we really do develop genuine introspection through experience and learning, then one way to demonstrate its potential is by investigating ways that it can be improved.
It was long thought that self-focus, meditation, concentration, hypnosis, self-reflection, psychoactive chemicals, and similar techniques can improve one’s insight into the brain’s functioning, mental state, decision-making process, causes of emotions, buried memories, and other such private information. For several decades, researchers have attempted to uncover evidence for this intuitive idea that paying attention to our mental state should improve introspective accuracy. Informally termed the “perceptual accuracy hypothesis” by Paul Silvia and Guido Gendolla, their 2001 review of over 30 years of research concludes that evidence to support it remains elusive. Any apparent insight improvements made by subjects are better explained by an increase in motivation to improve the consistency of reports with self-perception. That is, self-reflection only improves the verisimilitude of confabulated reports – how real they sound rather than how real they are. Another review, by Wilson and Dunn, published in 2003, concludes the same regarding the potential for improving self-knowledge through introspection. A more recent review from 2011 by Bollich, Johannet, and Vazire comes to essentially the same verdict – the authors conclude: “The road to self-knowledge likely cannot be traveled alone, but must be traveled with close others.”
It has also been suggested (eg, Peter White, 1988) that perhaps we do have reliable introspective capabilities, but that what is unreliable are the verbal reports that subjects provide. However, this possibility cannot account for the research reviewed here that does not collect introspective reports directly, but deduces self-knowledge some other way. Furthermore, unreliable verbal reports would be just as troubling as an actual lack of insight, since they would still undermine any research that relies on such reports, our interpersonal relationships, and any social interaction in which we expect others to accurately report on their self-knowledge.
In reality, no amount of failure to demonstrate it can completely eliminate the role of introspection in metacognition, intuition, or any other kind of private self-knowledge. However, much evidence serves to minimize its role, if any, in many circumstances. This review is not intended to disparage further research into introspection, but simply to remind us that we should not take it for granted – that in many important cases, it fails us.