Essays at the intersection of marketing and life.


The Way You Think Can Save Your Life

Almost ten years ago, I returned to full-time education for two years, completing a post-grad in psychology. To celebrate this enriching experience, I am sharing some of my favourite essays on themes of psychology relevant to today’s culture. Each represents over 40 hours of reading, thinking and writing (at least, that’s what my gut tells me). The essay below originally had the snappy title ‘Discuss the extent to which cognitive heuristics can be explained by research into the claim that there are two systems of rational thinking.’


On 26th December 2004, ten-year-old Tilly Smith was walking along the beach with her mother in Thailand when she noticed a rapid withdrawal of seawater from the shore, accompanied by gurgling froth. Tilly instantly concluded that a tsunami was about to occur. She pleaded with her mum to leave, but her mother could see no reason to panic, and stayed. Tilly, compelled to act but incapable of a convincing explanation, ran from the beach, screaming (United Nations International Strategy for Disaster Reduction Secretariat, 2005).


Anyone who has ever been confronted with a dessert trolley and dithered knows intuitively that the brain is capable of delivering two opposing solutions to a perceived scenario. One can be in two minds about many things (Darlow & Sloman, 2010). This essay sketches the context in which rational thinking has emerged, and explains the role of heuristics in human decision making, with appropriate evidence. It outlines the tension in the literature between logic and intuition, and explains how theoretical advances have helped show why both have an adaptive role to play. Some brief commentary on the evidence from neural imaging of how humans think is offered. Finally, conclusions are drawn with regard to the dual-system approach and the role of heuristics within this framework. The case of Tilly Smith is revisited in this context.



Aristotle advanced the enduring idea of two kinds of knowledge processes: demonstrative proof, which reveals irrefutable fact, and probable reasoning, which delivers a less certain result (Gigerenzer & Todd, 1999). Exploration of these processes, and how they may be applied both optimally and maximally, has ebbed and flowed through the centuries. Like many other phenomena in psychology, the theory of thinking has tended to reflect its political landscape: when the Reformation challenged certainty, for example, this encouraged the development of probability to redefine uncertainty (Gigerenzer & Todd, 1999). Later, Bernoulli’s Expected Utility Theory – in the presence of risky outcomes, higher expected value outcomes are preferred – explained the role of the rational actor, and drives much of economics to this day, with the support of Bayesian logic (Hardman, 2009). Western culture has embraced these modes of rational thinking, generally agreeing that a normative system of ‘correct reasoning’ is one based on logic and probability (Evans & Over, 1996).
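The expected-value calculation behind Bernoulli’s rational actor can be made concrete in a few lines. This is a minimal sketch, not from the essay itself: the gambles and payoffs are hypothetical, and Bernoulli’s full theory weights payoffs by a utility function rather than taking them at face value.

```python
# Toy sketch of expected-value reasoning: each option is a list of
# (probability, payoff) pairs; the rational actor picks the option
# with the highest probability-weighted sum. Gambles are hypothetical.

def expected_value(gamble):
    """Probability-weighted average payoff of a risky option."""
    return sum(p * payoff for p, payoff in gamble)

safe = [(1.0, 50)]               # a certain 50
risky = [(0.5, 120), (0.5, 0)]   # a coin flip for 120 or nothing

print(expected_value(safe))   # 50.0
print(expected_value(risky))  # 60.0 -- the 'rational' choice here
```

On this normative account, the risky gamble should always be preferred; the rest of the essay is about why real people so often decline it.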


i) Context

The way people actually think has long posed a conundrum to psychologists: humans are demonstrably the most successful decision-makers in the history of the planet, and yet there is abundant prima facie evidence of a lack of logical thinking in their everyday decisions (Hardman, 2009). This paradox was described by Simon (1956) as the consequence of ‘bounded rationality’ – the idea that people are as rational as their context and processing limitations permit. Simon coined ‘satisficing’ – a blend of satisfying and sufficing – to explain this phenomenon, and critiqued economic theory as lacking the elements of perception and cognition that explain human behaviour.


This tension addressed by Simon is certainly not resolved. Baron (2000) almost becomes a proselytiser for rational thinking in describing what it is: the thinking we would all prefer, if our best interests were made clear, in order to achieve our goals. Such an assertion, albeit a tad arrogant, is supported by many. Karder (2000) declared that being more analytic and less intuitive would likely help in developing more rewarding and effective solutions. Rationality, it would appear, is still held in good favour: ‘it is the desired state for good decisions’ (Eysenck & Keane, 2010, p. 562). In short, a literature review reveals that there has long been – and in some cases still is – an illogical elephant in the room. It took two scholars originally from The Hebrew University in Jerusalem to name that ‘elephant’, and shift the debate.


ii) Exploding Bernoulli

Kahneman and Tversky (1973) exposed the frailties of Bernoulli’s rational man in a series of experimental and theoretical work which continues to exert great influence today. Kahneman explains the connecting thought across all of their work in simple terms: when thinking, human beings often jump to conclusions by answering a question that is easier than the one asked (Kahneman, 2003). The world of heuristics and biases became their focus and was the impetus for clarifying theoretical developments which ensued.


iii) Focus on heuristics – mental rules of thumb

Heuristics are mental shortcuts which allow people to solve problems and make judgments quickly and efficiently (Eysenck & Keane, 2010). They are fast and frugal processes which involve rapid processing of incomplete information. Heuristics are characterised by an implicit process and an explicit outcome. Indeed, heuristics are thought by some to be the dominant method by which humans reason (Gigerenzer, 2007). Gladwell’s Blink (2005) popularised heuristics with its evocative sub-title: the power of thinking without thinking.



Kahneman (2002) characterises the mind as a system of jumps to conclusions, facilitated by heuristics. Much of his scholarship has been in the exposure of the biases, fallacies and errors which can attend their usage (Tversky & Kahneman, 1983; Kahneman, 2003). In the last 35 years or so, much experimental work has helped to define the territory of heuristics.


The availability heuristic: this involves over-estimating the frequencies of events on the basis of how easy or difficult they are to recall from long-term memory. Lichtenstein, Slovic, Fischhoff, Layman & Combs (1978) instructed 660 adults to judge the frequency of death from various causes and found a consistent and systematic bias in their responses. The authors concluded that this was due to over-estimating causes which attracted more publicity (e.g. murder) over those patently more likely to occur (e.g. suicide) – the availability heuristic at work. Most interesting in this experiment was its kick in the tail: having been told to avoid such systematic biases, participants simply could not, and continued in their over-estimations. This demonstrates the deep-rooted trust humans place in gut judgement, irrespective of ‘the facts’.


The representative heuristic: this is the assumption that typical members of a category are encountered most frequently (Kahneman & Tversky, 1973). This heuristic is demonstrated by the most cited experiment in the canon of heuristics and biases – the ‘Linda Problem’. Linda was presented as a 31-year-old, outspoken, smart woman with social concerns, who was an activist. 173 participants had to decide (among other options) which was more probable – that Linda was a bank teller, or a feminist bank teller. 89% favoured the latter, even though mathematically this must be less likely. The representative heuristic is exposed in that participants favoured the scenario most typical rather than most likely. Further, the result exposes the conjunction fallacy: the mistaken belief that the occurrence of a combination of events is more likely than one of those events on its own. In short, a mathematical impossibility is embraced by participants (Eysenck & Keane, 2010).
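The arithmetic behind the ‘mathematical impossibility’ is simply the conjunction rule: P(A and B) = P(A) × P(B given A), which can never exceed P(A). A minimal sketch, with probabilities that are entirely hypothetical and chosen only for illustration:

```python
# The conjunction rule behind the 'Linda Problem'. The numbers below are
# hypothetical: whatever values you pick, the conjunction 'feminist bank
# teller' can never be more probable than 'bank teller' alone.

p_bank_teller = 0.05            # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.30  # hypothetical P(feminist | bank teller)

# P(bank teller AND feminist) = P(A) * P(B|A) <= P(A), since P(B|A) <= 1
p_feminist_bank_teller = p_bank_teller * p_feminist_given_teller

assert p_feminist_bank_teller <= p_bank_teller
print(p_feminist_bank_teller < p_bank_teller)  # True
```

The 89% of participants who favoured the conjunction were, in effect, ranking the smaller product above one of its own factors.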


The recognition heuristic: this describes the tendency to favour the option that is most recognised. It is human nature to ‘take the best’ based on this rule of thumb. Borges, Goldstein, Ortmann, & Gigerenzer (1999) demonstrated the ability of the recognition heuristic to beat the experts in a provocative paper entitled ‘Can ignorance beat the stock market?’ 480 novices chose where to invest in foreign markets based solely on the foreign companies that they recognised. In depending on this ‘take the best’ heuristic, the participants beat expert fund managers who had a wealth of information at their disposal to aid their decisions.
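The decision rule itself is strikingly simple. The toy sketch below is my own illustration, not the Borges et al. procedure: the recognition set and company names are hypothetical, and real applications compare many pairs of options.

```python
# Toy sketch of the recognition heuristic: given two options, favour the
# one you recognise; if you recognise both or neither, the heuristic
# gives no guidance and you must guess (here: the first option).

recognised = {"Coca-Cola", "Siemens"}  # hypothetical recognition set

def recognition_choice(a, b, known=recognised):
    """Return the option the recognition heuristic favours."""
    a_known, b_known = a in known, b in known
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return a  # no discrimination possible; fall back to a guess

print(recognition_choice("Siemens", "Obscure Holdings AG"))  # Siemens
```

The heuristic ignores every attribute of the options except one bit of information – recognition – which is precisely what makes it ‘fast and frugal’.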



“I never came upon any of my discoveries through the process of rational thinking” – Albert Einstein (Isaacson, 2007)


The present author detects a subtext of condescension from many quarters with regard to the usage of heuristics. Kahneman, despite much protesting, has built his career on explaining the ‘bounds of rationality’, and has continually quoted examples of errors, biases and fallacies. The judgemental valence of his vocabulary is noted. Eysenck and Keane do not hold back in their surprise at the pervasiveness of heuristics, which is “puzzling given that most heuristics can lead to errors” (Eysenck & Keane, 2010, p. 511). Stanovich & West go further, pointing out that people of higher IQ are less likely to employ heuristics and less likely to fall ‘victim’ to their many fallacies (Stanovich & West, 2000).


Gigerenzer leads the discourse in defence of heuristics. He points out that the meaning of heuristics – ‘serving to find out’ – has almost been inverted in modern usage, where heuristics often stand for weak or imperfect thinking (Goldstein & Gigerenzer, 2002). His assertion is simple: to make good decisions, we must either ignore information or act on incomplete information (Gigerenzer, 2007). Judgements in the real world are not subject to mathematical assessment, and for good reason: rigid knowledge ignores the ambiguity of life. As a consequence, our natural language and behaviour do not follow the laws of logic. Hence, in the ‘Linda Problem’, for example, although participants’ responses may have been logically incorrect, they were likely the most relevant in the situation evoked. This tension between truth and logic is an ongoing issue, and one that was rightly addressed by the theorists.


i) Introduction

Evans & Over (1996) made an important contribution in distinguishing between different kinds of rationality. Rationality-1 is thinking led by personal behaviour seeking optimal solutions; Rationality-2 is thinking which is ‘impersonal’, objective and more maximal in nature. This had the effect of making the debate holistic, implicitly accepting that there is no one single ‘right’ decision in many instances. Bernoulli’s hegemony was coming to a close (Kahneman, 2003). These theorists’ work was a distillation of centuries of thinking in the field, using empirical evidence to build the case that tacit, parallel processes of thinking should sit alongside explicit and deliberative processes (Evans & Over, 1996).


ii) Theoretical clarity – System 1 and System 2 thinking

The System 1 and System 2 framework was devised by Stanovich & West (2000). Its generic and non-emotional nomenclature has perhaps helped its broad acceptance. Its clarity of purpose is compelling.


System 1 describes a thinking system that is intuitive, effortless, associative, implicit and spontaneous (Kahneman, 2003). It channels latent intelligence and knowledge without cognitive load, capable of operating on multiple levels simultaneously. This thinking simply happens, below awareness (Eysenck & Keane, 2010). Neuroscientists posit that System 1 processes reside in ancient parts of the brain (basal ganglia; lateral temporal cortex), implying a primitive and adaptive mechanism (Lieberman, 2003). System 1 thinking is characterised by an emotional valence: it has within it a ‘beating heart’ of sorts, which sets it apart. Powering this system is the mechanism of heuristics.


System 2 describes a thinking system that is effortful, analytic, rule-governed, serial and controlled. Akin to the archetypal ‘thinking man’ much lauded in Western culture, this system employs conscious intelligence in methodical fashion, mostly engaging working memory to ‘calculate’ a response (Hardman, 2009). Awareness is the hallmark of this thinking process, and it is readily identified with Bernoulli’s theory of maximum utility (Kahneman, 2003) and normative Bayesian logic.


Evans (2006) developed a heuristic-analytic theory of reasoning which attempted to advance the dual-system construct. His breakthrough is in seeing heuristics (System 1) as the first port of call in human thinking, over-ridden by System 2 only when deemed appropriate (a decision which largely depended on the context and the intelligence of the individual actor). Kahneman (2003), whilst agreeing with Evans, points out that this control mechanism is inconsistent in its application and effect. Such a serial view of thinking has some face validity, and is coherent with the ‘cognitive miser’ approach much admired in cognitive psychology (Eysenck & Keane, 2010).
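Evans’ serial sequence – a heuristic default answer, overridden by analytic processing only when engaged – can be caricatured in a few lines. This is a toy sketch of the control flow described above, not Evans’ formal model; the functions and answers are hypothetical stand-ins.

```python
# Toy sketch of the heuristic-analytic sequence: System 1 always proposes
# an answer first; System 2 replaces it only if the actor engages
# deliberate checking. All functions and answers are hypothetical.

def decide(question, heuristic, analytic, engage_analytic=False):
    answer = heuristic(question)   # System 1: fast default response
    if engage_analytic:            # System 2: optional, effortful override
        answer = analytic(question)
    return answer

gut = lambda q: "feminist bank teller"   # typicality-driven answer
check = lambda q: "bank teller"          # probability-driven answer

print(decide("Linda", gut, check))                        # feminist bank teller
print(decide("Linda", gut, check, engage_analytic=True))  # bank teller
```

Kahneman’s caveat maps onto the `engage_analytic` flag: whether and when the override fires is inconsistent across people and contexts, which is precisely the weak point he identifies.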



Kahneman and Frederick (2002) caution against accepting too simplistic a relationship between Systems 1 & 2 and heuristics / logic. Heuristics, they point out, can be rule-based and deliberative even though they are simpler, faster, and less resource-intensive than rational calculation. Assigning heuristics to intuition and rational decision making to logic would also imply that deliberation always makes better decisions (Darlow & Sloman, 2010).


At the juncture of System 1 and System 2 thinking is the belief bias. We are often confronted with things that are logically valid but simply unbelievable. (This quandary is also the inspiration of much consumer marketing and almost all of the Country & Western repertoire.) In effect, the belief bias reveals a confrontation between emotion and reason, implicit and explicit thinking, intuition and logic. It can also be accounted for by the balancing of memory – short term (working memory, sharp but volatile) and long term (blunted, but resilient). It is too simplistic to see heuristic reasoning and analytic reasoning as two opposing processes – one implicit and the other explicit. Osman (2004) suggests a 2 × 2 matrix: implicit and explicit heuristic processing, and implicit and explicit analytic processing. This approach, more nuanced and ‘real world’ in its conception, is likely the focus of future work in the field.


Although there is emerging agreement in the literature regarding the behavioural processes that underpin the two systems of rational thinking, such agreement is absent regarding their neural basis. Lieberman (2003) offers the most complete view in proposing his Reflective–Reflexive theory of social judgement, positing that intuitive decisions engage the basal ganglia and lateral temporal cortex. Deliberative decision-making activates the anterior cingulate and prefrontal cortex – and even the temporal lobe. These findings are in line with Goel and Dolan’s (2003) work using syllogisms, whereby the prefrontal cortex is activated when correct System 1 thinking is invoked. Some commentators note that neuro-imaging cannot identify location at specific times in the thinking process; thus, its conclusions are correlational rather than causal (Darlow & Sloman, 2010). The work continues.



The System 1 & 2 distinction and debate lies on a current fault-line of popular culture: is it best to be logical (think Michael O’Leary and Ryanair) or intuitive (think Oprah, and Self-help books)? System 1 & 2’s integrative structure helps explain why this is a fallacious choice. As ambiguity is the rule of life, it would be impossible to operate in a purely logical manner. Intuition is the steering wheel through life. Intelligence is often at work without conscious thought (Gigerenzer, 2007). Logic too has its clear place in optimal decision making.

On reflection, the author sees Systems 1 & 2 as adaptive thinking systems which exist symbiotically and work together in the best interests of the individual. Heuristics are the mechanism by which intuition, a critical expression of implicit intelligence, expresses itself – and the Systems Theory accords it its place. The idea that the emotional is required for the rational to function is a very active field of psychological enquiry in neuroscience, led by Damasio (1995). Perhaps the issue is not how to switch from System 1 to System 2, but rather how to make heuristic thinking even more accurate. The author’s gut judgement suggests this is a focus for future research.



Tilly Smith warned authorities of the approaching tsunami in time to clear the beach. Her actions on the day of the Asian tsunami saved 100 lives. The ten-year-old’s intuitive thinking was powered by a previous school lesson on the danger signs of natural disasters. Her mother, caught in the headlights of logic, had refused to quit the beach. Penny Smith’s explanation of what she eventually decided to do is instructive. Perhaps unusual by Kahneman’s standards but less so for Gigerenzer, she essentially described System 1 over-riding System 2: “I didn’t know what a tsunami was, but seeing your daughter so frightened made you think something serious must be going on.” She followed her gut, moved to higher ground, and saved her own life (National Geographic, 2005).





  • Borges, B., Goldstein, D. G., Ortmann, A., & Gigerenzer, G. (1999). Can ignorance beat the stock market? In G. Gigerenzer, P. M. Todd, & the ABC Research Group, Simple heuristics that make us smart (pp. 59-72). New York: Oxford University Press


  • Darlow, A. & Sloman, S. (2010). Two systems of reasoning: architecture and relation to emotion. Cognitive Science, Vol 1 (print pending)


  • Damasio, A. (1995). Descartes’ Error: emotion, reason and the human brain. London: Harper Perennial


  • Evans, J. St. B. T. & Over, D. E. (1996). Rationality and reasoning. Hove: Psychology Press


  • Evans, J. St. B. T. (2006). The heuristic-analytic theory of reasoning: extension and evaluation. Psychonomic Bulletin & Review, 13 (4), 378-395


  • Eysenck, M. & Keane, M. (2010). Cognitive Psychology – a student’s handbook, 6th edition. New York: Psychology Press


  • Gigerenzer, G. (2007). Gut feelings: the intelligence of the unconscious. London: Viking


  • Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75-90.


  • Gigerenzer, G., & Todd, P. (1999). Simple heuristics that make us smart. New York: Oxford University Press


  • Gladwell, M. (2005). Blink – the power of thinking without thinking. London: Little, Brown & Co.


  • Hardman, D. (2009). Judgement and decision making – psychological perspectives. Chichester: BPS Blackwell


  • Isaacson, W. (2007): Einstein: His Life and Universe. New York: Simon & Schuster


  • Kahneman, D. (2003) A Perspective on Judgment and Choice: mapping bounded rationality, American Psychologist, 58 (9), 697–720



  • Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49-81). New York: Cambridge University Press


  • Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251


  • Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 552-578


  • Lieberman, M. D. (2003). Reflective and reflexive judgment processes: A social cognitive neuroscience approach. In J. P. Forgas, K. R. Williams, & W. von Hippel (Eds.), Social judgments: Implicit and explicit processes (pp. 44-67). New York: Cambridge University Press



  • Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin & Review, 11, 988-1010


  • Simon, H. A. (1956). Rational choice and the structure of environments. Psychological Review, 63, 129–138


  • Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioural and Brain Sciences, 23, 645-726


  • Tversky, A., & Kahneman, D. (1983). Extensional vs. intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293–315

