“Psychology of Intelligence Analysis” Notes

Psychology of Intelligence Analysis by Richards J. Heuer, Jr. is a useful resource for improving your intelligence analysis skills through better thinking and conscious countering of cognitive biases. The book is a collection of articles originally used in the CIA Directorate of Intelligence.

I read this because I’ve seen it recommended for cyber threat intelligence analysts.

You can download the free PDF, or purchase it in paperback, hardcover, or audiobook.

This page contains one or more affiliate links. As an Amazon Associate, I earn from qualifying purchases.

My notes follow.

Psychology of Intelligence Analysis by Richards J. Heuer, Jr.

Foreword

… information and expertise are a necessary but not sufficient means of making intelligence analysis the special product that it needs to be. A comparable effort has to be devoted to the science of analysis. This effort has to start with a clear understanding of the inherent strengths and weaknesses of the primary analytic mechanism (the human mind) and the way it processes information.

Dick Heuer makes clear that the pitfalls the human mental process sets for analysts cannot be eliminated; they are part of us. What can be done is to train people how to look for and recognize these mental obstacles, and how to develop procedures designed to offset them.

Introduction

Intelligence analysts should be self-conscious about their reasoning processes. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves.

Contributors to quality of analysis: Sherman Kent, Robert “Bob” Gates, Douglas MacEachin, Richards “Dick” Heuer.

Don’t reject the possibility of deception because you don’t see evidence of it; you won’t see evidence of properly executed deception.

Perception: Why Can’t We See What Is There To Be Seen?

“Initial exposure to blurred or ambiguous stimuli interferes with accurate perception even after more and better information becomes available.”

Memory: How Do We Remember What We Know?

“Hardening of the categories”: If people don’t have an appropriate category for something, they’re unlikely to perceive it or be able to remember it later. If categories are drawn incorrectly, people are likely to perceive and remember things inaccurately.

Evidence is diagnostic when it influences an analyst’s judgment on the relative likelihood of various hypotheses. If an item seems inconsistent with all hypotheses, it may have no diagnostic value. Without a complete set of hypotheses, it’s impossible to evaluate the “diagnosticity” of the evidence.

A hypothesis can’t be proved even by a large body of evidence consistent with it, because that same body of evidence may be consistent with other hypotheses. A hypothesis can be disproved by a single item of evidence that’s incompatible with it.

Do You Really Need More Information?

Once an experienced analyst has the minimum info necessary to make an informed judgment, additional info generally doesn’t improve the accuracy of estimates. However, additional info leads the analyst to become more confident in the judgment (to the point of overconfidence).

Keeping an Open Mind

Questioning Assumptions: see how sensitive the judgment is to changes in the major variables; try to disprove assumptions; get alternative interpretations from those who disagree with you; don’t assume the other side thinks the same way you do (mirror-imaging).

Seeing Different Perspectives: imagine yourself in the future, explaining how the event could’ve happened; explain how your assumptions could be wrong; mentally put yourself in someone else’s place; find a “devil’s advocate” to critique your views.

Stimulating Creative Thinking:

  • Deferred Judgment: generate all ideas first, then evaluate them
  • Quantity Leads to Quality: quantity of ideas eventually leads to quality; the first ideas are usually the most common or conventional
  • No Self-Imposed Constraints: generate ideas without self-imposed constraints
  • Cross-Fertilization of Ideas: combine ideas and interact with other analysts

Structuring Analytical Problems

  1. List attributes you want to maximize
  2. Quantify relative importance of each attribute, to add up to 100%
  3. For each option you’re considering, rate it on each attribute
  4. Calculate which option best fits your preferences
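
The steps above amount to a weighted decision matrix. A minimal sketch in Python, where the attribute names, weights, and ratings are all invented for illustration:

```python
# Weighted-attribute scoring: weights sum to 1.0 (i.e., 100%),
# each option is rated on each attribute, and the best fit wins.
# All names and numbers below are illustrative, not from the book.
weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}

ratings = {
    "Option A": {"cost": 7, "speed": 5, "risk": 9},
    "Option B": {"cost": 6, "speed": 8, "risk": 6},
}

def weighted_score(option_ratings, weights):
    """Sum of (relative importance x rating) across all attributes."""
    return sum(weights[attr] * rating for attr, rating in option_ratings.items())

scores = {name: weighted_score(r, weights) for name, r in ratings.items()}
best = max(scores, key=scores.get)  # the option that best fits the stated preferences
```

With these invented numbers, Option A scores 6.8 and Option B scores 6.6, so Option A best fits the stated preferences.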

Analysis of Competing Hypotheses

Analysis of competing hypotheses (ACH) requires an analyst to explicitly identify all the reasonable alternatives and have them compete against each other for the analyst’s favor, rather than evaluating their plausibility one at a time.

  1. Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.
  2. Make a list of significant evidence and arguments for and against each hypothesis.
  3. Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the “diagnosticity” of the evidence and arguments; that is, identify which items are most helpful in judging the relative likelihood of the hypotheses.
  4. Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.
  5. Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove the hypotheses rather than prove them.
  6. Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.
  7. Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.
  8. Identify milestones for future observation that may indicate events are taking a different course than expected.
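
Steps 3 through 5 can be sketched as a small matrix of consistency ratings. Everything here (hypotheses H1–H3, evidence E1–E3, and the ratings) is invented for illustration:

```python
# ACH sketch: "C" = consistent with the hypothesis, "I" = inconsistent.
matrix = {
    "E1": {"H1": "C", "H2": "I", "H3": "C"},
    "E2": {"H1": "C", "H2": "C", "H3": "C"},  # rates every hypothesis alike
    "E3": {"H1": "I", "H2": "C", "H3": "C"},
}

# Step 4: drop evidence with no diagnostic value
# (an item that rates all hypotheses the same can't distinguish them).
diagnostic = {e: row for e, row in matrix.items() if len(set(row.values())) > 1}

# Step 5: try to disprove, so rank hypotheses by fewest inconsistent items.
hypotheses = next(iter(diagnostic.values())).keys()
inconsistencies = {h: sum(row[h] == "I" for row in diagnostic.values())
                   for h in hypotheses}
ranked = sorted(inconsistencies, key=inconsistencies.get)
```

Here E2 is discarded as non-diagnostic, and the hypothesis with the fewest inconsistent items ranks first; per step 7, the full ranking (not just the winner) is what gets reported.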

An unproven hypothesis has no evidence that it’s correct. A disproved hypothesis has positive evidence that it’s wrong.

When you’re tempted to write, “There’s no evidence that … ,” ask yourself, “If this hypothesis is true, can I realistically expect to see evidence of it?”

This procedure leads you through a rational, systematic process that avoids some common analytical pitfalls. It increases the odds of getting the right answer, and it leaves an audit trail showing the evidence used in your analysis and how this evidence was interpreted. If others disagree with your judgment, the matrix can be used to highlight the precise area of disagreement. Subsequent discussion can then focus productively on the ultimate source of the differences.

What Are Cognitive Biases?

Cognitive biases are mental errors caused by subconscious mental procedures for processing info. They’re not caused by emotional or intellectual predisposition toward a certain judgment, unlike cultural bias, organizational bias, or bias from one’s self-interest.

Biases in Evaluation of Evidence

The Vividness Criterion: Give little weight to anecdotes and personal case histories, unless they’re known to be typical. Give them no weight if aggregate data based on a more valid sample is available.

Biases in Perception of Cause and Effect

People overestimate the extent to which other countries are pursuing a coherent, coordinated, rational plan, and thus also overestimate their own ability to predict future events in those nations. People also tend to assume that causes are similar to their effects, in the sense that important or large effects must have large causes.

When inferring the causes of behavior, too much weight is accorded to personal qualities and dispositions of the actor and not enough to situational determinants of the actor’s behavior. People also overestimate their own importance as both a cause and a target of the behavior of others. Finally, people often perceive relationships that do not in fact exist, because they do not have an intuitive understanding of the kinds and amount of information needed to prove a relationship.

Bias in Favor of Causal Explanations: Random events often look patterned.

Bias Favoring Perception of Centralized Direction: A country’s inconsistent policies may be the result of weak leadership, vacillation, or bargaining among bureaucratic or political interests, rather than duplicity or Machiavellian maneuvers.

Similarity of Cause and Effect: Major effects may be the result of mistakes, accidents, or aberrant behavior of an individual, rather than major causes.

  • Don’t overestimate the effect of a person’s or government’s internal personality or disposition on their behavior, and don’t underestimate the effect of their response to external situational constraints.
  • Don’t overestimate the effect of your response to your situation on your behavior, and don’t underestimate the effect of your personality.

Overestimating Our Own Importance: Don’t overestimate the likelihood that actions that hurt you were intentionally directed at you, and don’t underestimate the likelihood that those actions were the unintended consequences of decisions not related to you.

  • To determine whether a causal relationship exists, you must consider all four cells of a 2 x 2 contingency table: cases where A and B occur together, A without B, B without A, and neither A nor B.
  • There’s not enough data to say there’s a relationship between deception and high-stakes situations.
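
The 2 x 2 table can be made concrete with hypothetical counts (all numbers invented). The vivid cell, cases where A and B occur together, means nothing without the other three:

```python
# Hypothetical counts for a 2 x 2 contingency table.
a = 30  # A and B         (e.g., high stakes AND deception observed)
b = 10  # A and not-B     (high stakes, no deception)
c = 60  # not-A and B     (low stakes, deception observed)
d = 20  # not-A and not-B (low stakes, no deception)

rate_given_a = a / (a + b)          # proportion of B among A cases
rate_given_not_a = c / (c + d)      # proportion of B among not-A cases
# Both rates are 0.75: despite 30 salient cases in the first cell,
# this table shows no relationship between A and B.
```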

Biases in Estimating Probabilities

Anchoring: The final estimate lands close to the initial estimate. To combat it, consciously avoid using prior judgments as a starting point, or use formal statistical procedures.

Expression of Uncertainty: After vague expressions (“possible,” “probable,” “unlikely,” “may,” “could,” etc.), put the estimated odds or percentage range in parentheses.

Assessing Probability of a Scenario: multiply the probabilities of each individual event.
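
A quick worked example (step probabilities invented): even when every step in a scenario is individually "probable," the chain as a whole may not be.

```python
# Probability of a whole scenario = product of its (independent) steps.
# The individual step probabilities below are illustrative.
steps = [0.8, 0.7, 0.9, 0.6]

scenario_probability = 1.0
for p in steps:
    scenario_probability *= p
# Four "probable" steps yield a scenario probability of only about 0.30.
```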

Hindsight Biases in Evaluation of Intelligence Reporting

Hindsight biases: Analysts normally overestimate the accuracy of their past judgments. Postmortems normally judge that events were more foreseeable than they were.

To overcome hindsight biases, remind yourself of the uncertainty prior to a situation by asking yourself, “If the opposite outcome had occurred, would I have been surprised? If this report had told me the opposite, would I have believed it? If the opposite outcome had occurred, would it have been predictable given the info available at the time?”

Improving Intelligence Analysis

  1. Defining the problem: be sure to ask the right questions
  2. Generating hypotheses: identify all plausible hypotheses, then reduce them to a workable number of reasonable hypotheses
  3. Collecting information: collect info to evaluate all reasonable hypotheses
  4. Evaluating hypotheses: look for evidence to disprove hypotheses; consider using ACH
  5. Selecting the most likely hypothesis: choose the hypothesis with the least evidence against it; list other hypotheses and why they were rejected
  6. Ongoing monitoring of new information: specify criteria that would require reevaluation of hypotheses


Chad Warner
