IN RECENT WEEKS, intelligence has dominated the headlines. On February 26th American press reports described new intelligence suggesting that the covid-19 virus had escaped from a Chinese laboratory. On March 1st, American intelligence agencies published a report saying that Havana syndrome, a strange pattern of apparent brain injuries among American spies and diplomats, was not thought to be the result of enemy action. And on March 7th media outlets in America and Europe suggested that explosions at the Nord Stream gas pipelines in September might have been the work of a pro-Ukrainian group. How should these claims be judged?
Intelligence normally evokes espionage. But intelligence collection—a report from a human agent or an intercepted message—is just one half of the business. Once spies have intelligence, they need to make sense of it. In countries with good intelligence systems, that involves assessment. The most robust kind involves “all-source” assessment, which pulls together secret intelligence from human and technical intelligence services with information from diplomatic telegrams and open sources like the media.
Intelligence analysts piece together information to establish facts (does Iran have enough fissile material for a nuclear bomb?) or make a prediction (will it attempt to make one?). But a core part of their job is also to convey how uncertain they are about those assessments. One tool they use is “estimative language”. Take a recent tweet on intelligence from Britain’s Ministry of Defence. There was a “realistic possibility”, it said, that a Russian unit was being equipped with ageing T-62 tanks, and it was “highly likely” that upgraded sights would help with fighting at night.
These terms were not used casually. In 1964 Sherman Kent, a CIA analyst, coined the phrase “words of estimative probability”. His concern was that commonly used terms (President Vladimir Putin “may well” use nuclear weapons) meant different things to different people. His solution was to ensure everyone used specific words to mean specific probabilities.
So according to Britain’s “probability yardstick”, borrowed from American practice, a “realistic possibility” corresponds to a 40-50% probability. “Highly likely” means 80-90%. The point is to make sure that everyone is on the same page. The yardstick also emphasises that intelligence is never certain. (One joke among British intelligence insiders was that the government’s Joint Intelligence Committee would insist that it is “almost certain” the sun will rise in the east tomorrow.)
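The yardstick is, in effect, a lookup table from phrases to probability bands. A minimal sketch in Python, using only the two bands quoted above (a real yardstick defines a full ladder of phrases, from “remote chance” to “almost certain”, and the exact bands vary between agencies and editions):

```python
# Two bands of Britain's probability yardstick, as quoted in the text.
# A complete yardstick covers the whole 0-100% range; these are just
# the two phrases from the Ministry of Defence tweet.
YARDSTICK = {
    "realistic possibility": (0.40, 0.50),
    "highly likely": (0.80, 0.90),
}

def phrase_for(p):
    """Return the estimative phrase whose band contains probability p (0-1),
    or None if p falls outside the bands defined here."""
    for phrase, (lo, hi) in YARDSTICK.items():
        if lo <= p <= hi:
            return phrase
    return None
```

So an analyst who puts the odds of the tank deployment at 45% would write “realistic possibility”, and one who puts the night-sight claim at 85% would write “highly likely”; a figure of 65% would need a band not shown in this sketch.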
A second tool is “analytic confidence”. On February 26th the Wall Street Journal broke the news that America’s Director of National Intelligence had updated a report on the origins of covid-19 with a note that the Department of Energy—which has expertise on biological threats—now believed a lab leak was the most likely cause of the pandemic. Crucially, the WSJ headline omitted that this finding was held with “low confidence”. That, too, has a specific meaning.
Analysts usually attach a confidence level to every snippet of intelligence. Is the source reliable, or does he or she tend to lie? Is he or she in a position to know? They also attach a confidence level to their overall assessment—whether it is likely or not. American intelligence agencies define information as low-confidence if its credibility is “questionable”, it is “poorly corroborated” or there are “significant concerns” with the source. A tap on Mr Putin’s phone is probably reliable; hearsay picked up at a Moscow cocktail party is not.
Flimsy intelligence is more common than people think. “Most reporting is low-confidence,” says one person with extensive experience of assessment. That means that “we can’t really get to the bottom of whether the person who’s provided this information is in a position to know it.” Somewhat paradoxically, an agglomeration of low-confidence reports can cohere into a high-confidence assessment. That is often the case in counter-terrorism assessments, which require swift action on the basis of fragmentary knowledge.
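One way to see how weak reports can add up is a toy Bayesian calculation: if several independent reports are each only modestly more likely to appear when a hypothesis is true than when it is false, multiplying their odds together can still push the overall assessment well past “likely”. The numbers below are invented for illustration; this is not how agencies actually score reporting.

```python
def posterior(prior, likelihood_ratios):
    """Update a prior probability with independent reports via Bayes' rule
    in odds form: posterior odds = prior odds x product of likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Assumed numbers: a sceptical 10% prior, then five weak, independent
# reports, each only twice as likely under the hypothesis as not.
p = posterior(0.1, [2.0] * 5)  # roughly 0.78
```

Five flimsy reports lift a 10% prior to about 78%, which is why fragmentary counter-terrorism reporting can still justify swift action. The catch is the independence assumption: if the five reports all trace back to the same cocktail-party source, the multiplication is illegitimate.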
But to offer a striking claim without any confidence level is suspect. In October 2002 American intelligence analysts wrote a classified assessment of Iraq’s purported chemical and biological weapons, which was later used as justification for America and Britain to invade. When America published an unclassified version for public consumption that same month, it omitted a vital detail: that the spooks had low confidence in their ability to judge whether Iraq’s dictator, Saddam Hussein, would use those weapons or share them with terrorists—two things the administration was playing up publicly.
On the face of it, intelligence around the sabotage of the two Nord Stream gas pipelines, which connect Russia to Germany, looks thin. A review of intelligence described by the New York Times “suggests” the perpetrators were opponents of Mr Putin “but does not specify the members of the group, or who directed or paid for the operation.” American officials cited in the piece declined to describe the “strength of the evidence”, and said “there are no firm conclusions”. Though German news outlets have described alleged details of the plot, this sounds like a low-confidence judgment—one that should be taken with a sizeable pinch of salt.
Conversely, in early February 2022, British officials said it was “highly likely” that Russia would invade Ukraine and that they had “high confidence” the Kremlin was engineering a pretext to do so. Those claims reflected the clear-cut nature of the intelligence that had accumulated since the previous autumn. Even so, that crisis reveals a dilemma. Had Mr Putin abandoned his plans after they were exposed, the intelligence might have appeared to be wrong. Rather like Heisenberg’s uncertainty principle in physics, assessments that predict the future can also shape it.
Intelligence is also fiercely contested. In February 2022 France and Germany disbelieved American and British claims on Ukraine, despite seeing intelligence. Internal rows can be even fiercer. America’s intelligence community, commonly known as the IC, is made up of 18 different organisations, from the CIA to the Space Force. The director of national intelligence is supposed to produce national intelligence estimates (NIEs) that reflect a collective judgment. But dissent is common.
The NIE on Havana syndrome, published on March 1st, notes wildly varying confidence levels. Two agencies have high confidence in the claim that the symptoms were probably caused by natural or environmental factors, rather than a Russian weapon. Two others have only low confidence. The same divisions are apparent over the origins of covid-19. In an assessment by nine agencies, most think the virus emerged naturally. The FBI and Department of Energy disagree. But almost everyone—apart from the FBI—has low confidence in their respective views. That is not quite guesswork. But it is probably not what most people imagine when they think of intelligence. ■