EXPERT PERSPECTIVE — In recent years, the national conversation about disinformation has usually centered on bot networks, foreign operatives, and algorithmic manipulation at industrial scale. These concerns are legitimate, and I spent years inside CIA studying them with a level of urgency that matched the stakes. But an equally important story is playing out at the human level. It is a story that requires us to look more closely at how our own instincts, emotions, and digital habits shape the spread of information.
This story reveals something both sobering and empowering: falsehood moves faster than truth not merely because of the technologies that transmit it, but because of the psychology that receives it. That insight is not just the intuition of intelligence officers or behavioral scientists. It is backed by hard data.
In 2018, MIT researchers Soroush Vosoughi, Deb Roy, and Sinan Aral published a groundbreaking study in Science titled "The Spread of True and False News Online." It remains one of the most comprehensive analyses ever conducted on how information travels across social platforms.
The team examined more than 126,000 stories shared by 3 million people over a ten-year period. Their findings were striking. False news traveled farther, faster, and more deeply than true news. In many cases, falsehood reached its first 1,500 viewers six times faster than factual reporting. The most viral false stories routinely reached between 1,000 and 100,000 people, while true stories rarely exceeded a thousand.
One of the most significant revelations was that humans, not bots, drove the difference. People were more likely to share false news because the content felt fresh, surprising, emotionally charged, or identity-affirming in ways that factual news often does not. That human tendency is becoming a national security concern.
For years, psychologists have studied how novelty, emotion, and identity shape what we pay attention to and what we choose to share. The MIT researchers echoed this in their work, but a broader body of research across behavioral science reinforces the point.
People gravitate toward what feels surprising. Novel information captures our attention more effectively than familiar facts, which means sensational or fabricated claims often win the first click.
Emotion adds a powerful accelerant. A 2017 study published in the Proceedings of the National Academy of Sciences showed that messages evoking strong moral outrage travel through social networks more rapidly than neutral content. Fear, disgust, anger, and shock create a sense of urgency and a feeling that something must be shared quickly.
And identity plays a subtle but significant role. Sharing something provocative can signal that we are well informed, especially vigilant, or aligned with our group's worldview. This makes falsehoods that flatter identity or affirm preexisting fears particularly powerful.
Taken together, these forces form what some have called the "human algorithm": a set of cognitive patterns that adversaries have learned to exploit with growing sophistication.
During my years leading digital innovation at CIA, we watched adversaries expand their strategy beyond penetrating networks to manipulating the people on those networks. They studied our attention patterns as closely as they once studied our perimeter defenses.
Foreign intelligence services and digital influence operators learned to seed narratives that evoke outrage, stoke division, or create the perception of insider knowledge. They understood that emotion could outpace verification, and that speed alone could make a falsehood feel believable through sheer familiarity.
In the current landscape, AI makes all of this easier and faster. Deepfake video, synthetic personas, and automated content generation allow small teams to produce large volumes of emotionally charged material at unprecedented scale. Recent assessments from Microsoft's 2025 Digital Defense Report document how adversarial state actors (including China, Russia, and Iran) now rely heavily on AI-assisted influence operations designed to deepen polarization, erode trust, and destabilize public confidence in the U.S.
This tactic does not require the audience to believe a false story. Often, it merely aims to leave them unsure of what truth looks like. And that uncertainty itself is a strategic vulnerability.
If misguided emotions can accelerate falsehood, then a thoughtful and well-organized response can help ensure factual information arrives with greater clarity and speed.
One approach involves increasing what communication researchers sometimes call truth velocity: getting accurate information into public circulation quickly, through trusted voices, and with language that resonates rather than lectures. This does not mean replicating the manipulative emotional triggers that fuel disinformation. It means delivering truth in ways that feel human, timely, and relevant.
Another approach involves small, practical interventions that reduce the impulse to share dubious content without thinking. Research by Gordon Pennycook and David Rand has shown that brief accuracy prompts (small moments that ask users to consider whether a headline seems true) meaningfully reduce the spread of false content. Similarly, cognitive scientist Stephan Lewandowsky has demonstrated the value of clear context, careful labeling, and simple corrections in countering the powerful pull of emotionally charged misinformation.
Organizations can also help their teams understand how cognitive blind spots influence their perceptions. When people know how novelty, emotion, and identity shape their reactions, they become less susceptible to stories crafted to exploit those instincts. And when leaders encourage a culture of thoughtful engagement, where colleagues pause before sharing, check the source, and notice when a story seems designed to provoke, it creates a ripple effect of sounder judgment.
In an environment where information moves at speed, even a brief moment of reflection can slow the spread of a harmful narrative.
A core part of this challenge involves reclaiming the mental space where discernment happens, what I refer to as Mind Sovereignty™. This concept is rooted in a simple practice: notice when a piece of information is trying to provoke an emotional response, and give yourself a moment to evaluate it instead.
Mind Sovereignty™ is not about retreating from the world or becoming disengaged. It is about navigating a noisy information ecosystem with clarity and steadiness, even when that ecosystem is designed to pull us off balance. It is about protecting our ability to think clearly before emotion rushes ahead of evidence.
This inner steadiness, in some ways, becomes a public good. It strengthens not just individuals, but the communities, organizations, and democratic systems they inhabit.
In the intelligence world, I always believed that truth is resilient, but it cannot defend itself. It relies on leaders, communicators, technologists, and, more broadly, all of us, who choose to treat information with care and intention. Falsehood may enjoy the advantage of speed, but truth gains power through the quality of the minds that carry it.
As we develop new technologies and confront new threats, one question matters more than ever: how do we strengthen the human algorithm so that truth has a fighting chance?
All statements of fact, opinion, or analysis expressed are those of the author and do not reflect the official positions or views of the U.S. Government. Nothing in the contents should be construed as asserting or implying U.S. Government authentication of information or endorsement of the author's views.
