How Our Minds Tell Fact from “Fake News”

In a world of “alternative facts” and identity thieves, science writer Sharon Begley explores the science of how our minds determine if something is true.

The email seems to offer a solution to a problem you’d heard of but weren’t sure you had: Spy Wiper has found a long list of malware, spyware, and other threats on your computer, but if you call the toll-free phone number, a technician—who has kindly asked for remote access to your computer—will walk you through the steps needed to disinfect your machine.

You might think you’re too smart to be swindled by this or other scams. But not everyone is so fortunate. Microsoft estimates that this and similar tech scams (which in fact either upload malware to your computer, charge hundreds of dollars to remove nonexistent or planted bugs, or exploit the access you’ve provided to steal your identity or financial information) net their perpetrators $1.5 billion a year. Facebook “love scams,” in which criminals posing as US service members prey on the credulous and soft-hearted, persuading victims to wire money so the “soldier” can fly back to the US, netted $362 million in 2018. Some victims—well-educated, productive members of their communities—have lost tens of thousands of dollars to this fraud.

How the mind processes information has long been a focus of cognitive psychology, but now researchers are pursuing a specific puzzle: After that processing, how does the mind assess whether the information is true? What makes some people more gullible than others? And how can we navigate a “post-truth” world, where political and even health and science debates are framed not by anything so quaint as shared facts but by emotions and tribalism?

What to Do When Facts Aren’t Facts Anymore

“Figuring this out is especially important now,” said psychologist Christian Unkelbach of Germany’s University of Cologne. People are less trusting of “traditional cues to truth [such as] a newspaper or textbook or encyclopedia.” Scientists, historians, and others who were once trusted to tell us fact from fiction “need to understand how people come to believe information.” Or, as Unkelbach and colleagues argued in a 2019 paper, “In a world of ‘alternative facts’ and ‘fake news,’ it is paramount to understand the psychological processes by which people come to believe information.”

I’ll go out on a limb here and say objective facts exist. Something happened or it didn’t. Something exists or it doesn’t. This tree is the tallest in the forest, this caller is a friend in trouble, the Nazis did have death camps, the climate is warming due to human activity, green plants do turn sunlight into energy via photosynthesis.

Let’s get one uncomfortable reality out of the way. It’s easy for educated readers to believe gullibility is a problem for other people, whereas they themselves dispassionately evaluate claims by seeking out objective information, deploying their reasoning skills, and thereby separating fact from fiction. The problem with that belief is that the world has become too complex for anyone to know everything through firsthand observation and accumulated knowledge. We must therefore rely on experts. If you have a view on the safety of childhood vaccines, or the size of Donald Trump’s inaugural crowd, let me politely suggest that at least part of that view is (unless you are a developmental neuroscientist or were at the Capitol steps on January 20, 2017, and are a really good crowd counter) based on whom you have decided to trust.

That’s what I mean by tribalism: We identify some people as “like us” (for reasons we will have to leave to a future column). We observe that people like us believe X and not Y. When we have no personal expertise on something, we default to the tribe’s position. We sometimes, however, skip the overt tribal step and believe an assertion because it fits with our other beliefs.

If that seems like an artifact of prehistoric days, and surely one the brain would have overcome by 2020, think again. All of the psychological foibles that contribute to gullibility have roots in the brain’s evolutionary past.

Beyond tribalism and concordance with existing beliefs, our judgment of the truth of assertions reflects what psychologists call the repetition-induced truth effect. Simply put, the more often we hear an assertion, other things being equal, the more likely we are to believe it. That reflects a basic mental mechanism, namely, the effects of recognition and familiarity.

It’s Scientifically Proven: Repetition Makes Lies Sound True

In a typical lab experiment on the power of repetition, participants hear or read statements whose truth they can’t judge by personal expertise or by defaulting to what their ideological tribe believes, such as “the thigh bone is the human body’s longest.” They evaluate the truth of the statements by whatever criteria they like, including guessing. Days or even months later, they judge the truth of another set of statements, some from the previous list and some novel ones.

In study after study, people evaluate repeated information as truer than novel statements, and truer than they did the first time they heard it. On first exposure, statements like the one about the thigh bone are judged true a little less than half the time; on second exposure, close to 70% of the time, Unkelbach said.

It’s a powerful effect: Research participants judge repeated statements from sources they were warned are not honest as more true than novel statements from sources whose credibility was not characterized. 

The power of repetition as a proxy for truth has been recognized for decades; philosopher Ludwig Wittgenstein ridiculed it, calling it equivalent to buying “several copies of the morning paper to ensure that the content is true.” But only now are psychologists figuring out the reasons.

One is recognition or familiarity. The human brain evolved to treat information it can easily process as truer than information it struggles to understand; the latter has become a red flag for, “this might not be so.” In fact, when people read “The thigh bone is the longest bone in the human body” in Apple Chancery font, they judge it as less true than “The thigh bone is the longest bone in the human body” in good ol’ Times Roman. This “fluency effect” partly explains why statements we have heard before are judged as truer: Because the brain has a memory of them, Unkelbach said, they are more easily processed.

In simpler times, that mechanism served the brain well. “Repetition and its psychological consequences—familiarity, recognition, fluency—are valid cues to truth,” Unkelbach said. And it seems to be so fundamental an aspect of brain function that people differ very little in their vulnerability to the repetition effect, at least when a statement has no ideological component or tribal association; not even higher intelligence weakens the repetition effect. Unfortunately, those with an ideological agenda exploit this effect, Unkelbach said, with “strategically repeated” claims, especially on social media.

How Emotions Influence What We Believe to Be True

Beyond repeated exposure, our emotional state also influences our credulity. If believing an assertion meets an emotional need (in the Facebook love scam, to feel wanted and useful; in tech scams, to feel safe; with deep fakes, to have our opinions confirmed), we are more likely to do so.

In fact, argues psychologist Joseph Forgas of Australia’s University of New South Wales, credulity is humans’ default cognitive setting. This is our “baseline strategy,” as he calls it, because learning from others has been adaptive throughout human history: We cannot learn everything we need through firsthand experience, so accepting what others tell us has been, in Forgas’s words, “a major source of our evolutionary success.”

Overriding that default position, as well as the various needs we have to believe, starts with what the emotional brain tells the cognitive brain. The latter interprets positive mood as signaling safety but takes negative mood as a sign that something is amiss. Anger energizes us, for instance. Sadness “often functions as a mild alarm signal,” Forgas says. Fear puts the mind on high alert. All three negative emotions trigger more focused attention, which “produces more cautious [and] attentive” information processing, he says. As a result, the brain pays closer attention to the quality of arguments, a foundation of critical thinking and, if warranted, skepticism. 

In his experiments, participants read nonsense statements like “syntagm is the antonym of paradigm” and “good health imparts reality to subtle creativity.” Those made to feel a little sad, by watching a heartbreaking video or recalling a past personal tragedy, judged the statements as less true than did cheerful people. In practical terms, he doesn’t advocate manipulating your emotional state this way before, say, watching political advertisements, in hopes of being less gullible. But other steps can help. Monitoring your mood and recognizing that full-on positive affect seems to make us more gullible, asking if believing a claim meets an emotional need (to, say, feel like a kind, generous person), recognizing whether accepting an assertion confirms a cognitive bias: Any and all can reduce gullibility. The mind may be evolutionarily wired for gullibility, but recognizing that is the crucial first step to rising above our cave-dweller brains.