The machine runs amok
Twisted facts, fake news and social media spoofs can turn society upside down. One UW team is working to help us through the infodemic.
It’s January. Armed supporters of President Trump have just stormed the U.S. Capitol; a deadly and dramatic siege unfolds on national TV and grips the nation. In this emotionally charged moment, a screenshot of an email from University of Washington President Ana Mari Cauce emerges on Facebook. In it, she announces the UW will remove all American flags from the Tacoma campus for the safety of staff and students.
The message has the earmarks of spoofing, an Internet scam. For example, it incorrectly identifies Cauce in her signature as “Interim President of UW Tacoma.” But that doesn’t stop this fabricated story from being shared, mostly in Facebook groups outside Seattle and the University community.
Emails from concerned alumni and community members trickle in, some requesting verification, others expressing anger and disgust. The Tacoma campus issues a disclaimer on its official Facebook page to squash the story before it spreads to a wider audience. Eventually, the rumor loses traction.
The rapid spread of this rumor is a small example of a much bigger problem, an “infodemic” of false and misleading information that has infected our digital and in-person social spaces. Some of it is misinformation that we unintentionally spread when we hit retweet without vetting the source or when we share a post from a trusted friend that later turns out to be untrue. Some of it is more malicious: disinformation deliberately sown by third-party actors to inflame passions and cause chaos. And the consequences, as with those Americans refusing the COVID-19 vaccine, can be deadly. What is hardest to reverse is the continued decline of trust, both in our institutions and in each other.
If misinformation and disinformation are deadly, is there an easy antidote? “I’m not optimistic, but I’m hopeful,” says Kate Starbird, the new faculty director of the UW’s Center for an Informed Public (CIP) and associate professor in the Department of Human Centered Design & Engineering. “I’m hoping we can make an impact, but these are big challenges we’re facing.”
In December 2019, Starbird and her colleagues launched the CIP in response to the rise of disinformation and the erosion of trust in democratic institutions like the media, electoral systems and universities. Together, they have decades of expertise studying misinformation during crises and misinformation in science. In many ways, they were perfectly equipped to make sense of the myriad conspiracy theories—about the COVID-19 vaccines, social justice protests and the contentious presidential election, to name just a few—that emerged in our news feeds and at family dinner tables over the next 18 months.
* * *
From the start of the pandemic, Kolina Koltai, a post-doctoral researcher at the CIP, was prepared for the eventual flood of vaccine misinformation. “As long as there have been vaccines, there has been an anti-vaccine movement,” she says. For her dissertation, Koltai examined how people make decisions about childhood vaccinations based on the information they get from their social networks. She anticipated the same arguments about COVID-19 vaccine mandates and side effects would resurface.
What is different is that COVID-19 vaccine mis- and disinformation doesn’t lurk in the private Facebook parent groups where anti-vaccination content had previously taken hold. It is out in the open and rapidly spreading, almost as quickly as the virus. “The pandemic is a worldwide talking point, so it gives people who are bad actors in these spaces plenty of opportunity to spread misinformation,” Koltai says.
These bad actors have come in different forms, from traditional media influencers with large platforms like Fox News pundit Tucker Carlson to seemingly trustworthy experts like Joseph Mercola, an osteopathic physician who promotes natural health remedies. A recent report from the Center for Countering Digital Hate, an international nonprofit that works to disrupt the use of digital spaces to spread identity-based hate and misinformation, determined that just twelve people were responsible for nearly two-thirds of the anti-vaccine content on social media platforms.
Many of the influencers who flooded social media with COVID-19 conspiracy theories and anti-vaccination content had also been key players in 2020’s other big misinformation superspreader event: the presidential election. This doesn’t surprise Starbird, who says these movements were fused together online because “the way social media networks work … is who’s following whom and which groups are connected to each other.” Those connections are then strengthened by recommendation algorithms, which may suggest like-minded people or groups similar to the ones you’re already in.
Because of the pandemic’s global reach, misleading stories have added staying power, Koltai says. In April, news outlets published a story about a healthy Florida doctor who died weeks after receiving a COVID-19 vaccine, leading health officials to investigate the death (the medical examiner later ruled it was unclear whether the vaccine was connected). If the doctor had died after receiving a flu shot, the story might have been a blip on the news radar, Koltai says. Instead, it became one of Facebook’s most shared articles in early 2021.
Misleading stories are compelling because they manipulate a kernel of truth. For example, Koltai says, you’ll often find people citing data from the CDC’s Vaccine Adverse Event Reporting System (VAERS), which compiles reports of vaccine-related injuries and deaths, to argue against a vaccine. What they fail to disclose is that these unverified reports can be filed by anyone. This kind of decontextualization, says Koltai, is a classic misinformation technique.
Although social media platforms like Facebook, Instagram and Twitter have stepped up their efforts to label and take down vaccine misinformation, they haven’t been entirely successful. In part, that’s because content creators employ strategies to avoid moderation, explains Rachel Moran, another postdoctoral researcher at the CIP. These range from sharing screenshots instead of links to using coded language, such as referring to “pharmaceuticals” instead of “vaccines.”
“They know these spaces like the back of their hand,” Moran says about the influencers deploying these tactics. “They’ve gamed the system.” She wants to see the platforms become less reactive and more proactive, thinking about how each new feature they roll out could be misused to spread disinformation.
* * *
On January 6, many Americans were shocked when an armed mob attacked the U.S. Capitol to stop the certification of the election results, but the UW researchers were not. In summer 2020, they had joined forces with other research entities, including a lab at Stanford, to form the Election Integrity Partnership, a real-time effort to detect and respond to election-related mis- and disinformation. It was a massive undertaking, one that involved more than a hundred people in multiple time zones sifting through tweets and viral videos to track and analyze emerging narratives. What they uncovered was a months-long campaign of disinformation about a stolen election, largely created by political leaders including Donald Trump, but fed and sustained by ordinary people who were primed to look for instances of voter fraud.
“There’s this echo effect between the audiences and the elites as they co-create false narratives,” Starbird explains. This kind of participatory disinformation hinges on unwitting agents, everyday people who don’t realize they’re part of an organized campaign but find evidence to support it.
What became known as “Sharpiegate” began, Starbird says, with someone raising a question about whether their ballot would count because the marker they had used had bled through the paper. “In the first few tweets you can see more of a natural, sense-making process happening,” Starbird says. That narrative shifted on election day when Fox News called the state of Arizona for Joe Biden.
A Facebook video claimed that Arizona voters in conservative precincts were forced to mark their ballots with Sharpie pens, rendering them unreadable by the voting machines. The story was amplified by social media influencers with large followings because it was politically useful. Suddenly, Sharpiegate became a rallying cry for Trump supporters, who converged on the Maricopa County Recorder’s Office building in protest, shouting “stop the steal” and demanding a recount.
Those narratives weren’t being spread just in English, either, says CIP postdoctoral fellow Moran, who is part of a project analyzing the spread of misinformation in the Vietnamese American community. Some of her early findings indicate that social media platforms aren’t invested in moderating non-English content, leaving a void that could be further exploited by bad actors.
* * *
Misinformation is here to stay, Starbird says. Seeking out information is a natural human response to uncertainty. “Rumors are a byproduct of the sense-making process,” she says. “You’re trying to come up with explanations, and sometimes you get things wrong.”
But we can avoid becoming the unwitting participants who turn those rumors into something larger and more insidious. We can learn to be smarter about how we consume and share what we see online. Hit pause before you hit share—and think about where the information is coming from, says Starbird.
It’s also important to evaluate the level of trust you have built with someone you know only online, says Moran. If a fitness influencer or fellow parent you follow on Instagram suddenly starts posting about vaccines, you may feel you can trust them because they’re part of your daily routine of media consumption. “I would think through what’s at stake for you in trusting this information—and what’s at stake for that person giving it to you,” Moran recommends.
If you’re dealing with someone you know—maybe a family member or friend—the best advice Starbird can offer is to start from a place of empathy because we’re all vulnerable. “If we’re online, we’re probably sharing misleading or false information that makes us outraged,” she says. “Maybe we correct it later or maybe we never even learn it was false.” If someone’s mind is made up one way or another, there may not be much you can do to convince them, so you may have to evaluate whether it’s worth the effort.
Media literacy alone isn’t enough to turn the tide against misinformation, Starbird says. “There has to be some kind of civic component that helps us see that in the long term we’re destroying things we care about and working against ourselves.” There’s also plenty of work to be done by the social media platforms, whether it’s limiting accounts that spread misinformation or having clear policies on how they deal with disinformation.
It’s a long road ahead, but Starbird and her CIP colleagues are hopeful a collective effort from all of us—researchers, policymakers and users—can at least contain, if not eliminate, the infodemic.
“I try to see misinformation not as a barrier, just a hurdle,” Moran says. “There is always hope for us in the future. Otherwise, you couldn’t do this work every day if you didn’t believe people could change their minds.”