
WHO Is Fighting False COVID Info On Social Media. How's That Going?

Open up any social media app on your phone and you'll see it: links to COVID-19 information from trustworthy sources. Here, a Twitter screen reads, "No, 5G isn't causing coronavirus."
Michele Abercrombie/NPR

Open up any social media app on your phone and you'll likely see links to COVID-19 information from trustworthy sources.

Pinned to the top of Instagram's search function, the handles of the U.S. Centers for Disease Control and Prevention and the World Health Organization are prominently featured. Click and you'll find posts and stories on how to stay safe during the pandemic.

In the home section of the YouTube app, there's a playlist of videos that promote vaccination and counteract vaccination misinformation from WHO, the Journal of the American Medical Association and GAVI, the Vaccine Alliance.


And on the Twitter app, you might spot a warning under posts with fake or misleading COVID-19 information. A tweet from a user falsely proclaiming that 5G causes coronavirus, for example, has a big blue exclamation mark with a message from Twitter: "Get the facts about COVID-19." It links to a story debunking the claim from a U.K. media outlet called iNews.

In the noisy news landscape, these are just some of the features launched by the tech industry to bring down COVID-19 misinformation and deliver facts to the public.

These efforts didn't happen spontaneously. The World Health Organization sparked them in February 2020, in the early days of the coronavirus crisis. The U.N. agency teamed up with over 40 tech companies to help disseminate facts, minimize the spread of false information and remove misleading posts.

But there's one big question that's tough to answer: Is it working?

Have any of these efforts actually changed people's behavior in the pandemic — or encouraged them to turn to more credible sources?


Health messaging experts and misinformation specialists interviewed for this story praise WHO's efforts to reach billions of people through these tech industry partnerships. But they say the actions taken by the companies have not been enough — and may even be problematic.

Vish Viswanath, a professor of health communication in the department of social and behavioral sciences at the Harvard T.H. Chan School of Public Health, has been closely monitoring the global health content spread by the tech industry since the pandemic started.

"The WHO deserves credit for recognizing that the sheer flood of misinformation — the infodemic — is a problem and for trying to do something about it," he says. "But the tech sector has not been particularly helpful in stemming the tide of misinformation."

Researchers say there are limits to some of the anti-misinformation tactics used by social media companies.

Flagging or pulling down a problematic social media post often comes too late to undo the harm, says Nasir Memon, professor of computer science and engineering at New York University. His research includes cybersecurity and human behavior.

"It only comes after the post has gone viral. A company might do a fact check and put a warning label," he says. "But by then the ones who consumed that information already have been influenced in some way."

For example, in October, President Donald Trump claimed in a Twitter post that he had COVID-19 immunity after he was sick. According to the CDC: "There is no firm evidence that the antibodies that develop in response to SARS-CoV-2 infection are protective." The post was taken off Twitter after being flagged by fact-checkers — but not before it had been shared with millions of his followers.

And there are no guarantees that people are going to take the time to click on a link to credible sources to "learn more," as the labels suggest, says Viswanath.

These "learn more" and "for more information" COVID-19 labels can be found on almost every tech platform — yes, Twitter, Facebook and Instagram, but also Tinder, the dating app (every few swipes there are reminders to wash hands and observe physical distancing, with links to WHO messages) and Uber, the ridesharing app (a section on its website with rider safety information directs people to WHO for pandemic guidance).

"If I'm sitting in some community somewhere, busy with my life, worried about my job, worried about whether the kids are going to school or not, the last thing I want to do is go to a World Health Organization or CDC website," Viswanath adds.

WHO is aware these measures aren't perfect. Melinda Frost, with WHO's risk communication team, concedes that simply removing posts can create new problems. She points to a December study from the disinformation analytics company Graphika, which found that the crackdown on anti-vaccine videos on YouTube has led their proponents to repost the videos on other video-hosting sites like BitChute, a platform favored by the far right.

YouTube removes videos if they violate its COVID-19 policy. Videos that claim the COVID-19 vaccine kills people or will be used as a means of population reduction, for example, are not allowed. But other platforms may have less stringent policies.

"We may expect a proliferation of alternative platforms as fact checking and content removal measures are strengthened on social media," Frost says.

Researchers say it's hard to know whether any of these efforts have actually changed people's behavior in the pandemic — or encouraged them to turn to more credible sources.

Claire Wardle, U.S. director of First Draft, a nonprofit organization that researches misinformation, says "we have almost no empirical evidence about the impact of these interventions on the platforms. We can't just assume that things that seem to make sense [such as taking a post down or directing people to a trustworthy source] would actually have the consequences we would expect."

Andy Pattison, who leads WHO's digital partnerships in Geneva, says the organization is now trying to assess impact.

WHO is working with Google, for example, on a questionnaire for users to see whether the company's efforts have resulted in behavior change and/or increased knowledge regarding COVID-19. Since the early days of the crisis, Google has ensured that users searching for "COVID" or related terms on its search engine see official news outlets and local health agencies in its top results, says Pattison.

In the absence of current data, past research can shed some light on social media misinformation.

For example, an April 2020 study from the NYU Tandon School of Engineering found that warning labels — messages such as "multiple fact-checking journalists dispute the credibility of this news" — can reduce people's intention to share false information. The likelihood, however, varied depending on the participant's political orientation and gender.

Memon, the lead author of the report, says the findings are relevant to social media policing in the pandemic. "Fact checking [on social media platforms] is going to become an important aspect of what we do as a society to help counter the spread of misinformation," he says.

Both Memon and Viswanath say with tens of millions of posts being shared on social media a day, companies need to scale up efforts to take down false information.

"They have the power. They have the reach. They should be more aggressive and active than they have been," says Viswanath.

Memon suggests that companies could deploy stronger mechanisms to verify users' identities. That could help prevent people from creating troll accounts to anonymously spread falsehoods and rumors, he says. And Viswanath suggests that tech companies hire teams of experts — ethicists, researchers, scientists, doctors — for advice on how to handle false information.

As for WHO, it's learned a key lesson during the pandemic. "Information alone is not going to shift behavior," says Frost, who has been working on WHO campaigns to debunk unjustified medical claims on social media.

So over the past few months, the organization has been gathering a group of sociologists, behavioral psychologists and neuroscientists to study how information circulates, how it can be managed — and how it can change people's minds.

"A lot of what we know about behavior change really requires something closer to the individual — making sure the information we have is relevant to individuals and makes sense in their lives," she says.

Copyright 2021 NPR. To see more, visit https://www.npr.org.