Joe Rogan, Spotify: how tech firms tackle Covid misinformation

Wednesday, 2 February 2022 15:12 GMT

A smartphone is seen in front of a screen projection of Spotify logo, in this picture illustration taken April 1, 2018. REUTERS/Dado Ruvic/Illustration


* Any views expressed in this article are those of the author and not of Thomson Reuters Foundation.

Why is misleading and false information on the coronavirus pandemic so widespread online, and what are tech platforms doing to tackle it? Here are some answers

The controversy surrounding U.S. podcaster and vaccine sceptic Joe Rogan, whose top-rated Spotify show prompted protests by singers and scientists alike, has reignited debate on how online platforms police Covid-19 misinformation.

Spotify is the latest tech firm to come under pressure for airing false claims and conspiracies since the start of the coronavirus pandemic - an issue the World Health Organisation has said is costing lives.

While some companies have curbed harmful content, critics say their efforts have not gone far enough.

Here is all you need to know:


WHAT HAPPENED?

The case erupted last week, when singer-songwriter Neil Young asked Spotify to take his music off the platform, saying it was spreading misinformation by streaming Rogan's show. 

More complaints soon followed, with artists Nils Lofgren and Joni Mitchell requesting their music be pulled from the streaming service, and Prince Harry and his wife Meghan also voicing concern.

Rogan has drawn criticism for saying young people don't need vaccination and for hosting proponents of ivermectin, an anti-parasite drug with no proven benefit against COVID-19. He has since apologised, pledging to bring more balance to his show.

On Sunday, Spotify said it would add a content advisory to any episode that discusses COVID-19, directing listeners to a hub containing facts and information from medical and health experts, as well as links to trusted sources.

The measure has proved controversial.

"Platforms should know they can't solve misinformation by slapping on a simple warning label," said Willmary Escoto, a policy analyst at Access Now, a digital rights group.

"And there is no clear evidence that content labels and links to information hubs reduce the spread of misinformation."

Rogan is also a loud voice to counter. The comedian's show attracts more listeners than any other Spotify podcast, with an audience estimated at more than 10 million.


WHY DOES MISINFORMATION SPREAD SO EASILY?

A 2018 study of Twitter found that falsehoods spread faster than truth on the site not because of its algorithms, but because of its users; the authors speculated that the novelty of much false news made it more appealing to share.

A separate 2020 study found social-media users seemed to care more about winning likes than being accurate when they opted to share information about COVID-19.

Politicians and celebrities also play a key role.

A 2020 analysis by Oxford University researchers found that misleading posts by public figures were only a fraction of the total but drew the most engagement.

All this has helped create what the WHO calls an "infodemic" - an overload of information, some of it false or misleading, causing confusion and "risk-taking behaviours that can harm health".


WHAT ROLE DO TECH PLATFORMS PLAY?

Since much of the misleading content lives online, the platforms hosting it enable its spread, with critics saying firms favour engagement over accuracy and do too little to intervene.

Last July, U.S. President Joe Biden said social media platforms like Facebook were "killing people" for allowing misinformation about coronavirus vaccines to be posted online.

Under increasing pressure to police false content, companies tightened their rules, but balancing the rights to free expression with moderating harmful content has proved difficult.

"Covid misinformation isn't an easy thing to tackle," said Anna George, a researcher on democracy and technology at the Oxford Internet Institute, noting the challenge of even defining what constitutes online harm.


HOW ARE COMPANIES TACKLING IT?

Over the past few years, technology companies have invested in both human moderators and artificial intelligence tools to identify problematic content.

Both systems have pitfalls, as human moderators can be overwhelmed by the amount of content to check, while AI often struggles to understand nuance and context. 

Podcasts have proved particularly tricky to monitor, as tools to detect problematic audio content lag behind those used to identify text, and transcribing and examining recorded voice chats is more cumbersome.
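As a toy illustration of why audio lags behind text: once speech has been transcribed, flagging can be as simple as matching text against a watchlist, but that cheap step only becomes possible after the slow, costly transcription pass the article describes. Real moderation systems rely on far more sophisticated machine-learning models; the function and terms below are hypothetical examples, not any platform's actual method.

```python
# Hypothetical watchlist of phrases a moderation team might review.
FLAGGED_TERMS = {"miracle cure", "vaccines cause"}

def flag_transcript(transcript: str) -> list[str]:
    """Return any watchlist terms found in a transcript (case-insensitive)."""
    text = transcript.lower()
    return sorted(term for term in FLAGGED_TERMS if term in text)

# This text-matching step is trivial for articles or tweets, but for a
# podcast it can only run after a speech-to-text pass over hours of audio.
print(flag_transcript("Today's guest claims a miracle cure exists."))
```

Even this crude matching presumes an accurate transcript, which is exactly the extra, cumbersome step podcasts impose.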

When false or misleading information is found, tech firms can remove it, attach warnings and ban or suspend its creator.

Educating people to spot false news has also proved effective, said George, who sees no magic bullet.

"There's no single one best approach ... (but) all those approaches together can lead to platforms better tackling misinformation."



(Reporting by Umberto Bacchi @UmbertoBacchi; Editing by Lyndsay Griffiths. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers the lives of people around the world who struggle to live freely or fairly.)
