Since the onset of the coronavirus pandemic in February 2020, YouTube has removed more than 1 million videos related to “dangerous coronavirus information,” the company said Wednesday.
The removed videos included content that YouTube judged was misinformation about the pandemic, like promotion of a false cure or videos calling the pandemic a hoax, Chief Product Officer Neal Mohan said in a blog post Wednesday.
Such videos violated the company’s policy prohibiting “any videos that can directly lead to egregious real world harm,” Mohan said.
He explained that while combating misinformation remains a priority for the company, with nearly 10 million videos removed each quarter, simply removing videos does not do enough to keep the roughly 2 billion users on YouTube’s platform from accessing “harmful misinformation.” In addition to removing videos, YouTube is “reducing the spread of videos with harmful misinformation” and adjusting search results to be “optimized for quality.”
“Speedy removals will always be important but we know they’re not nearly enough. Instead, it’s how we also treat all the content we’re leaving up on YouTube that gives us the best path forward,” Mohan said.
Mohan acknowledged that YouTube’s policies are unlikely to satisfy critics on both the left and the right.
Conservatives have accused YouTube and other tech companies with social media platforms, such as Facebook and Twitter, of censoring content based on political biases. Conservative content creators have had accounts suspended for running afoul of content moderation policies targeting politically incorrect “hate speech,” or have been penalized for sharing controversial opinions about the 2020 presidential election or the coronavirus pandemic. Republicans have accused tech companies of muzzling free speech, and states like Florida and Texas have acted to ban social media censorship.
Progressives, on the other hand, believe that tech companies are not doing enough to limit the spread of misinformation on social media, and have threatened to strip the liability protections tech companies enjoy if they fail to censor information that progressives claim will endanger people.
Mohan acknowledged concerns about the “chilling effect on free speech” that comes from an “overly aggressive approach towards removals.” He said social media companies need a “clear set of facts” in order to identify bad content, and that it is not always easy to know what is true.
“For COVID, we rely on expert consensus from health organizations like the CDC and WHO to track the science as it develops. In most other cases, misinformation is less clear-cut. By nature, it evolves constantly and often lacks a primary source to tell us exactly who’s right,” he wrote.
“In the absence of certainty, should tech companies decide when and where to set boundaries in the murky territory of misinformation? My strong conviction is no.”
He said society is better off with “open debate.”
“One person’s misinfo is often another person’s deeply held belief, including perspectives that are provocative, potentially offensive, or even in some cases, include information that may not pass a fact-checker’s scrutiny,” Mohan continued. “Yet, our support of an open platform means an even greater accountability to connect people with quality information. And we will continue investing in and innovating across all our products to strike a sensible balance between freedom of speech and freedom of reach.”