
The Problem with YouTube’s Algorithm


As a content sharing platform, YouTube has rules that its users must follow. Videos posted on the platform have to follow YouTube’s Community Guidelines, which cover everything from spam and external links to misinformation and hate speech. YouTube is a massive platform, though, with 2 billion monthly active users and more than 500 hours of content uploaded every minute.

It’s not surprising if some videos fall through the cracks.
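To put those numbers in perspective, here's a quick back-of-envelope calculation based on the figures above (a rough sketch that assumes the 500-hours-per-minute rate holds around the clock):

```python
# Back-of-envelope scale check using the figures cited above.
HOURS_UPLOADED_PER_MINUTE = 500
MINUTES_PER_DAY = 24 * 60

hours_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY
years_of_footage_per_day = hours_per_day / 24 / 365

print(f"New video per day: {hours_per_day:,} hours")               # 720,000 hours
print(f"That is roughly {years_of_footage_per_day:.0f} years of "
      f"footage uploaded every single day")                        # ~82 years
```

No human moderation team can watch that much video, which is a big part of why so much of the enforcement burden falls on automated systems.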

But it’s not just a few videos that violate YouTube’s policies and stay on the platform. In the last several years, it’s become a systemic problem for the network. YouTube’s algorithm has even been found to recommend videos that go against the guidelines.

So what’s up? Why does YouTube’s algorithm recommend videos that violate its own policies?

An investigation into YouTube’s algorithm

In July 2021, YouTube’s algorithm made headlines when the company Mozilla released a report stating that the algorithm appeared to recommend videos that aren’t in line with YouTube’s guidelines.

Mozilla is the company that makes the Firefox web browser, and in 2020, it decided to conduct a study on the YouTube algorithm. YouTube has been criticized in the past for its lack of transparency regarding its recommendations, and the Mozilla study was the largest-ever investigation into YouTube’s algorithm.

The data for the study came from thousands of YouTube users who agreed to use an open-source browser extension called RegretsReporter. The extension tracked the videos the users watched and asked whether they regretted watching each video.
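To make that setup concrete, here's a hypothetical sketch (not RegretsReporter's actual schema, which isn't described here) of the kind of record such an extension might produce for each watched video:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record shape -- not RegretsReporter's real schema --
# illustrating the kind of data a crowdsourced extension could collect.
@dataclass
class RegretReport:
    video_id: str              # identifier of the watched video
    watched_at: datetime       # when the volunteer watched it
    was_recommended: bool      # reached via a recommendation rather than a search
    regretted: bool            # the volunteer's answer to "do you regret watching this?"
    category: Optional[str] = None  # e.g. "misinformation", assigned later by researchers

example = RegretReport(
    video_id="abc123xyz00",
    watched_at=datetime(2020, 9, 1, 14, 30),
    was_recommended=True,
    regretted=True,
)
print(example)
```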

The results from the study show some serious issues with YouTube’s recommendation algorithm.

Videos with misinformation were the biggest issue 

The study took place between July 2020 and May 2021, and over that period, 37,380 users flagged 3,362 videos as regrettable. The researchers watched the flagged videos to check them against YouTube’s Community Guidelines and judged 12% of them to be videos that either shouldn’t be on YouTube at all or shouldn’t be recommended by the algorithm.
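Translating that percentage into an absolute count (a quick calculation on the figures reported above):

```python
# Rough count of the flagged videos the researchers judged problematic.
flagged_videos = 3_362
problematic_share = 0.12  # the 12% figure from the Mozilla report

print(f"Roughly {flagged_videos * problematic_share:.0f} of the "
      f"{flagged_videos:,} flagged videos were judged to be ones that "
      "shouldn't be on YouTube or shouldn't be recommended")   # ~403 videos
```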

The most frequently found issues with the reported videos were:

  • Misinformation
  • COVID-19 misinformation
  • Violent or graphic content
  • Hate speech

The algorithm isn’t working properly

One important insight from the study is that 71% of the regretted videos were recommended by YouTube, not searched for by users. Recommended videos were 40% more likely to be regretted than searched-for videos.

The researchers also noted that in 44% of the cases where they had data about the videos a volunteer watched before reporting a regret, the recommendation was completely unrelated to those previous videos. In other words, almost half of the time, YouTube recommends videos that have nothing to do with a user’s watch history and that the user doesn’t want to watch.

If these recommended videos are unrelated and users regret watching them, why is YouTube’s algorithm suggesting them? The viewing figures offer a clue: the Mozilla study found that recommended videos that volunteers later regretted acquired 70% more views per day than other videos the volunteers watched.

When it came to using these figures to gauge the harm YouTube’s algorithm does, the Mozilla researchers said the picture is complicated: many of the reported videos fall into a “borderline content” category that comes close to violating the Community Guidelines without actually crossing the line.

YouTube’s algorithm is worse with non-English languages

One final insight from the Mozilla study that raises eyebrows is that the rate of regrettable videos was 60% higher in countries that do not have English as a primary language. The researchers attribute this glaring difference to the fact that YouTube’s algorithm trains on primarily English-language videos.

Misinformation on YouTube: A longstanding issue

Mozilla isn’t the only organization to have investigated YouTube’s algorithm; several other studies have been carried out in recent years.

What sets the Mozilla study apart from those investigations is its crowdsourced approach and scale, which allowed the company to capture hard figures about YouTube’s recommendation algorithm.

Has YouTube done anything about misinformation?

The Mozilla study was eye-opening and reinforced questions that many critics have been asking of YouTube for years. The investigation ended in May 2021 and the report was released two months later. Has YouTube made any progress since then?

Not so much.

Although the platform has made more efforts at transparency and at clamping down on the spread of misinformation, experts say more can be done. YouTube did announce in April 2021 that it now publishes a “violative view rate”: an estimate of how many out of every 10,000 views on the platform land on videos that violate YouTube’s rules. That figure was down to about 18 per 10,000 views at the end of 2020. YouTube’s executives herald this as progress in the fight against inappropriate content on the platform, but as Rebecca Heilweil of Vox points out, it’s YouTube’s internal moderators who decide what violates the Community Guidelines, not independent auditors.
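For context, here's what a violative view rate of 18 per 10,000 works out to in relative terms (a simple calculation on YouTube's published figure; absolute view counts aren't public):

```python
# What a violative view rate of 18 per 10,000 views means in relative terms.
violative_views_per_10k = 18

print(f"Share of all views: {violative_views_per_10k / 10_000:.2%}")   # 0.18%
print(f"About 1 in every {10_000 // violative_views_per_10k} views "
      "landed on a rule-breaking video")                                # ~1 in 555
```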

In October 2021, YouTube broadened its content policies on COVID-19 and vaccine misinformation, more than a year and a half after the start of the pandemic. A CNN Business report noted that anti-vax conspiracy theorists Dr. Sherri Tenpenny and Dr. Joseph Mercola were only banned from YouTube shortly before the new rules went into effect.

Recent calls for more transparency

When it comes to transparency and fact-checking, YouTube still has work to do. In a January 2022 open letter to YouTube’s chief executive, Susan Wojcicki, more than 80 fact-checking organizations described YouTube as a major conduit of disinformation and misinformation worldwide. The fact-checking groups asked YouTube to make four changes to promote transparency:

  1. Commit to funding independent research on disinformation campaigns on YouTube.
  2. Put links to rebuttals inside videos that distribute disinformation and misinformation.
  3. Stop algorithms from promoting repeat offenders.
  4. Put more effort toward tackling falsehoods in non-English-language videos.

The letter’s signatories came from more than 40 countries across different types of organizations, from charities and foundations to privately funded groups.

How YouTube’s inappropriate videos get removed

YouTube’s algorithm is supposed to flag inappropriate content automatically, and human moderators review the flags and decide whether to take down the video. Users can also report videos they believe violate YouTube’s policies.

Getting flagged isn’t the same as getting removed, as a moderator still has to review the video. Takedowns happen on a case-by-case basis, and typically a single report of violative content isn’t enough to get a video taken down unless the content is particularly egregious. Channels that have a video taken down receive a strike, and three strikes within a 90-day period can get the channel deleted entirely.
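As a rough illustration, the three-strikes rule described above can be expressed as a check over a rolling 90-day window (a minimal sketch of the rule as summarized here, not YouTube's actual enforcement code):

```python
from datetime import date, timedelta
from typing import List

# Minimal sketch of the strike rule described above: three strikes
# falling within any 90-day window is grounds for channel termination.
def channel_terminated(strike_dates: List[date],
                       window_days: int = 90,
                       max_strikes: int = 3) -> bool:
    strikes = sorted(strike_dates)
    window = timedelta(days=window_days)
    # Slide over every run of `max_strikes` consecutive strikes and check
    # whether the first and last of that run fall within the window.
    for i in range(len(strikes) - max_strikes + 1):
        if strikes[i + max_strikes - 1] - strikes[i] <= window:
            return True
    return False

# Three strikes within about 11 weeks -> the channel would be terminated.
print(channel_terminated([date(2021, 1, 1), date(2021, 2, 10), date(2021, 3, 20)]))  # True
# Three strikes spread across most of a year -> no three fall in one window.
print(channel_terminated([date(2021, 1, 1), date(2021, 4, 15), date(2021, 8, 1)]))   # False
```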

During the fourth quarter of 2021, approximately 3.75 million videos were removed from YouTube. Only about 300,000 of those removals came from human flagging rather than automated flagging systems. The top reason for removal during the quarter was child safety, which accounted for 32% of takedowns. The countries with the highest numbers of removals were India, the United States, Indonesia, and Brazil.
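Putting those Q4 2021 figures side by side (a quick calculation that assumes the 300,000 human-flagged removals account for everything that didn't come from automated flagging):

```python
# Splitting the Q4 2021 removal figures cited above.
total_removed = 3_750_000
human_flagged_removals = 300_000
child_safety_share = 0.32

automated_share = (total_removed - human_flagged_removals) / total_removed
print(f"Removals triggered by automated flagging: about {automated_share:.0%}")   # ~92%
print(f"Removals for child-safety reasons: about "
      f"{total_removed * child_safety_share:,.0f} videos")                        # ~1,200,000
```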

Despite these removals, critics still say YouTube could do more to correct its algorithm and promote transparency.

How can YouTube’s algorithm be fixed?

YouTube could do with more transparency when it comes to moderation. Tightening rules and policies is all well and good, but if the platform never submits to an external audit of its moderation practices, it’s hard to have a lot of confidence in YouTube.

The Mozilla researchers had some recommendations based on their findings. They think YouTube should:

  • Publish reports on transparency and information about recommendation algorithms
  • Let users opt out of personalized recommendations
  • Create a risk management system to deal with the recommendation AI

YouTube does publish quarterly transparency reports and released its first copyright transparency report at the end of 2021. These reports don’t provide much information about the recommendation algorithm, however.

The responsibility doesn’t fall squarely on YouTube, either. Policymakers can introduce laws that force platforms like YouTube to be more transparent about the AI systems they use.

The future for YouTube

YouTube has made efforts at greater transparency in the last few years. There’s still more to be done, though. Will the platform ever heed its critics and implement independent, third-party reviews of its practices? Perhaps, but it may not happen for a long time.
