How YouTube Can Make the Algorithm Work for Everyone

Edwin Chalas
3 min read · Dec 3, 2020


This was written for a class assignment.

YouTube is the internet’s most popular video-sharing platform. Because of this, its algorithm not only moderates content to prevent illegal material from being uploaded, but also suppresses content that may be controversial. The latter is especially important given the never-ending deluge of controversy caused by pockets of creators uploading misinformation or outright obscene content.

The algorithm processes not only video titles and descriptions, but the videos themselves. YouTube’s automated system scans keywords in text and speech to catch controversial content, while its ContentID system quickly identifies copyrighted material when it appears.
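As a purely illustrative sketch (YouTube has never published how this works), a keyword pre-screen over a title, description, and transcript might look something like the following. The keyword list, thresholds, and function names here are invented for demonstration and are not YouTube’s actual system.

```python
# Illustrative sketch only: YouTube's real pipeline is not public.
# The keyword list below is invented for demonstration purposes.
FLAGGED_KEYWORDS = {"gun", "drugs", "gambling"}  # hypothetical list

def scan_text(text: str) -> list[str]:
    """Return any hypothetical flagged keywords found in a piece of text."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return sorted(FLAGGED_KEYWORDS & words)

def is_advertiser_safe(title: str, description: str, transcript: str) -> bool:
    """Treat a video as ad-safe only if no field trips a flagged keyword."""
    return not any(scan_text(t) for t in (title, description, transcript))

if __name__ == "__main__":
    print(scan_text("Today we review antique gun replicas"))  # ['gun']
    print(is_advertiser_safe("Cooking pasta", "A simple recipe",
                             "Boil water, add salt"))  # True
```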

YouTube has community guidelines: rules against certain types of content. These guidelines are split into four categories: spam, sensitive content (such as nudity and child-safety issues), violent content, and regulated goods (content involving firearms, drugs, etc.). In addition, YouTube’s ContentID system prevents the upload of most copyrighted content (or, when an upload is allowed, removes the opportunity to monetize it).

The algorithm has never been fully explained by YouTube, though the community as a whole has figured out parts of it. For example, YouTubers have collectively worked out which words can lead to a video being “demonetized” (when ads are blocked from running on the content, presumably because it is not considered advertiser-safe), and have even built sites where users can check whether their content is in the clear. Beyond these pockets of understanding, however, the algorithm is an enigma. Creators have assumed that because YouTube is an advertising platform, the algorithm’s decisions typically favor more advertiser-friendly content, which can run more ads. However, decisions are relayed to users without much explanation of what exactly broke the guidelines, and there have been cases of rules being bent for higher-profile creators (Logan Paul’s suicide forest video). This leads to confusion, and most commonly, to creators uploading multiple versions of a video to check whether their content still falls within community guidelines.

For most casual viewers, the algorithm may be the last thing on their mind. Those watching YouTube for the occasional video or to play music likely won’t hear creator perspectives, which shine a light on how the YouTube algorithm can tip the scales toward content that will provide Google with more revenue. YouTube’s content policies are rarely user-focused, with a few exceptions (certain restrictions on kids’ content, ContentID labelling content in videos), which further amplifies this lack of knowledge. The lack of regulation in comment sections, for example, means that unless a user uploads content, they may not understand what any fuss over the algorithm is about.

So how can YouTube fix this? First and foremost, have the algorithm tell creators which guidelines their content broke, and where. A creator could easily change their content to fit the guidelines if they were told that a specific word, clip, or image is what is preventing the video from being treated normally. The improvement would be measured as a decrease in the number of content appeals and algorithm flags, as creators learn how to stay within the content guidelines.
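For instance, a transparent verdict could carry the guideline matched, the specific evidence, and a timestamp. The structure below is a hypothetical illustration; YouTube exposes no such API, and every field name here is invented.

```python
# Hypothetical example of what a transparent moderation verdict could look like.
# Field names and values are invented; this is not a real YouTube API.
from dataclasses import dataclass

@dataclass
class ModerationVerdict:
    video_id: str
    guideline: str      # which guideline category was matched
    evidence: str       # the word, clip, or frame that triggered the flag
    timestamp_s: float  # where in the video the issue occurs
    action: str         # e.g. "demonetized" or "removed"

verdict = ModerationVerdict(
    video_id="abc123",
    guideline="regulated goods",
    evidence='spoken phrase "where to buy fireworks" at 04:12',
    timestamp_s=252.0,
    action="demonetized",
)
print(f"{verdict.video_id}: {verdict.action} for {verdict.guideline} "
      f"({verdict.evidence})")
```

With a payload like this, a creator knows exactly which clip to cut or re-edit instead of guessing across multiple re-uploads.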

Secondly, allow for easier appeals of content flagging. The algorithm will get things wrong at points, and being able to quickly appeal a decision and trigger a manual review would prevent dissatisfaction among creators. As above, a decrease in the number of flags and content appeals would show that this change is having the desired effect.
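A minimal sketch of that appeal flow, assuming a simple first-in, first-out queue routed to a human reviewer (all names invented for illustration):

```python
# Sketch of an appeal queue, invented for illustration: appealed decisions
# go to a human reviewer rather than being re-run through the model.
from collections import deque

appeal_queue: deque[str] = deque()

def appeal(video_id: str) -> None:
    """A creator disputes an automated flag; queue it for manual review."""
    appeal_queue.append(video_id)

def review_next(reviewer_upholds_flag: bool) -> str:
    """A human reviewer settles the oldest appeal in the queue."""
    video_id = appeal_queue.popleft()
    outcome = "upheld" if reviewer_upholds_flag else "overturned"
    return f"{video_id}: flag {outcome} by manual review"

appeal("abc123")
print(review_next(reviewer_upholds_flag=False))
# abc123: flag overturned by manual review
```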
