How Is YouTube Content Moderation Done?
What is YouTube?
“Where have you been?” might well be the answer if the question was “What is YouTube?”
Launched in 2005, at a time when the internet was beginning to make its presence felt around the world and straining at the leash to penetrate to the far corners of the globe, YouTube is the second most frequented website today.
It is a social media platform that lets users create, watch and share videos. It has more than a billion monthly active users, who upload over 500 hours of video content every minute, minute after minute, from every corner of the globe. These users are also consumers, and collectively watch over a billion hours of video each day; spread across a world population of nearly 8 billion, that works out to roughly 8 minutes of YouTube per person per day. The gap between hours uploaded and hours watched is no mystery: a video is uploaded once but can be watched many times over.
Google saw the potential in YouTube fairly early and bought it for $1.65 billion in 2006. Under Google’s stewardship, and with the changes it introduced, YouTube now generates about $20 billion in revenue annually.
oWorkers, which came into existence after Google’s takeover of YouTube, is a BPO player that specializes in supporting the requirements created by the digitalization happening around us. Technically advanced, it operates from highly secure facilities and follows strict protocols to protect client data, backed by its ISO 27001:2013 and 9001:2015 certifications.
The logic for YouTube content moderation
Social media platforms like YouTube have made content publishers out of all human beings.
In the days of yore, publishing was an activity restricted to a few individuals and organizations. They would source content, establish its veracity, create an output for readers and viewers, submit it to an editorial process, ensure that the content being put out was kosher with the rules and regulations in place, and then make it available to the target audiences. Publishers were few and identifiable, and it was generally easy to trace a piece of content back to where it originated. This also kept the publishing community on a leash, in terms of the possible repercussions if they stepped out of line.
With the internet and social media, everyone is a publisher. There are 4 billion active users on social media, of which about a billion are on YouTube, though rarely on YouTube alone. These users, while they consume content, now also have the power to publish it, through a simple process of upload and submit, or type and submit. The comments they leave, the opinions they post, the photos they share and the videos they upload are all content being published by them.
A photo, for example, which used to be a personal keepsake shared with close friends and family members, becomes a piece of published content once uploaded to social media. It can be viewed, commented on and, in most cases, shared further. It becomes a living entity of the world wide web.
This publishing, done by the 4 billion social media users, differs from traditional publishing in that it does not necessarily pass through any process of checks and balances before it sees the light of day on a platform. If a user wishes to upload something, she simply goes ahead and does it, probably without much thought to the impact it might have on others.
If this reads like the introduction to a horror story, let me clarify that most of it, an overwhelmingly large proportion of the content being uploaded, is perfectly acceptable and fit to be viewed and shared further. It meets the content guidelines put out by social media platforms, as well as the unspoken rules of civil engagement in a society.
However, a small percentage is not. The hate speech exhorting followers to violence. The graphic violence perpetrated by a terror group. The demonic rites carried out by a religious cult. Pornographic material. These are examples of content that should not be put out there for open consumption. Only a small number of uploads cross the line, but cross it they do.
oWorkers draws a majority of its workforce from millennials, many of whom are consumers as well as producers of content themselves, and hence familiar with the context. Its positioning as an employer of choice attracts long lines of job applicants, allowing it to select the most suitable resources for each project. It also enables oWorkers to hire at speed to meet short-term, unplanned spikes in client volume, which would otherwise require clients to pay for an idle workforce maintained just to handle a few days of surges.
As an open platform, YouTube is not immune to such events. This creates the need for YouTube content moderation: something akin to the editorial process of the traditional publishing and content creation industry, so that the platform can be kept safe and orderly, a place where people find it pleasurable to exchange ideas, opinions and content without fear.
How does YouTube moderate?
YouTube has had a profound influence on popular culture over the years of its existence. It has also enabled many people to express their creativity through video content that could be accessed by millions, and has made millionaires of many creators in the process.
However, as we have seen earlier, moderation has become a requirement, driven by a few ‘loose cannons’ who take advantage of the ‘freedom of speech’ that platforms like YouTube offer and interpret it to suit their own devious and suspect ends.
How, then, does YouTube moderate?
Setting up publishing guidelines
As on almost all other platforms, YouTube content moderation begins with policies set out in no uncertain terms. Users are required to confirm that they agree to abide by the policies of the platform while signing up for it. In the absence of explicit guidelines, users could contend that they were not aware of the rules and get away with murder. Hence, setting up the rules and regulations is an important step in taking that excuse away from potential wrongdoers.
An extract from their Community Guidelines:
“YouTube has always had a set of Community Guidelines that outline what type of content isn’t allowed on YouTube. These policies apply to all types of content on our platform, including videos, comments, links and thumbnails. Our Community Guidelines are a key part of our broader suite of policies and are regularly evaluated in consultation with outside experts and YouTube creators to keep pace with emerging challenges.
We enforce these Community Guidelines using a combination of human reviewers and machine learning, and apply them to everyone equally – regardless of the subject or the creator’s background, political viewpoint, position or affiliation.
Our policies aim to make YouTube a safer community while still giving creators the freedom to share a broad range of experiences and perspectives.”
They cover a wide range of subjects, such as:
- Fake engagement
- Impersonation
- Spam, deceptive practices and scams
- Child safety
- Nudity and sexual content
- Suicide and self-injury
- Vulgar language
- Hate speech
- Violent or graphic content
and many others.
Each policy is even spelled out in some detail. For example, this is what the vulgar language policy states:
“Some language may not be appropriate for viewers under 18. We may consider the following factors when deciding whether to age-restrict or remove content. Keep in mind that this isn’t a complete list.
- Use of sexually explicit language or narratives
- Use of excessive profanity in your video
- Use of heavy profanity in your video’s title, thumbnail or associated metadata
Here are some examples of content which may be age-restricted:
- A video focused on the use of profanities such as a compilation or clip taken out of context
- A video featuring road rage or sustained rant with heavy profanities
- A video with use of heavy profanities during a physical confrontation or to describe acts of violence”
YouTube content moderation – how is it done?
Rules and regulations can only go so far, and no further. Forget social media; in real life, too, we have rules and regulations articulated in reasonable detail. Despite that, transgressions take place: houses get broken into, people get murdered and vehicles speed through stop signs.
Why?
Because someone has reached a point where breaking the rule has a greater payoff for the transgressor than abiding by it. While nobody would do a formal break-even analysis before committing a crime, the person’s moral, emotional, physical and mental state has perhaps reached a point where committing the crime just makes more sense than anything else, rules or no rules.
This is why rules and regulations need to be backed up by an enforcement mechanism, without which they remain an academic exercise: they sound good, but nobody really cares about them.
How it goes about managing its processes, like moderating offensive content, is YouTube’s internal business. However, the company is making efforts to articulate its strategy in clear terms and share it with users and others who may be interested.
It has put out a video (what else?) alongside its Community Guidelines, in which two senior functionaries of the company explain the effort that goes into what we know as moderation.
According to the video, content can be flagged by any logged-in user. Since each individual viewer has a limited perspective, YouTube has also created a ‘trusted flagger’ program through which designated people can more readily identify and flag offensive content. It is generally applied to content in areas with a higher level of sensitivity, such as government, defence and non-profits, and trusted flaggers also have access to training.
The flagged content is reviewed by, who else, reviewers. Reviewers are a team of experts, well-versed in the platform’s guidelines, who are entrusted with the job of taking a decision on the flagged content. These YouTube content moderation experts provide coverage throughout the day and night, across all time zones. They are a multilingual group, widely distributed around the world, because content can emanate from anywhere, in any language.
oWorkers mirrors this infrastructure by providing support 24×7 across its three global centers. With its avowed policy of multi-cultural and multi-ethnic hiring, it supports work in over 22 of the most common languages of the world.
YouTube understands that much of the identified content was not put out by users with harmful intentions; it is possible they simply did not recognize the content as offensive. Hence, the first time a user’s content is identified as offensive, a warning may be issued, followed by a ‘strike’ against her if the offense is repeated. If there are three strikes within a 90-day period, the account is blocked. The inadvertence of most offenses can be gauged from the statistic that 94% of users who get a first ‘strike’ never get a second. YouTube also makes an appeal process available to ‘struck’ users.
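To make the escalation concrete, here is a minimal sketch, in Python, of how a warning-then-strikes policy with a 90-day window and a three-strike limit could be modelled. The names and data structures are illustrative assumptions made for this article, not YouTube’s actual implementation, which also includes the appeal step not shown here.

```python
# Illustrative sketch of a warning -> strike -> block escalation (not YouTube's real system)
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)   # strikes older than this no longer count
MAX_STRIKES = 3                      # a third strike inside the window blocks the account

class Account:
    def __init__(self):
        self.warned = False          # a first offense earns only a warning
        self.strikes = []            # timestamps of strikes still inside the window
        self.blocked = False

def record_violation(account: Account, now: datetime) -> str:
    """Apply the escalation for one confirmed violation and report the outcome."""
    if account.blocked:
        return "account already blocked"
    if not account.warned:
        account.warned = True
        return "warning issued"
    # Drop strikes that have aged out of the 90-day window, then add the new one
    account.strikes = [t for t in account.strikes if now - t < STRIKE_WINDOW]
    account.strikes.append(now)
    if len(account.strikes) >= MAX_STRIKES:
        account.blocked = True
        return "account blocked: three strikes within 90 days"
    return f"strike {len(account.strikes)} recorded"
```

The statistic that 94% of first-strike users never earn a second suggests that, in practice, most accounts never travel beyond the first branches of a function like this.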
Since 2017, YouTube has increasingly introduced Machine Learning (ML) into the mix. Computer programs are taught to identify offensive content from examples, including samples of what is not offensive, and by drawing connections between them. The ML initiative has enabled moderation efforts to handle much larger volumes, at much greater speed. It also allowed the work to continue more or less unimpacted during the Covid-19-induced lockdowns, when a lot of staff was not available. Flagged content is reviewed and adjudicated upon; in some cases, when the algorithm has a statistically high level of confidence that the ML-identified content is offensive, it may be removed even without a human review. Reviewers’ decisions keep feeding back into the ML process to make it smarter.
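The way the classifier and the human reviewers divide the work can be pictured as a simple routing rule: content scored above a high-confidence threshold is removed automatically, other flagged content goes to a review queue, and reviewer verdicts are kept as fresh training examples. The threshold values and the function and variable names below are assumptions made for illustration; YouTube has not published its internal design.

```python
# Illustrative routing of content between automatic removal and human review
AUTO_REMOVE_THRESHOLD = 0.98     # assumed: act without review only at very high confidence
REVIEW_THRESHOLD = 0.50          # assumed: below this, an unflagged item is left alone

review_queue = []                # flagged or uncertain items awaiting a human decision
training_feedback = []           # (item_id, verdict) pairs fed back to retrain the model

def route(item_id: str, violation_score: float, flagged_by_user: bool) -> str:
    """Decide what happens to one piece of content given the model's confidence."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"           # statistically confident, no review needed
    if flagged_by_user or violation_score >= REVIEW_THRESHOLD:
        review_queue.append(item_id)             # humans adjudicate the uncertain cases
        return "queued for human review"
    return "no action"

def record_review(item_id: str, is_violative: bool) -> None:
    """Store the reviewer's verdict as a labelled example that makes the model smarter."""
    training_feedback.append((item_id, is_violative))
```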
With its close relationships with technology companies, built over many years, oWorkers has access to the most advanced technologies, whether for ML, AI (Artificial Intelligence) or anything else. These technologies are deployed on client work, and many of those clients are themselves technology companies. It is not an accident that oWorkers delivers cutting-edge technology solutions to its clients.
Of course, since much of this is a human-dependent activity, variations in treatment can happen. There are also some documented exceptions, termed EDSA (Educational, Documentary, Scientific and Artistic) content, which may be allowed through despite not being in line with some of the principles. For example, snippets depicting child abuse may be permitted if the purpose is to create education, knowledge and awareness around the subject.
Its efforts appear to be bearing fruit. YouTube has even started measuring the efficacy of its moderation process by calculating the ‘violative view rate’: the share of views that go to videos violating its policies before they are taken down. It says that since the introduction of ML into the mix, this rate has gone down by 70%, testifying to the speed with which ML helps identify offensive content. It claims that 94% of the content taken down was identified with the help of automated systems, and that a majority of those videos had garnered fewer than 10 views. It also claims to have taken down over 83 million videos since it began releasing enforcement reports more than three years ago.
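The arithmetic behind the violative view rate is simple: it is the fraction of all views that landed on videos later judged violative. YouTube estimates it by sampling videos for review, but a toy calculation, with invented numbers, looks like this:

```python
# Toy calculation of a violative view rate (the numbers are invented for illustration)
def violative_view_rate(views_by_video: dict, violative_ids: set) -> float:
    """Fraction of all views that went to videos later found to violate policy."""
    total_views = sum(views_by_video.values())
    violative_views = sum(v for vid, v in views_by_video.items() if vid in violative_ids)
    return violative_views / total_views if total_views else 0.0

sample_views = {"vid_a": 600, "vid_b": 397, "vid_c": 3}   # 1,000 views in total
violative = {"vid_c"}                                     # one video was later removed
print(f"{violative_view_rate(sample_views, violative):.2%}")   # prints 0.30%
```

A falling rate means violative videos are being caught before they accumulate views, which is exactly the trend the 70% figure describes.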
YouTube content moderation for channels
As with any other social media platform, businesses seek to expand their message and reach through YouTube. And, as on other platforms, they need to monitor and moderate the user-generated content that gets created on their space.
This is where specialist providers like oWorkers come into the picture and help companies monitor and moderate their YouTube channels and comments.
oWorkers is a specialist, focused on data-related BPO services such as content moderation. As one of the top three data-related providers, having supported global clients for over 8 years, and led by a management team with over 20 years of hands-on experience in services of the digital age, it is a natural choice of partner for YouTube content moderation.
Many of our clients, especially from the US and Western Europe, consistently report savings of up to 80% when they outsource to us. We offer transparency in pricing, usually giving clients a choice between paying per unit of output and paying per unit of input.
We work with employed staff, not the freelancers and contractors that some of our competitors seem to prefer in order to avoid long-term commitments and responsibility. We regularly receive ratings of 4.65 and above, on a scale of 5, from past as well as present employees on platforms like Glassdoor.