Why is Facebook Content Moderation important?

Many voices today suggest that Facebook’s supremacy among social media platforms is being challenged by upstarts whose formats appeal to younger people and first-time users.

According to the website Datareportal, as of July 2021 there were over 4.48 billion registered social media users in the world, with as many as 520 million signing up in the first seven months of the year. More than half the global population uses social media in one way or another; the proportion rises to close to 70% if we consider only ‘eligible users’, which removes from the denominator demographics, such as young children, who cannot have social media accounts.

Facebook has 2.8 billion monthly active users. YouTube has 2.3 billion and WhatsApp, now a part of Facebook, has 2 billion. Instagram has 1.4 billion.

In a free market, the emergence of competition is inevitable. Each new player makes an effort to create its own niche and tries to address a segment whose needs, it believes, are not met by existing products. Different platforms have succeeded, to varying degrees, in challenging the supremacy of Facebook. However, as the data shows, Facebook remains numero uno in the social media space, and it is even more dominant if we include its group products like WhatsApp and Messenger, which on their own are also easily in the top 10.

And that has been the situation ever since founder Mark Zuckerberg sat down in his Harvard dorm to write the code that created it. Some of us have been exposed to the early days of Facebook thanks to The Social Network, a Hollywood movie based on its founding and the controversies around it. Of course, it is a work of art, made with creative license, and hence to be taken with a pinch of salt.

Starting life at a time when social media had begun to fire the world’s imagination, oWorkers is steeped in knowledge of the digital world. It has chosen to specialize in data-related service offerings, including those related to social media. In its brief existence of 8 years, it is already counted among the top three providers of data-based BPO services in the world.

 

Need for Facebook content moderation

So many users, so many opinions.

From a business perspective, Facebook’s reach has perhaps exceeded the wildest imagination of its founders, who, it is believed, intended to create a platform through which students at Harvard and other universities could communicate; its eventual popularity, extending to a third of the global population, would no doubt have been a pleasant surprise. That popularity has created huge revenue opportunities for the platform: advertising, data, and businesses looking to reach target segments for their products and services. The ways of monetizing the opportunity have continued to expand.

But blessings are never unmixed.

On the one hand, the openness of social media platforms, and the ability of users to access and create information, feeds into the modern-day narrative of freedom of speech and lowering of censorship barriers.

On the other hand, the same openness appears to be an invitation for some people to create content that many others might consider vile and offensive.

Like what?

Like hate speech denigrating followers of a community, faith, or group and exhorting people to violence.

Like images of graphic violence posted by an adherent of a terrorist organization.

Like pornographic videos.

And much more.

Social media platforms are, eventually, a mirror of real life. Just as real life has a small percentage of criminals who need to be managed, the percentage of creators of offensive content is fairly small; but since the platform is open, efforts need to be made to ensure such content does not reach its target audience and cause the harmful consequences it aims to create. This creates the need for Facebook content moderation.

Our global clients find the oWorkers pricing mechanism transparent and attractive, enabling them to save up to 80% of their original costs. This is especially true for clients from Western Europe and the US. Clients also appreciate the choice they get of a price based on the input, such as man-hours, or a price based on the output produced.

 

Facebook content moderation – setting standards

In the interest of transparency, setting expectations is important. When we call someone out, we do so by weighing the called-out action or comment against an ‘expectation.’ In day-to-day life, that expectation could be born of commonly understood and accepted practices of human behavior which, even if not defined in writing, are generally well understood by most people. Of course, a more transparent approach is to set the expectations, or guidelines, down in writing, so that the room for ambiguity and interpretation is reduced.

Organizations make an effort to articulate guidelines that define expected behavior when interacting with the company on their properties, such as a website or a community page. Facebook is no different. In fact, since social media is a business that depends on a wide variety of people, or users, joining, setting out standards and expectations is particularly important, as it provides a baseline against which participation can be evaluated. Moreover, users of the platform have to agree to abide by the rules and regulations that have been set out before they can begin participating.

‘Facebook Community Standards’ define what content is acceptable and what is unacceptable on the platform, and can be viewed on the Facebook website. An extract from the page:

“The goal of our Community Standards is to create a place for expression and give people a voice. The Facebook company wants people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. In some cases, we allow content—which would otherwise go against our standards—if it’s newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm, and we look to international human rights standards to make these judgments.

Our commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse. For these reasons, when we limit expression, we do it in service of one or more of the following values:

AUTHENTICITY

We want to make sure the content people see on Facebook is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using Facebook to misrepresent who they are or what they’re doing.

SAFETY

We’re committed to making Facebook a safe place. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.

PRIVACY

We’re committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, choose how and when to share on Facebook and connect more easily.

DIGNITY

We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.”

Community standards for Facebook content moderation “apply to everyone, all around the world, and to all types of content.” They are divided into sections for ease of reference. A few prominent ones are:

Violence and Criminal Behavior

This includes:

“Violence and Incitement

Dangerous Individuals and Organizations

Coordinating Harm and Publicizing Crime

Regulated Goods

Fraud and Deception”

Safety

This includes:

“Suicide and Self-Injury

Child Sexual Exploitation, Abuse and Nudity

Sexual Exploitation of Adults

Bullying and Harassment

Human Exploitation

Privacy Violations and Image Privacy Rights”

Objectionable Content

This includes:

“Hate Speech

Violent and Graphic Content

Adult Nudity and Sexual Activity

Sexual Solicitation”

oWorkers, with three strategically located global delivery centers that can individually operate on a 24×7 basis should a client require it, is well equipped to provide business contingency to clients by splitting volumes across centers while presenting a common front to the client.

 

Enforcement of Facebook content moderation guidelines

A law without a mechanism for enforcement is usually considered to be toothless.

Facebook employs a combination of technology and people to enforce its guidelines in a two-step process of ‘detection’ and ‘taking action.’

Detection

Technology is playing an increasingly important role in detecting violations before they are reported or even viewed, in line with global trends. While human beings are better at evaluating content and can understand context and fine nuances much better than a machine, they have limited capacity and cost money on an ongoing basis. With the huge amount of content being uploaded every second, relying on people alone to detect offensive content is a losing battle.

Technology, on the other hand, can process and review millions of pieces of content concurrently. And, once developed, its running costs are fairly low. With technologies like Artificial Intelligence (AI) gaining ground, there is increasing reliance on technology for flagging potentially harmful content.

Technology even helps reviewers prioritize content.
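As a rough illustration, not a description of Facebook’s actual systems, a detect-and-prioritize pipeline of this kind can be sketched in a few lines of Python. The scores, thresholds, and term list below are invented for the example; a real system would use a trained classifier rather than a lookup table:

```python
import heapq

# Hypothetical violation scores a model might assign (0 = benign, 1 = violating).
# In a real system these would come from an ML classifier, not a lookup table.
def model_score(post: str) -> float:
    flagged_terms = {"graphic-violence": 0.95, "hate": 0.85, "spam": 0.4}
    return max((s for term, s in flagged_terms.items() if term in post), default=0.05)

AUTO_REMOVE_THRESHOLD = 0.9   # high-confidence violations removed automatically
REVIEW_THRESHOLD = 0.3        # uncertain content routed to human reviewers

def triage(posts):
    removed, review_queue = [], []
    for post in posts:
        score = model_score(post)
        if score >= AUTO_REMOVE_THRESHOLD:
            removed.append(post)  # removed before anyone sees it
        elif score >= REVIEW_THRESHOLD:
            # negate the score so the riskiest content sorts to the front
            heapq.heappush(review_queue, (-score, post))
    return removed, [p for _, p in sorted(review_queue)]

removed, queue = triage(["cat video", "spam offer", "graphic-violence clip"])
```

The two thresholds mirror the division of labor described above: content the model is confident about is acted on automatically, while borderline content is queued for reviewers, riskiest first.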

With the relationships they have built with technology providers, oWorkers today has access to the latest technology. This benefits their clients, as these technologies are eventually used for work on their projects. The super-secure facilities they operate from and their ISO (27001:2013 & 9001:2015) certifications provide additional comfort to clients. They are also GDPR compliant.

Taking action

Technology works in concert with teams of reviewers to obstruct or delete offensive content. Facebook says, “Our technology proactively detects and removes the vast majority of violating content before anyone reports it. Engineers, data scientists and review teams work together to update and improve this technology over time. Meanwhile, our technology helps review teams prioritize content…Most of this happens automatically, with technology working behind the scenes to remove violating content—often before anyone sees it. Other times, our technology will detect potentially violating content but send it to review teams to check and take action on it.”

With their unique positioning as preferred employers, oWorkers attracts a steady stream of walk-in jobseekers. Not only does this reduce their hiring costs, reflected in the attractive pricing they are able to offer, it also gives them a choice of talent for various projects, including content moderation. The steady stream also gives them the ability to hire for short-term peaks in demand, adding up to 100 additional resources within 48 hours.

Reviewer teams for Facebook content moderation are spread across the world, provide 24×7 coverage, and are capable of handling content in more than 50 languages. After all, offensive content is not the preserve of a particular language, culture, or geography.

Having actively practised employing multi-ethnic and multi-cultural teams in all their offices, oWorkers has gained a multi-lingual support capability as a by-product. They are able to provide support in 22 of the most common global languages.

“As potential content violations get routed to review teams, each reviewer is assigned a queue of posts to individually evaluate. Sometimes, this review means simply looking at a post to determine whether it goes against our policies, such as an image containing adult nudity, in instances when our technology didn’t detect it first.

In other cases, context is key. For example, our technology might be unsure whether a post contains bullying, a policy area that requires extra context and nuance because it often reflects the nature of personal relationships. In this case, we’ll send the post to review teams that have the right subject matter and language expertise for further review. If necessary, they can also escalate it to subject matter experts on the Global Operations or Content Policy teams.”

Facebook has a number of actions in its arsenal that can be initiated in the event of a violation. These typically include strikes, with each additional strike further reducing the user’s privileges; placing restrictions on accounts; disabling accounts; restricting the accounts of public figures during periods of civil unrest; and removing pages and groups.
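A strike system of this kind amounts to an enforcement ladder: each threshold of accumulated strikes unlocks a stricter penalty. The sketch below is illustrative only; the thresholds and penalty names are invented for the example and do not reflect Facebook’s actual policies:

```python
# Penalty ladder: (minimum strike count, penalty). Thresholds are hypothetical.
PENALTIES = [
    (1, "warning"),
    (3, "restrict_posting"),
    (5, "disable_account"),
]

def action_for(strike_count: int) -> str:
    """Return the strictest penalty the given strike count has reached."""
    action = "none"
    for threshold, penalty in PENALTIES:
        if strike_count >= threshold:
            action = penalty  # later entries are stricter, so keep overwriting
    return action
```

Each new strike moves a user further down the ladder, capturing the idea that every additional strike reduces the user’s privileges.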

 

Supporting Facebook content moderation

oWorkers believes in working with employed staff, unlike some of its competitors, who prefer contractors and freelancers. Though this may mean carrying additional cost at times, it provides flexibility in deploying resources while maintaining an experienced middle-management layer in the organization. The team is led by managers with over 20 years of hands-on experience in the industry.

We work with many less privileged communities. The work they do for us becomes their ticket into the global digital economy. The work you outsource to us will enable us to usher a few more youngsters from these communities into that economy.
