What is content moderation?

We may not realize it, but each time we interact with a social media platform, we are either consuming content someone else has created or creating content we would like others to consume. A review of the last book I read, photographs from my niece's wedding last week, an update on my travels to the Maldives: these are all forms of content I create and want to share with others, so that they read or view it and interact with me.

Similarly, many other people I know are doing the same, with the same intent of getting others, like me, to consume and interact with their content. In this case they are the producers and I am the consumer. Many platforms have also evolved into tools for disseminating information. It is common for authorities to keep their Twitter handles updated so that people can get accurate information on what is going on, especially in times of crisis or emergency. During the Covid-19 pandemic, people widely used social platforms to spread information on the availability of hospital beds, drugs, and so on.

Of course, while social media platforms have the potential for good, they can also be abused, like anything else. They are often leveraged for spreading malicious information about communities and groups. They are used for spreading rumors and lies, sometimes to the extent of threatening law and order.

 

Social media for business

An unimaginable amount of data is being created every moment that is being shared and consumed over social media platforms.

As consumers spend so much time on social media platforms, a natural interest arises among organizations, which have always sought ways of reaching their target populations in the most effective manner. If consumers of all kinds are already present here, can businesses be far behind?

Social media platforms give companies an opportunity to interact with consumers in as close to a natural setting as possible. Companies leverage these platforms to create awareness about their products and services, in other words to promote them. At the same time, they can keep gathering information on consumer tastes and preferences: a kind of market research conducted in a natural setting, as opposed to the artificial setting of a survey form being filled in.

Because these are digital platforms that require users to create a profile or account, consumer demographic information is available to them at a level of detail that was hitherto not possible. Advertising in newspapers, tabloids, and on billboards was a scatter-gun approach: you spray a lot of bullets around in the hope that a few will find the target. On social platforms, for a fee, the platform makes demographic data available to companies, which can then drill down to the specific segment of people they wish to reach without wasting their message on others. There is thus greater 'bang for the buck' available to corporations.

 

The case for content moderation

Organizations are busy leveraging social media platforms to further their interests. They are busy creating communities and groups in a bid to bolster their presence, appeal, and brand recall.

At the same time, the raw platform exists for all users who choose to sign up and create an account on it. They could engage with their own groups and communities or with the world at large. They could join the communities created and sponsored by other users, like the organizations we referred to, or create groups and communities themselves. Whatever their method of engagement, a wide choice of both consumption and production of content remains available to all users.

Taking the scatter-gun analogy a little further, we need to watch out for the 'loose cannons' among social media users. By and large, every individual is a responsible, caring being. However, some are not. Without delving into the reasons that make them so, we know that crime is a reality: murders take place, rapes and thefts happen. On the civil side of the divide, contracts get violated, leading to litigation and court cases.

Social media is yet another space subject to all the variety and vagaries of the human mind and psyche. For many, it is a safe corner where they can be themselves, away from the prying eyes of the world, with no social setting in which they are immediately watched or judged. This 'safety' of the world wide web can be toxic and heady, and can lead people to create content that violates the rules civil society has defined for itself. There could be graphic content that is poison for the young minds that throng to these platforms. There could be hateful content with the potential to incite people against people and communities against communities. Content could also be outright illegal.

To keep these platforms from degenerating into a free-for-all, they need to be moderated, or watched over. This is a responsibility we have to ourselves and to civil society. This is what is commonly known as content moderation, a practice followed by all social media platforms that position themselves as open platforms.

oWorkers has established itself as a premier data services BPO. One of our key offerings is social media content moderation. We have supported global clients in managing increasing volumes of content by deploying tools along with trained human resources. We have been identified as one of the top three data services BPO providers in the world.

With a presence spanning three geographies and the ability to deliver services in over 22 languages, oWorkers is a one-stop shop for many of our clients.

 

How it works

In simple terms, content moderation is the practice of monitoring content and moderating it where required. Moderation can take two basic forms: the content can be modified, or it can be disapproved for display (or deleted), in which case it will not be available to other users of the platform. Of course, each platform and community owner will have many different ways of executing these two actions.
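The two basic actions can be sketched as a simple decision function. This is a minimal illustration only: the rule lists, names, and masking logic below are assumptions made for the sketch, not any platform's actual policy or API.

```python
from dataclasses import dataclass

# Illustrative moderation outcomes: approve as-is, modify, or reject.
APPROVE, MODIFY, REJECT = "approve", "modify", "reject"

# Hypothetical rule lists for this sketch; real policies are far richer.
BANNED_WORDS = {"badword"}   # posts containing these are disapproved
MASKED_WORDS = {"darn"}      # posts containing these are edited before display

@dataclass
class Decision:
    action: str
    content: str  # the content as it will appear; empty if rejected

def moderate(text: str) -> Decision:
    words = [w.lower() for w in text.split()]
    if any(w in BANNED_WORDS for w in words):
        # Disapproved: never becomes visible to other users.
        return Decision(REJECT, "")
    if any(w in MASKED_WORDS for w in words):
        # Modified: offending words are masked, the rest is kept.
        cleaned = " ".join(
            "*" * len(w) if w.lower() in MASKED_WORDS else w
            for w in text.split()
        )
        return Decision(MODIFY, cleaned)
    return Decision(APPROVE, text)
```

In practice the decision logic lives behind queues and review tools, but the approve/modify/reject split is the common core.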

Setting expectations is perhaps the logical place to start when one wishes to implement rules. Platform owners may have Terms and Conditions that users must accept before using the platform. Organizations that leverage the platforms to further their interests by developing, nurturing, and supporting vibrant communities around their products and services may also wish to lay down Terms and Conditions, or Guidelines, for the users and visitors of their space. Once that is done, actions like deletion and modification become justified and simple to take. At the very least, one will not face a "you never told me" objection.

oWorkers runs a battery of tests before hiring people for this activity. As can be imagined, reviewing disturbing content can leave emotional scars on the people reviewing it. Being a preferred employer in the regions we operate in helps oWorkers attract talent and gives us choices. Hired resources are provided with training before deployment on client engagements.

Our access to a deep pool of resources also enables us to cater to peaks and troughs in volumes, which can otherwise be a costly exercise for clients.

 

Techniques of content moderation

You can choose from a variety of moderation methods. Of course, one will need to take into account the profile of users, the purpose of the community, the time sensitivity of content, and other factors before deciding on one method, or a combination of several. The common ones are:

No moderation

This, too, is a choice, sometimes forced by circumstances. You may consider your community small, or made up of homogeneous, like-minded people, and hence decide upon no moderation. Or it could be a principled stand that you take. However you arrive at the decision, this is also a moderation method, in a manner of speaking.

Pre-moderation

Content is screened by a moderator before it becomes visible or available on the platform. It becomes visible only if the moderator decides that it can be made visible.

The advantage of this method is obvious: it gives the owner of the space the highest level of control. You can control exactly what appears and eliminate what you don't want.

On the flip side, this creates the greatest lag in making content available. There will be peaks and troughs in volumes, and any manual activity consumes time, creating the possibility of backlog and delay. Users who create content may be put off by these delays and may come to see the site as one that 'manages' interactions instead of letting them flow freely.

Besides, it will be expensive, as it relies on human reviewers.

Post-moderation

The difference in this method of content moderation is that it allows the content to be published without passing through a checkpoint. The content is verified after it has already been published.

The big advantage of this method is that it satisfies users who like to see their content go live immediately. It also makes the community or platform appear open, permitting all content to be published.

On the flip side, content that needs to be deleted may remain visible for some time. While it will eventually be deleted, some viewers may already have seen it, reacted to it, copied it, and shared it further.

Distributed moderation

In this method you leverage your users, almost entirely, for the content moderation that needs to be done. The community is set up so that users' interaction with content leads to its promotion or demotion on the platform. Thus, unpopular content, or content that is voted down, will gradually cease to be visible to new visitors, unless they make the effort to scroll to the end of the page or community to find it.

This can be an effective method where the community is homogeneous and aligned in its views of what is appropriate and inappropriate, both among its members and with the owners of the community.
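The vote-driven visibility described above can be sketched as a simple ranking function. The scoring rule and the hiding threshold below are assumptions for illustration, not how any particular platform ranks content.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net community approval: the ranking signal for this sketch.
        return self.upvotes - self.downvotes

def visible_feed(posts, min_score=-2):
    """Rank posts by community score, highest first, and hide posts
    voted down past the threshold. The threshold value is an assumption."""
    ranked = sorted(posts, key=lambda p: p.score, reverse=True)
    return [p for p in ranked if p.score >= min_score]
```

Popular content rises to the top of the feed; heavily downvoted content simply stops being served, which is the 'demotion' users perform collectively.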

Reactive moderation

This method assumes that all content is good unless a user points out otherwise. The site makes reporting tools available so that offended users can flag such content.

It is extremely cost-effective as far as the requirement of people for moderation is concerned. However, there is very little control over content. You are relying on someone being offended enough to raise an objection so that you come to know of the content, review it, and make a decision.

There is no assurance that all content that violates policy, or is illegal, hateful, or hurtful, will be removed through this mechanism. Consumers also need to be aware and motivated enough to report.
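The report-and-review flow can be sketched as a small queue that escalates a post to human review once enough distinct users flag it. The threshold and class names here are assumptions made for the illustration.

```python
from collections import defaultdict

REPORT_THRESHOLD = 3  # illustrative: distinct reports needed to trigger review

class ReportQueue:
    def __init__(self):
        self._reports = defaultdict(set)  # post_id -> reporting user ids
        self.flagged = []                 # posts awaiting human review

    def report(self, post_id: str, user_id: str) -> None:
        # Using a set means repeat reports from the same user count once,
        # which blunts attempts to game the threshold.
        self._reports[post_id].add(user_id)
        if (len(self._reports[post_id]) >= REPORT_THRESHOLD
                and post_id not in self.flagged):
            self.flagged.append(post_id)
```

Nothing reaches the queue until users act, which is exactly the weakness noted above: unreported content is never reviewed.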

Automated moderation

Then there are automated solutions, mainly in the form of Artificial Intelligence (AI). AI and Machine Learning (ML) have gradually been gathering pace. While machines have long been able to process structured text, such as software code, doing the same with unstructured natural language has been a challenge. With AI, that frontier is now being crossed.

Training data sets are created that capture human judgments, separating acceptable content from unacceptable content, and AI models are built on this basis.

These tools have the advantages of speed and coverage. They can scan an input almost as soon as it has been created, and they can cover every item of content. However, a human watch is still needed to tackle issues beyond their ken, as well as to ensure that their actions are correct.
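As a toy illustration of the training-data idea, the sketch below scores new text against a tiny hand-labelled corpus by vocabulary overlap. Real systems use far larger corpora and proper ML models; every example and label here is an assumption made for the sketch.

```python
import re

# Tiny hand-labelled training set; purely illustrative.
TRAINING = [
    ("have a great day", "ok"),
    ("thanks for sharing this", "ok"),
    ("i hate you all", "bad"),
    ("you people are garbage", "bad"),
]

def _tokens(text):
    return re.findall(r"[a-z']+", text.lower())

# Count how often each token appears under each label.
counts = {"ok": {}, "bad": {}}
for text, label in TRAINING:
    for tok in _tokens(text):
        counts[label][tok] = counts[label].get(tok, 0) + 1

def classify(text: str) -> str:
    """Label text by which vocabulary it overlaps with more."""
    toks = _tokens(text)
    ok = sum(counts["ok"].get(t, 0) for t in toks)
    bad = sum(counts["bad"].get(t, 0) for t in toks)
    return "bad" if bad > ok else "ok"
```

Even this crude sketch shows why human oversight remains necessary: the model only knows what its labelled examples taught it, and anything outside that vocabulary defaults to an arbitrary label.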

With its wide-ranging partnerships with technology companies, oWorkers is able to access the latest technologies relevant to content moderation, and these technologies are deployed on client projects. Together with our GDPR compliance and ISO certifications (27001:2013 and 9001:2015), this gives us the breadth of capability to handle any moderation job for our clients.

 

Content moderation through an outsourced partner

The need has been established.

Will you do it inhouse? Or would you like to hire a specialist partner like oWorkers to execute on your behalf, like many other organizations have done?

The advantages of outsourcing are obvious:

  • You can focus on your core tasks
  • You get the knowledge and skills of an experienced specialist partner; otherwise you will have to build that inhouse
  • With its hiring and training ability, a partner like oWorkers can attract and hire the right talent at the best price
  • It is a more cost-effective solution

Of course, the choice is yours.
