The Case for Automated Content Moderation

Let’s go step by step.

What is moderation?

Moderation, as commonly understood, is the process of guiding a discussion or movement away from extremes. Exercising moderation requires one to be within reasonable limits, as defined by law or by societal norms. It is the act of being calm or measured.

What is content?

While the English word ‘content’ has many meanings, in the media, publishing, and communication sense in which we are using it here, content is “the information and experiences that are directed toward an end user,” something “that is to be expressed through some medium, as speech, writing or any of various arts.” Content can be delivered via many different media, including the internet, cinema, television, radio, smartphones, audio CDs, books, e-books, magazines, and live events such as speeches, conferences, and stage performances.

With the internet reaching the remotest corners of the world, content is being created at a mind-boggling rate. Every time we comment on a post, we create content. Every review we write of a new product is content. Every vacation photo we share is content. Multiply that by the several billion people doing similar things, and you get some idea of the scale at which content is being created, and consumed, today.

This vast and distributed content-creating machinery that the world has become, thanks to the internet and social media, has a free hand in what it can create. While in their private lives creators may be free to do as they please, within the confines of civil liberties and local regulations, once their content becomes widely available it needs to adhere to basic human and societal guidelines, written or unwritten. It also needs to adhere to the defined, usually written, content rules of the platform on which it is created.

This gives rise to the need for content moderation.

The practice of monitoring user-generated content (UGC) and determining its suitability for public visibility is known as content moderation. It is generally performed against pre-defined and generally accepted rules and guidelines.

oWorkers has been moderating content generated on web properties owned and maintained by its clients for over seven years. As traffic has increased, so has our capability to ensure that communities adhere to the objectives with which they were created. We have been named one of the top three global providers of data services on multiple occasions, and we focus relentlessly on our chosen areas of data-based BPO services.

Before we move to automated content moderation, let us take a quick look at the different ways in which moderation can be done.


Methods of moderating content

Assuming content moderation is a given, something that needs to be done, it can be performed manually or it can be automated; a combination of techniques is, of course, always an option. And if one does not treat moderation as a given, a third method exists at the top level: no moderation.

Manual moderation

Before any process is automated, for reasons of efficiency, volume, standardization, or any other, it is done manually. Manual moderation can be further subdivided, usually based on the stage at which the exercise is being carried out:

  • Pre-moderation – This works like an approval queue: content is published only after a moderator has reviewed and approved it. It can also be called pre-publish moderation. Though it has limitations, such as delayed content visibility and stifled open communication, it may be suitable for sensitive websites and subjects.
  • Post-moderation – As the name suggests, content is published immediately and reviewed afterwards, being removed if found objectionable. It can also be called post-publish moderation. While it permits open communication and healthy sharing, unsavory content can cause damage before it is taken down.
  • Reactive moderation – Reliance is placed on the community visiting and participating in the online forum to flag content that is out of place in that setting. This can work where members are deeply invested in the community and keen to ensure its success; otherwise, it is better used as an additional check.
  • Distributed moderation – This relies on a rating mechanism: members rate content on the basis of its relevance, and content rated low gets pushed down until it almost vanishes from view. As with reactive moderation, business-run communities are generally reluctant to leave moderation to the whims of participating members, so it is used, if at all, in addition to business-guided moderation.
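The difference between the first two approaches can be sketched as a simple publishing flow. This is a minimal illustration, not a real system: the single-word block list and the function names are hypothetical.

```python
from collections import deque

def is_acceptable(text):
    """Placeholder review step; a real moderator or tool would decide."""
    banned = {"spam"}  # hypothetical block list
    return not any(word in text.lower() for word in banned)

def pre_moderate(submissions):
    """Pre-publish moderation: content is queued and published only after review."""
    queue = deque(submissions)
    published = []
    while queue:
        post = queue.popleft()
        if is_acceptable(post):
            published.append(post)  # becomes visible only after approval
    return published

def post_moderate(submissions):
    """Post-publish moderation: everything goes live at once, then gets swept."""
    published = list(submissions)  # visible immediately
    return [p for p in published if is_acceptable(p)]

posts = ["great product", "buy spam now"]
print(pre_moderate(posts))   # ['great product']
print(post_moderate(posts))  # ['great product']
```

Both flows end with the same surviving content; the difference is whether the objectionable post was ever visible to other users.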

oWorkers has access to a continuous supply of the best talent, being a preferred employer in the communities we operate in, thanks to our active engagement with them. With a committed training team taking over the task of training the hired resources, they are released to the delivery team ready to ‘hit the ground running.’ Our ability to attract talent keeps our hiring cost low and enables us to cater to short-notice ramps if client volumes rise seasonally or otherwise. This, in turn, saves a lot of needless cost for clients. We can hire an additional 100 people in 48 hours.

Automated content moderation

As business and transactions grow, some elements of automation become necessary. Content moderation is no different. With the relentless rise in the volume of transactions and content on the internet, automated moderation is becoming less of a choice and more of a necessity.

That said, automation can mean different things to different people, and unique automation tools and processes can be implemented for each company or platform using them. Some of the more common types of automation are:

  • Filters – Moderation through filters is simple and quick and does not require much technology. Lists of acceptable or unacceptable words and phrases are created, and the tool faithfully applies them as required. Human beings can continue to review the outcomes and update the filter lists.
  • Blocking IP addresses – Users identified as abusive can be blocked from further interaction with the platform. Of course, users intent on creating malicious content can acquire new IP addresses and IDs to get through the block.
  • Natural Language Processing (NLP) – The human brain processes information and identifies context, which a machine cannot, or can only to the extent it has been programmed to. With NLP techniques, software applications can better identify context, patterns of conversation, relationships, and so on, and act accordingly, giving automated content moderation an edge.
  • Artificial Intelligence (AI) – AI has been in the making for several years and is often trotted out as the moment when machines learned to think and behave like humans. While it makes processing tasks and transactions easier and faster, in reality we are far from such a time. Nevertheless, AI tools are rapidly expanding the remit of what automated moderation can handle: from textual content handled by filters and NLP, automation can now also review and make sense of images, audio, and video.
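The filter approach above, the simplest of these, can be sketched in a few lines. The phrase list here is purely illustrative; in practice it would be maintained and updated by human reviewers, as noted above.

```python
# Hypothetical block list, maintained and updated by human reviewers.
BLOCKED_PHRASES = {"banned word", "offensive phrase"}

def filter_content(text):
    """Return (allowed, matches): allowed is False if any blocked phrase appears."""
    lowered = text.lower()
    matches = [p for p in sorted(BLOCKED_PHRASES) if p in lowered]
    return (len(matches) == 0, matches)

allowed, hits = filter_content("This contains a BANNED word.")
print(allowed, hits)  # False ['banned word']
```

The strength of this approach is transparency: a human can always see exactly which phrase triggered the block. Its weakness, as discussed below, is that the list goes stale unless someone keeps updating it.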

While not perfect, automation can handle the large share of content that is easily understood and safe to process, perhaps 90% of the volume, leaving the remaining 10% in the good hands of human moderators. It also helps a brand avoid embarrassment if competitors crack the moderation algorithm and post content critical of the owner while lauding their own brand.

Being GDPR compliant and ISO (27001:2013 & 9001:2015) certified is the starting point for oWorkers. We operate from secure facilities and were one of the first BPOs to create infrastructure enabling staff to work from home in a secure environment, given the constraints imposed by the Covid-19 pandemic.

Our enduring partnerships with technology providers around the world ensure that we have access to the latest tools for our requirements. Clients also benefit, as these tools are eventually used to process client transactions.


Limitations of automated content moderation

Automation delivers many benefits, which is perhaps the primary reason we continue to pursue it wherever we see an opportunity.

Blind automation, however, can do more harm than good. Automation is not an unmixed blessing; it has its limitations. To use it to our greatest advantage, we must recognize these limitations in any automation effort so that they do not detract from the exercise and its outcomes.

Needs to be kept updated

Software tools do not have an inbuilt update mechanism that keeps ingesting events from the environment and revising the algorithm to stay current, though many AI models now claim some ability to update themselves automatically.

Even on a particular platform or in a community, the language and discussion topics might keep changing. A set of filters created when the discussion was around types of alcohol may not be relevant if the discussion has moved on to the issue of alcoholism in juveniles.

Hence, one cannot implement an automated content moderation system and forget about it. It needs to be constantly monitored and kept updated.

No awareness of context

Not being blessed with the human mind, machines have no awareness of context beyond what they have been given. Heart-shaped emojis may be appreciated on one platform while being considered offensive on another, and the same differences can exist between geographical regions.

If an image of a female breast has been classified as nudity, with the required action being to remove the content, then an image of a breast in the context of feeding a baby is likely to be treated in the same manner and removed, until our models reach a much higher level of sophistication.

They align with major behavior patterns, not with unusual ones

Acceptable content is generally similar, while unacceptable content is dissimilar in its own unique ways.

When datasets are used to train AI models for moderation, there are many examples of acceptable content, since most content is acceptable, and far fewer examples of unacceptable content, since the outliers are few and far between. As a result, the model is well equipped to handle acceptable content, which was never the issue to begin with, while being less well equipped to handle the unacceptable content for which the entire edifice of moderation was erected.
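One common mitigation for this imbalance is to weight the rare class more heavily during training, so that each violation example influences the model as much as many acceptable ones. A sketch using inverse-frequency class weights; the 95/5 split and the label names are illustrative assumptions:

```python
from collections import Counter

# Hypothetical training labels: most content is acceptable.
labels = ["ok"] * 950 + ["violation"] * 50

counts = Counter(labels)
total = len(labels)
num_classes = len(counts)

# Inverse-frequency weighting: rarer classes get proportionally larger weights,
# so mistakes on violations cost the model far more than mistakes on "ok".
weights = {label: total / (num_classes * count) for label, count in counts.items()}
print(weights)  # the "violation" class is weighted about 19x more than "ok"
```

Weighting does not create new examples of unacceptable content, which is why, as noted below, many tools still route uncertain cases to a human review queue.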

This is perhaps the reason many tools end up throwing content into a queue for human review.

Creator and dataset bias

AI models for automated content moderation, as with other AI models, are likely to be trained by a group of individuals who have their own biases and prejudices, perhaps like all human beings. These biases will creep into the training they impart to the AI model and will forever be a part of the model’s decision-making.

Bias for text

Computers have been brought up to understand text formatted in a manner they can parse, referred to as code or software. Each character is a well-defined unit that carries a defined meaning for the computer, either individually or in a defined sequence and pattern with other characters. Their ability to understand other types of content, such as audio, images, and video, is limited. An image, for example, is just a collection of dots, or pixels: unstructured data. Of course, AI has made progress in getting machines to recognize and understand unstructured data, but it takes a lot of effort and large training datasets to build even a somewhat usable understanding.

With several unicorn marketplaces as longtime clients, oWorkers understands the challenges of this work and is equipped to handle them. With centers in three of the most sought-after delivery locations in the world, oWorkers employs a multicultural team that enables it to offer services in 22 languages. Operating with employed staff, as opposed to the contractors and freelancers used by many competitors, we regularly monitor each individual’s performance as part of a larger career management framework and take steps such as training programs and job rotation as and when needed.

Our leadership team comes with hands-on experience of over 20 years in the industry and leads all client discussions and engagements while overseeing the delivery.


Automation is a compelling proposition

While acknowledging the limitations that every automated system will experience, we must also acknowledge the contribution of automated content moderation, not only in handling ever larger volumes at ever greater speed, but also in mitigating the psychological impact on human moderators of reviewing damaging content. Though employers trot out homilies about the great work environment and the psychiatric support available to staff in this job, the truth is that constant exposure to graphically violent, hateful, sexually abusive, or racist content can leave scars on the psyche that are difficult to recognize and handle, even with the best psychiatric support.
