It is not something that gets printed in the annual reports of public companies or promoted through paid advertising by private ones. This is one reason social media moderation examples, especially ones naming companies and brands, are difficult to come by. Often, moderation has to be deduced through logical extension and extrapolation: if Company A is doing social media moderation, Company B must be doing it as well. Or, if Brand P has a presence on social media and generates a lot of positive comments, that is only plausible if it moderates. Attributing social media moderation to a particular company, based either on evidence or acknowledgment, may not be easy.

That said, companies doing moderation, whether themselves or through an outsourced arrangement, are not indulging in illegal activities that need to be kept under wraps. Under the commonly understood practice of giving fair and reasonable notice, all platforms, and all communities created on a particular social media platform, start by publishing their rules of engagement. The generally accepted practice requires participants to sign up to become part of the platform or community, one of the steps being to agree to abide by the rules put in place by the owner of that space. If they do not like the rules, they are free to stay away and not participate. Having accepted the rules and signed up, they are expected to abide by what they signed up for. And if they step out of line, the site owners are within their rights to moderate the content in a manner conducive to achieving the objectives with which they set up the space and its rules.

Social media moderation examples are perhaps best understood in the context of the different ways in which moderation can be done. Some claim that not moderating social media is also a decision and hence should be counted as an example of moderation.
There is perhaps some merit in the contention, and it may be counted as one in some write-ups, but for our purposes, having noted it as a possible decision many companies could take, we will discuss the types of social media moderation that might be classified as 'active.'

Counted among the top three BPO service providers in the world in its chosen area of data and back-office services, oWorkers is an expert in moderating social media communities on behalf of clients across industries from all over the world. Clients, especially from the US and Western Europe, report saving almost 80% of their pre-outsourcing cost after moving the work to oWorkers.
Pre-moderation

As the name suggests, in this method content is checked before it is released to the public. This could be viewed as the classic method of authorization before any action is taken. Accuracy and control take precedence over engagement and user experience. User-generated content (UGC) is placed in a queue that is fed to the moderating resources, who pick up one item at a time and take a view on its suitability for the website. If found suitable, it is released and becomes available to users.
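The queue-and-review flow described above can be sketched in a few lines of Python. This is purely illustrative; the function names and the blocklist rule are hypothetical assumptions, not any platform's actual policy or API:

```python
from collections import deque

# Hypothetical list of disallowed terms; real rule sets are far richer.
BLOCKED_TERMS = {"spam", "scam"}

def is_suitable(post: str) -> bool:
    """Stand-in for the moderator's judgement: reject posts containing blocked terms."""
    words = post.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

def pre_moderate(submissions):
    """Hold every submission in a queue; publish only items the reviewer approves."""
    queue = deque(submissions)   # user-generated content waiting for review
    published = []
    while queue:
        post = queue.popleft()   # moderators pick up one item at a time
        if is_suitable(post):
            published.append(post)  # released and now visible to users
    return published

print(pre_moderate(["great product", "buy now scam", "helpful tip"]))
```

The key property of pre-moderation is visible here: nothing reaches the published list until it has passed review, so the unsuitable item never becomes visible at all.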
This allows website owners to exercise control over the content. They can ensure that unsuitable content is not available to users.
It degrades user experience. When someone participates on a social media platform, the objective is to facilitate the free flow of communication and ideas. Holding back content created by users stifles the experience and could make for a dull, dead platform if user expectations are different. Social media moderation examples that rely on this method also incur a significant cost, as human resources are required in adequate numbers to review each piece of content in good time.
This could be the appropriate method for platforms that are sensitive to abusive content, such as ones directed at young audiences, or where undesirable or libelous content could damage the brand or the subject being promoted, such as a celebrity. It can also be used where the expected turnaround time is not fast, allowing for review and authorization without upsetting the community. Many startups offering educational services and content are understood to rely on pre-moderation to ensure that offensive or malicious content does not get through. The content on their platforms is usually meant for access with or without subscription, and there is generally no immediacy required by the submitter for it to be published.
The oWorkers advantage
As a preferred employer in all the locations it works in, oWorkers receives a steady stream of walk-in talent. It gets to choose resources based on the requirement of different client projects, including for pre-moderation which often requires resources in large numbers.
Post-moderation

In this method, content is not held back for review prior to being published. The participating user gets the satisfaction of seeing her UGC become visible as soon as she presses the SEND or UPLOAD button. What she might not know is that while the content is visible, a copy has also been placed in a review queue, where a moderator takes a call, as in the pre-moderation process, on whether to permit the content to remain visible. In a way, this works in reverse of pre-moderation: all content is assumed to be kosher unless found otherwise, while it is the other way round in pre-moderation.
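The publish-first, review-later flow can be sketched as follows. Again this is an illustrative sketch under assumed names; the acceptability check stands in for a human moderator's call:

```python
def post_moderate(submissions, is_acceptable):
    """Publish immediately; review a copy afterwards and remove what fails the check."""
    visible = list(submissions)       # content goes live as soon as it is submitted
    review_queue = list(submissions)  # a copy is queued for moderator review
    for post in review_queue:
        if not is_acceptable(post):   # the moderator takes a call after the fact
            visible.remove(post)      # unacceptable content is taken down
    return visible

# Hypothetical acceptability rule: anything containing "abusive" is removed.
live = post_moderate(["nice photo", "abusive rant"], lambda p: "abusive" not in p)
print(live)
```

Note the trade-off the sketch makes explicit: the rant is visible for the interval between publication and review, which is exactly the window of risk discussed below.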
This method promotes user engagement by permitting a free flow of communication and exchanges without the need for an authorization in each case.
The human cost continues to be an issue. While content does become visible immediately, the need for a quick review does not go away; if anything, it becomes more urgent because the content is now visible. Unacceptable content needs to be removed as soon as possible to limit the damage it can do. The other downside is that with technology in everyone's hands, inappropriate content, even if accessible only for a short period, can be copied and spread very quickly. It may become difficult to limit the damage once that has happened.
It is understood that YouTube relies upon a form of post-moderation to maintain the quality of content available on its platform.
The oWorkers advantage
oWorkers operates with employed staff, not the contractors and freelancers some of its competitors rely on, which makes for stable, accountable moderation teams. It regularly receives scores of 4.65 and above from past and present employees on platforms like Glassdoor, and pays social taxes for its staff in the locations it operates from.
Reactive moderation

In reactive moderation, the website or community owner does not proactively seek to identify and eliminate offensive content, and instead relies upon feedback from the participating community itself. Content that is identified and flagged by one or more members of the community is then reviewed and adjudicated upon. The members flagging the content may also need to be kept informed of its fate, and of the reasons for whatever action is taken, especially if the content is not acted upon as they suggested. Typically, the platform makes available a button that can be clicked to report a particular piece of content.
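The flag-and-adjudicate loop can be sketched like this. The class and method names are hypothetical, chosen only to mirror the steps described above (members report, the owner decides, reporters are told the outcome):

```python
from collections import defaultdict

class ReactiveModerator:
    """Sketch of a flag-and-review flow: the community reports, the owner adjudicates."""

    def __init__(self):
        self.flags = defaultdict(set)  # post -> members who clicked "report"

    def report(self, post: str, member: str):
        """A member presses the report button on a piece of content."""
        self.flags[post].add(member)

    def adjudicate(self, post: str, keep: bool) -> dict:
        """Decide a flagged post's fate and record who should be told, and what."""
        reporters = sorted(self.flags.pop(post, set()))
        outcome = "kept" if keep else "removed"
        return {"post": post, "outcome": outcome, "notified": reporters}

mod = ReactiveModerator()
mod.report("rude comment", "alice")
mod.report("rude comment", "bob")
print(mod.adjudicate("rude comment", keep=False))
```

The `notified` list captures the point made above: reporters are informed of the decision, which matters most when the owner decides to keep content a member flagged.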
Among social media moderation examples, this method prioritizes the engagement that happens on the platform over the need for control. It also accords respect to members, assuming they are engaged and mature enough to take the initiative to flag content that may be inimical to the interests of the group, without taking undue umbrage at its presence. Since each piece of content need not be adjudicated upon, the resources required for the activity reduce substantially.
As in any case where pre-moderation is not being done, there is a risk of malicious content making it to the website and being rapidly copied and spread, leading to bad press for the community. Also, the responsibility for publishing standards, and for acting on flagged items, still remains with the owners of the website.
Facebook is understood to use reactive moderation on its platform, reviewing and acting upon content reported by users. This may not be the only method it uses; Facebook is also believed to apply different methods of moderation to different types of content.
The oWorkers advantage
Its ability to attract talent also gives oWorkers the flexibility to provide short-term resources to handle peaks and troughs in client volumes, which can often happen in any form of moderation. oWorkers can hire an additional 100 resources within 48 hours. This enables clients to save big on account of not having to hire and retain resources only to meet peak or unexpectedly high volumes.
Distributed moderation

This could be considered a variation of reactive moderation. The website owners do not engage in primary moderation, or moderation of their own volition. Instead, the community is set up so that members provide feedback on each piece of published content in the form of a rating. Content that gets consistently high ratings is the most visible to users, while lower-rated content is pushed further and further down. In time, the lowest-ranked content can become almost invisible. From the perspective of identifying and removing offensive content on sensitive websites, this is not much used among social media moderation examples.
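The rating-driven visibility described above amounts to sorting content by community score. A minimal sketch, with hypothetical post data and a simple average-rating rule standing in for a real ranking algorithm:

```python
def rank_by_rating(posts):
    """Order content by average community rating; the lowest-rated sinks to the bottom."""
    def avg(ratings):
        return sum(ratings) / len(ratings) if ratings else 0.0
    return sorted(posts, key=lambda p: avg(p["ratings"]), reverse=True)

feed = [
    {"title": "troll post", "ratings": [1, 1, 2]},
    {"title": "useful answer", "ratings": [5, 4, 5]},
    {"title": "ok comment", "ratings": [3, 3]},
]
for post in rank_by_rating(feed):
    print(post["title"])  # highest-rated first; the troll post drops to the bottom
```

Nothing is ever deleted in this scheme; poorly rated content simply becomes hard to find, which is why it is a weak tool for actually removing offensive material.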
This method also prioritizes the engagement that happens on the platform over the need for control. It accords respect to members, assuming they are engaged and mature enough to take the initiative to rate content so that it finds its correct place in the database. As there is no role for a moderator, with content being rated up or down by users, it places little strain on the community owner's finances.
As in any case where pre-moderation is not being done, there is a risk of malicious content making it to the website and being rapidly copied and spread, leading to bad press for the community. Without oversight by the owner, content could also slide out of control if users do not take an active part in managing and rating it.
An example could be Slashdot. News stories submitted by users and editors are evaluated by users and editors. Comments can also be added to each story.
The oWorkers advantage
Operating out of three distinct geographies, with its avowed policy of hiring a multicultural workforce, oWorkers has developed, and now offers to clients, the ability to process in 22 languages. After all, social media interaction can take place from any part of the world in any language.
Automated moderation

The story is not complete without automated moderation. As volumes have risen and the world has become more divided and dangerous, automated moderation holds out hope for many platforms and companies. It is hoped it will be able to pre-moderate sensitive content at a cost, and in volumes, that human beings cannot. One of the challenges has been that much of the uploaded content is unstructured: photos, audio, video, or even unformatted text. Traditional technologies have been unable to handle it, but the growth of Artificial Intelligence (AI) holds promise that it may eventually do much of the heavy lifting. At present, automated moderation is largely limited to applying filters in one form or another.
As volumes rise, automation often becomes a savior for businesses, handling much larger volumes at much lower running costs. It is also expected to apply standards strictly and consistently, where humans sometimes fail.
It has not been able to develop the intuition and fine sensibilities of the human brain. While it is expected to do the majority of the work, it is also expected to defer to its human masters when it is unable to take a decision.
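The filter-plus-human-fallback pattern can be sketched as a three-way decision. The word lists here are hypothetical placeholders; a real system would use trained classifiers, but the escalation logic is the point:

```python
def automated_moderate(post: str) -> str:
    """Approve or reject automatically; escalate to a human when the filter is unsure."""
    blocked = {"slur"}       # hypothetical terms the filter rejects outright
    suspicious = {"attack"}  # hypothetical terms that need human judgement
    words = set(post.lower().split())
    if words & blocked:
        return "reject"
    if words & suspicious:
        return "escalate"    # the machine defers to its human masters
    return "approve"

for text in ["lovely day", "heart attack statistics", "a slur here"]:
    print(text, "->", automated_moderate(text))
```

The middle case shows why escalation matters: "heart attack statistics" is harmless, but a crude filter cannot tell, so a human gets the final call.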
Automation-based examples may be few at this point, but they are expected to become commonplace once the technologies become reliable. Automated moderation could be used across the industry spectrum, with some customisation for each situation in which it is applied.
The oWorkers advantage
Operating from secure facilities, oWorkers is ISO (27001:2013 & 9001:2015) certified and GDPR compliant. Its time-tested partnerships with technology companies provide it access to the latest processing technologies for client work.
With all three sites capable of running 24×7 operations, oWorkers can not only provide quick turnaround but also offer business continuity by splitting volumes across two or more locations. 85% of its clients are technology companies, including some unicorn marketplaces. It is led by a team with over 20 years of hands-on experience in the industry, and will have several social media moderation examples to share with prospective clients such as yourself.