The Dangers of Crowdsourcing Content Moderation

Crowdsourcing has been around for hundreds of years, but the term itself was coined by Jeff Howe in 2006, who defined it as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.” Since then, the field of crowdsourcing has grown significantly.

With the rise of the internet, social media, and user-generated content, the need for content moderation has never been greater. And neither have the challenges that moderation presents.

Many organizations are taken in by the lure of crowdsourced moderation offered by groups of individuals at irresistibly low prices. Other companies have less-than-ideal experiences with artificial intelligence (AI)-based moderation and turn to crowdsourced moderation out of frustration. While crowdsourcing can be excellent for very specific tasks, it is not an effective approach to content moderation.


While AI moderation has its limits and is occasionally blamed for filtering harmless content, crowdsourcing for human moderation is not the solution you should be turning to if you want to protect your audience and your brand.

Today, we’ll look at the benefits of using actual humans for image, video, and text moderation, the risks associated with crowdsourced moderation, and the most effective means of moderating content.

Let’s get started.

The Benefits of Using Human Content Moderation

Automated moderation decisions are immediate, making AI an appealing moderation option. But while AI can lighten the load for humans, it still struggles to distinguish contextually nuanced but harmless content from legitimately offensive content. As a result, AI has been known to occasionally reject harmless content or even miss offensive submissions.

Artificial intelligence has its shortcomings, so relying exclusively on AI for content moderation may lead to the rejection of appropriate user-generated content that your creators have worked hard to produce. Simply put, UGC moderation on the web requires human effort as well.

Human moderators who are highly trained to catch violations that fall into gray areas can use their expertise to make final content decisions that align with your brand standards. And you’re not likely to find these human teams on a crowd-based talent platform.

The Dangers Associated with Crowdsourcing Moderation

So your company has decided to use a human-based moderation solution, at least in part. And you’re considering employing a crowdsourced solution to moderate UGC on your website or app.

Crowdsourced moderation, the practice of sourcing labor from a large online group of moderators, carries several dangers that can threaten your organization’s reputation and expose your audience to harmful content. The following are some of the most common risks your brand may encounter by crowdsourcing:

Anonymous moderators and lack of accountability

Crowdsourced moderators are usually anonymous, frequently working under ambiguous usernames. Since little is known about their identity, a crowdsourced moderator often has little to no accountability for the work they are tasked with. This can leave moderators with little motivation to be thorough in their reviews, approving or rejecting images at will.

For example, a crowdsourced moderator may rush through the content in front of them in an effort to quickly clear the image moderation tasks from their work queue. In the process, they can easily report an acceptable image as offensive or let a harmful post slip through the cracks.

Failure to involve brand experts in moderation

It’s virtually impossible for crowdsourced workers to moderate according to your brand’s standards, as they are not familiar with your distinct brand criteria.

As a result, a crowdsourced moderator is not prepared to separate off-brand or offensive content from content that is acceptable. Nor will a moderator who is completely unfamiliar with your brand have enough experience to know what content best represents your brand’s voice and values.

A crowdsourced laborer may also have their own standards for what counts as sexually explicit or profane. If you leave moderation up to their interpretation of “profane” or “explicit,” you may suffer brand-damaging or even legal consequences.

No guarantee of unbiased moderation

The crowdsourced moderator may not have a clear understanding of your moderation criteria. Even if they do, there is no guarantee that the moderator will be unbiased. In the crowdsourcing arena, there are individuals who may intentionally accept or reject content based on their personal perspective, which may or may not align with your brand’s standards.

It is also not unheard of for competitors to pose as crowdsourced moderators in order to gain access to another brand’s website or app, and the anonymity of crowdsourcing makes this hard to detect until it’s too late. Beyond brand competitors, crowdsourcing entrusts confidential information and mission-critical tasks to individuals who may not have your best interests in mind.

Lack of content moderation training

Crowdsourced moderators are typically not specialists. Usually, they are workers who take on varying crowdsourced tasks, one of which may be the crucial job of approving or rejecting content uploaded by your users. With their lack of training, these moderators cannot be expected to properly enforce your brand’s specific criteria.

Their ability to differentiate bad content from good is primarily conceptual, not technical. By tasking crowdsourced laborers with moderation projects, you are letting individuals who may not be familiar with the technical aspects of UGC moderation apply their personal preferences and values to moderation decisions.

Privacy violations and image theft

There’s little stopping crowdsourced moderators from stealing your brand’s images for their own use or for distribution on other online platforms. Stolen UGC images can be uploaded to other websites, sent to individuals, or posted on social media.

Unfortunately, once your images are shared publicly, stopping their distribution is nearly impossible. This can also be a serious violation of your users’ privacy, resulting in irreversible damage. Privacy violations can not only damage your company’s reputation but also lead to legal consequences.

Failure to catch false positives or negatives

Perhaps your business decided to work with human moderators after hearing accounts of AI-based technology incorrectly flagging acceptable posts as offensive (false positives) or failing to catch untoward content (false negatives). If so, you’ll be disappointed to learn that both occur at an alarming rate when amateur moderators are assigned the task of approving or rejecting UGC.

The untrained laborers that make up most of the crowdsourcing workforce seldom have a background in the subject matter that they’re recruited to moderate, so it should come as no surprise that false negatives and false positives occur. In general, the results of crowdsourcing moderation are inconsistent, unreliable, and low-quality.

The Most Effective Means of Moderating Content

Many businesses rely on photos, videos, and text submitted by users to generate engagement, and are eager to do so in a way that is cost-effective. But asking crowdsourced moderators who are often anonymous, untrained, and unaccountable to protect your brand in an effort to save a buck can actually backfire.

Here’s why: In order to address quality issues, companies often submit each image to multiple crowdsourced moderators and take action based on their consensus. As a result, crowdsourcing this detailed work can actually cost more than partnering with a professional moderation company. Content moderation is not something to cut corners on, since the financial impact of unacceptable content surfacing on your site can be significant.
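As a rough illustration, here is a minimal sketch, in Python, of the majority-vote consensus approach described above; the function name and verdict labels are hypothetical, not any particular platform’s workflow. The point to notice is that every extra reviewer multiplies the per-item cost without adding any real expertise.

```python
# Minimal sketch of majority-vote consensus over crowdsourced verdicts.
# Verdict labels ("approve"/"reject") and the function name are hypothetical.
from collections import Counter

def consensus_verdict(verdicts: list[str]) -> str:
    """Return the most common verdict from several independent moderators."""
    return Counter(verdicts).most_common(1)[0][0]

# One image, three redundant reviews: the item is paid for three times,
# yet the outcome still depends on untrained, anonymous reviewers.
print(consensus_verdict(["approve", "reject", "approve"]))  # -> "approve"
```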

Fortunately, there are affordable turnkey services and comprehensive approaches for custom moderation projects available to companies seeking a budget-friendly moderation solution. And these solutions offer the most effective means of content moderation: a combination of trained professionals and artificial intelligence moderating UGC in real time.

To avoid the dangers associated with crowdsourcing, look for a content moderation partner that has a thorough training and quality control program for their in-house moderation team, as well as advanced AI capabilities for moderating videos and images. In addition, they should offer a robust text moderation solution, including an automated profanity filter accompanied by more complex text analysis tools designed to flag threats, abuse, criminal activity, and more.

Before your content moderation partner’s live teams review each submission, you will first leverage their AI solutions to remove any content that is obviously objectionable, such as images or videos containing nudity, hate speech, offensive gestures, weapons, and more, ensuring submissions are checked in near real time. An effective AI service should provide a score indicating the likelihood that content qualifies for each offense. You can then determine the action you wish to take based on that score.

For example, if an image returns a 90% nudity score, you would likely not allow it. If it scored 50%, you would want it to be further evaluated by a human team. These thresholds are adjustable, and you can refine them as you become more familiar with the AI’s results.
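As a concrete illustration, here is a minimal sketch in Python of that kind of threshold-based routing. The category names, score format, and threshold values are illustrative assumptions rather than any specific vendor’s API.

```python
# Minimal sketch of routing a submission based on per-offense AI scores.
# Categories, thresholds, and labels are hypothetical examples.

REJECT_THRESHOLD = 0.90  # scores at or above this are blocked automatically
REVIEW_THRESHOLD = 0.50  # scores at or above this go to the human team

def route_submission(scores: dict[str, float]) -> str:
    """Decide what to do with a submission given per-offense likelihood scores (0.0-1.0)."""
    worst = max(scores.values(), default=0.0)
    if worst >= REJECT_THRESHOLD:
        return "reject"        # obviously objectionable: remove without review
    if worst >= REVIEW_THRESHOLD:
        return "human_review"  # borderline: escalate to professional moderators
    return "approve"           # low risk: publish

print(route_submission({"nudity": 0.90, "weapons": 0.05}))  # -> "reject"
print(route_submission({"nudity": 0.50, "weapons": 0.10}))  # -> "human_review"
```

Adjusting the thresholds over time, as the article suggests, simply means updating the two constants as you learn how the AI scores behave on your content.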

After AI review, the next step is for your moderation partner’s professional human team to assess remaining images, videos, or text submissions based on your app or site’s very specific predefined policies and nuanced guidelines. Your partner should offer proper training and supervision of their teams to ensure quality results. It is recommended to have regular calls with your moderation partner to address any new threats or borderline content that needs further discussion or requires an update to the rules.

With AI and professional content moderation teams working in tandem, you now have the proper lines of defense to effectively protect and support your brand’s mission.

The Prioritization of Content Moderators’ Mental Health

Human moderators spend the majority of their workday scanning content to ensure that it aligns with your brand’s mission. In the process, they may be exposed to content that can be upsetting. This is a leading reason that many businesses choose to partner with professional moderation companies rather than perform moderation in-house or crowdsource it.

The mental health of moderators should be a consideration during the process of searching for a content moderation partner. Select a company that prioritizes their moderation team’s mental health, in addition to their overall working conditions. The best professional moderation agencies offer a comprehensive mental health program to anyone who will be moderating your platform’s content.

To be certain that a qualified partner is chosen, WebPurify suggests posing this series of questions to any prospective content moderation partner.

Conclusion: The Dangers of Crowdsourcing Content Moderation

If it’s important to your brand that your moderators are accountable, unbiased, and educated about the technical aspects of content moderation, then crowdsourcing for image moderation is NOT for you.

By partnering with a professional image moderation service staffed by highly trained content moderators working in a safe, controlled environment, you can be confident that images are not stolen, strict moderation criteria are maintained, and false positives and negatives are kept to a minimum. Trusting your brand to a combination of live teams and AI moderation rather than crowdsourced laborers will go a long way toward preventing mistrust among stakeholders and ensuring that your brand’s image is protected.
