Humans at the center of effective digital defense

Much user-generated content (UGC) is benign: adorable animals, envy-inspiring vacation shots, enthusiastic reviews of bath pillows. Some of it, however, is dangerous: violent imagery, disinformation, harassment, and other harmful material. In the U.S., four in 10 Americans report they’ve been harassed online. In the U.K., 84% of internet users fear exposure to harmful content.
Consequently, content moderation, the monitoring of UGC, is essential to online experiences. Tarleton Gillespie, a sociologist, writes in his book Custodians of the Internet that digital platforms must operate in spite of the “utopian idea” of an open internet. No platform, he notes, is entirely free of rules, and none can avoid enforcing them. “Platforms must, in some form or another, moderate: both to protect one user from another, or one group from its antagonists, and to remove the offensive, vile, or illegal–as well as to present their best face to new users, to their advertisers and partners, and to the public at large.”

Content moderation is used to address a wide range of content across industries. Skilled moderation helps organizations keep their users safe and their platforms functioning properly. The best-practice approach pairs increasingly sophisticated and precise technical solutions with human skill and judgment.
Content moderation is a rapidly growing industry, critical to all individuals and organizations who gather in digital spaces (more than 5 billion people). According to Abhijnan Dasgupta, practice director specializing in trust and safety (T&S) at Everest Group, the industry was valued at roughly $7.5 billion in 2021, and experts anticipate that number will double by 2024. Gartner research suggests that nearly one-third (30%) of large companies will consider content moderation a top priority by 2024.
Content moderation: More than social media
Content moderators remove hundreds of thousands of pieces of problematic content every day. Facebook’s Community Standards Enforcement Report, for example, documents that in Q3 2022 alone, the company removed 23.2 million instances of violent and graphic content and 10.6 million instances of hate speech, in addition to 1.4 billion spam posts and 1.5 billion fake accounts. Social media is the best-known example of UGC, but many industries rely on UGC for everything from product reviews to customer service interactions.

“Any site that allows information to come in that’s not internally produced has a need for content moderation,” explains Mary L. Gray, a senior principal researcher at Microsoft Research who also serves on the faculty of the Luddy School of Informatics, Computing, and Engineering at Indiana University. Telehealth, gaming, and e-commerce, as well as the public sector and government, all rely heavily on content moderation.
In addition to removing offensive content, content moderation can detect and eliminate bots, identify and remove fake user profiles, address phony reviews and ratings, delete spam, police deceptive advertising, mitigate predatory content (especially content that targets minors), and facilitate safe two-way communications in online messaging systems. Fraud is a major concern, especially on e-commerce platforms. “There are many bad actors and scammers trying to sell fake products, and there’s a big problem with fake reviews,” says Akash Pugalia, global president of trust & safety at Teleperformance, which provides non-egregious moderation support for global brands. “Content moderators help ensure products follow the platform’s guidelines, and they also remove prohibited goods.”
