According to a report by the Financial Times, Google has been working on a tool to help smaller businesses, such as start-ups, moderate extremist material when they might not have the resources to do so themselves.
The in-house project, worked on by Google’s Jigsaw division, which is tasked with challenging threats to open societies, has been developed in conjunction with the UN-backed Tech Against Terrorism.
Google says the initiative is designed to help moderators detect and remove potentially illegal content, including racist and other hateful comments, from a website.
The project has been made possible by a database of terrorist items provided by the Global Internet Forum to Counter Terrorism, founded by a collection of tech giants including Google, Meta, Microsoft, and Twitter.
It’s designed specifically to support smaller companies that are unable to afford the resources needed for effective moderation, be it large teams of workers or expensive AI tools.
It’s a tool that’s expected to be valuable at a time when extremists who have been banned from major networks are moving to smaller platforms to express their views. It also serves as a protective measure for companies responding to the EU’s Digital Services Act and the UK’s forthcoming Online Safety Bill, both of which will impose penalties on companies that fail to remove such content.
For now, it appears the tool will operate on an opt-in basis, meaning that companies whose primary intention is to harbour such messages can continue to do so, even in the face of potential fines.
It is believed that two (unnamed) companies will test out the code later this year, indicating that a full roll-out is still some time away.
Elsewhere, Meta has rolled out its own tool, which it calls Hasher-Matcher-Actioner (HMA). Like Jigsaw’s project, it has been designed to prevent the spread of hateful content, and it builds on the company’s existing video and photo moderation tools.