Facebook is using AI to help its content moderators

The company previously stated that it took action against 9.6 million pieces of content in the first quarter of 2020, a significant increase over the 5.7 million in the quarter prior. While some of those posts are obvious enough to be blocked or removed automatically, the rest enter a queue for human moderators to evaluate. Reviewing potentially harmful content can take a toll on moderators' mental health, and earlier this year Facebook settled a lawsuit brought by about 11,000 of its moderators with a $52 million payout. It also promised to update its content moderation software, muting audio by default and showing videos in black and white.
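The article doesn't detail how Facebook's system actually works, but the split it describes, automatic removal for clear-cut violations and a review queue for everything else, maps onto a simple triage pattern. Below is a minimal, hypothetical Python sketch of that pattern; the confidence threshold, the `violation_score` and `predicted_reach` signals, and the score-times-reach priority are illustrative assumptions, not Facebook's actual implementation.

```python
import heapq
from dataclasses import dataclass, field

AUTO_REMOVE_THRESHOLD = 0.98  # hypothetical confidence cutoff, not Facebook's


@dataclass(order=True)
class ReviewItem:
    priority: float                       # lower value = reviewed sooner
    post_id: str = field(compare=False)   # excluded from heap ordering


def triage(posts, queue):
    """Auto-remove near-certain violations; queue the rest for humans.

    `posts` is an iterable of (post_id, violation_score, predicted_reach)
    tuples: violation_score is a classifier's estimate that the post
    violates policy, and predicted_reach estimates how many users would
    see it. Both signals are assumptions made for this sketch.
    """
    removed = []
    for post_id, score, reach in posts:
        if score >= AUTO_REMOVE_THRESHOLD:
            removed.append(post_id)  # obvious enough to block outright
        else:
            # heapq is a min-heap, so negate the priority score to
            # surface the most harmful, most visible posts first
            heapq.heappush(queue, ReviewItem(-(score * reach), post_id))
    return removed


queue: list[ReviewItem] = []
removed = triage(
    [("a", 0.99, 10_000), ("b", 0.60, 50_000), ("c", 0.70, 1_000)],
    queue,
)
print("auto-removed:", removed)                                 # ['a']
print("next for human review:", heapq.heappop(queue).post_id)   # 'b'
```

Under this toy scoring, a borderline post projected to reach many users outranks a slightly more suspect post that few would see, which is one plausible way to spend limited human review time where it matters most.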

With Facebook continuing to be a primary forum through which many people around the world communicate with their friends and family, its ability to react quickly to fake and hateful content is crucial to keeping the platform safe.

Cherlynn Low