Facebook puts the priorities of its human moderators in the hands of artificial intelligence



When a user reports content on Facebook as suspicious, for whatever reason, a moderation process is triggered. The post is either moderated automatically, when the system detects a clear violation, or it joins a queue of content to be reviewed by human moderators.



That second process is about to change with Facebook's latest step in its push toward automation: machine learning will now sort that queue, placing the most important reports at the top. A post reported for spam, for example, will receive the lowest priority.






The use of artificial intelligence will go beyond surfacing the most important content, though without details on exactly how, weighing virality, severity and the probability of a rule violation




Artificial intelligence as a complement to human moderation







The Verge reports that, in the future, the use of artificial intelligence will go beyond prioritizing the most important content. A set of algorithms will rank reported posts based on their virality, their severity and the probability that they violate the platform's rules.



The idea is that Facebook's 15,000 human moderators deal first with the content that demands urgent attention because it is potentially more harmful, leaving for later the posts that, given their characteristics, can wait. Content with real-world impact, such as posts related to terrorism, will be prioritized.
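Facebook has not published how these three signals are combined, but conceptually this is a scoring function feeding a priority queue. Below is a minimal sketch of that idea; the weights, signal ranges and names are purely illustrative assumptions, not anything Facebook has disclosed:

```python
import heapq

# Purely illustrative weights: Facebook has not disclosed how the three
# signals are actually combined or scaled.
WEIGHTS = {"virality": 0.3, "severity": 0.5, "violation_probability": 0.2}

def priority_score(virality, severity, violation_probability):
    """Collapse the three signals (each normalized to 0..1) into one score."""
    return (WEIGHTS["virality"] * virality
            + WEIGHTS["severity"] * severity
            + WEIGHTS["violation_probability"] * violation_probability)

queue = []  # min-heap, so scores are negated to pop the highest score first

def enqueue(post_id, **signals):
    heapq.heappush(queue, (-priority_score(**signals), post_id))

# A terrorism-related post: severe, viral and very likely a violation.
enqueue("post-1", virality=0.90, severity=0.95, violation_probability=0.80)
# A spam report: low severity, so it sinks toward the bottom of the queue.
enqueue("post-2", virality=0.20, severity=0.10, violation_probability=0.60)

_, next_for_review = heapq.heappop(queue)
print(next_for_review)  # -> post-1: the most harmful report reaches humans first
```

Under a scheme like this, a low-severity spam report naturally falls to the bottom of the queue, which matches the behavior the article describes.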




"All content violations will continue to receive substantial human review, but we will use this system to better prioritize."















And although artificial intelligence keeps gaining prominence and taking on tasks, Facebook has been careful to stress that content violations "will continue to receive substantial human review". A wise approach, given the occasionally absurd problems its algorithms cause, and one that matches the conclusions reached by other platforms, such as YouTube, that have also used algorithms to moderate.



During the first months of the pandemic, Google's video platform found that automatic moderation was causing problems. It was being overzealous, and about 160,000 videos were removed for no good reason. As a result, humans went back to making nuanced, case-by-case decisions. AI has its limitations too.


