WASHINGTON, Feb. 2, 2023 — The sheer amount of online content created daily is likely to drive platforms to increasingly rely on artificial intelligence for content moderation, making understanding the technology’s limitations critical, according to an industry expert.
Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives, so even companies with a “speech-maximizing set of values” are likely to find some measure of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with technology companies to develop online trust and safety products, policies and processes.
If a piece of online content has the potential to result in hundreds of thousands of dollars in legal fees, Feerst said, the company “has a huge incentive to err on the side of removing things.” Even outside the scope of legal liability, if certain content will alienate large numbers of users or advertisers, companies have a financial incentive to remove it.
However, the main challenge facing content moderation is the overwhelming volume of user-generated content: on average, 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube every day.
“The full cost of running a platform includes making millions of speech judgments a day,” Feerst said.
“If you think about the enormity of that cost, you quickly get to the point of: even if we were doing very skilled outsourcing with great precision, we would need automation to make the number of day-to-day judgments required to process all the speech that everyone puts on the internet, and all the conflicts that arise,” he said.
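To make that cost concrete, here is a rough back-of-envelope sketch in Python using the volume figures cited above; the ten-seconds-per-review and eight-hour-shift figures are illustrative assumptions, not numbers from the article or the event.

```python
# Rough estimate of the human labor implied by the volumes cited above.
# The item counts are the article's figures; the review time and shift
# length are invented assumptions for illustration only.

DAILY_ITEMS = 500_000_000 + 700_000_000  # tweets + Facebook comments per day
SECONDS_PER_REVIEW = 10                  # assumed time for one human judgment
WORK_SECONDS_PER_DAY = 8 * 60 * 60       # one moderator's full shift

moderators_needed = DAILY_ITEMS * SECONDS_PER_REVIEW / WORK_SECONDS_PER_DAY
print(f"Full-time moderators needed for text items alone: {moderators_needed:,.0f}")
# Roughly 417,000 moderators per day under these assumptions, before any
# video review, which is why platforms lean on automation for first-pass calls.
```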
Automated moderation is not just a theoretical future question. At a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech we remove is done by an AI rather than by a person… and I think it’s 98 or 99 percent of the terrorist content.”
Handling subjective content
But while AI can help manage the volume of user-generated content, it can’t solve one of moderation’s main problems: other than a limited amount of obviously illegal material, most decisions are subjective.
Feerst said that much of the controversy surrounding automated content moderation mistakenly frames subjectivity problems as accuracy problems.
For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extralegal rules, there is often room for widespread disagreement over whether any particular piece of content violates them.
“Artificial intelligence cannot solve the problem of human subjective disagreement,” Feerst said. “All it can do is compound that problem more efficiently.”
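One standard way to quantify the kind of subjective disagreement Feerst describes is inter-annotator agreement between human reviewers. The sketch below computes Cohen’s kappa for two hypothetical reviewers; the labels are invented for illustration and are not drawn from any real moderation data.

```python
# Two human reviewers label the same ten borderline posts
# (1 = violates policy, 0 = allowed). Labels are invented for illustration.
# Cohen's kappa measures agreement beyond chance; a model trained on either
# reviewer's labels scales up that reviewer's subjective judgment.

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal rate of "violates" labels.
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

reviewer_1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
reviewer_2 = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]

raw = sum(x == y for x, y in zip(reviewer_1, reviewer_2)) / 10
print(f"Raw agreement: {raw:.0%}")                               # 60%
print(f"Cohen's kappa: {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # 0.20
```

Under these invented labels, the reviewers agree only slightly more often than chance, which is the disagreement an automated system inherits and then applies at scale.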
This compounding becomes a problem when AI models replicate and amplify human biases, a concern that was the basis of the FTC’s June 2022 report warning Congress to avoid overreliance on artificial intelligence.
“No one should treat AI as the solution to the spread of harmful content online,” said Samuel Levine, director of the Federal Trade Commission’s Bureau of Consumer Protection, in a statement announcing the report. “Fighting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both useful and dangerous — will take these problems off our hands.”
The FTC report cited multiple studies exposing bias in automated models for detecting hate speech, often as a result of training on unrepresentative and discriminatory datasets.
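The mechanism behind those findings can be sketched in a few lines: if a skewed training sample over-labels benign posts containing some group marker as hateful, even a trivial model learns that marker as a proxy for hate. Everything in this toy sketch (the data, the “MARKER” token and the single-feature classifier) is invented for illustration.

```python
# Toy illustration of dataset bias in hate-speech detection: a classifier
# trained on a skewed sample learns a harmless group marker as a proxy
# for "hateful." All data and the model itself are invented.

from collections import Counter

# Skewed training set: posts containing a harmless dialect marker
# ("MARKER") were disproportionately labeled hateful by annotators.
train = [
    ("MARKER hello friends", 1), ("MARKER good morning", 1),
    ("MARKER see you later", 1), ("you are awful", 1),
    ("have a nice day", 0), ("lovely weather today", 0),
    ("great game last night", 0), ("MARKER congrats", 0),
]

# "Train": estimate the empirical hate rate of each word in the sample.
word_hate, word_total = Counter(), Counter()
for text, label in train:
    for word in set(text.split()):
        word_total[word] += 1
        word_hate[word] += label

def flags_as_hateful(text):
    # Flag the post if any word's empirical hate rate exceeds 0.5.
    return any(word_hate[w] / word_total[w] > 0.5
               for w in text.split() if word_total[w])

# Two benign posts differing only in the dialect marker:
print(flags_as_hateful("MARKER have a nice day"))  # True: a false positive
print(flags_as_hateful("have a nice day"))         # False
```

The false positive comes entirely from the unrepresentative labels, not from anything hateful in the post, which is the pattern the cited studies document in real detection models.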
As moderation processes become increasingly automated, Feerst predicted, “the trend of those problems being exaggerated and less recognizable seems very likely.”
Given these risks, Feerst emphasized the urgency of understanding, and then working around, the limitations of AI, noting that the demand for content moderation is not going away. To some extent, conflict over speech is simply part of the human condition, he said: “you’re not going to reduce it to zero.”