In a tacit admission that its artificial intelligence still lags behind human judgment, Google said it would increase the number of people monitoring YouTube for offensive and extremist content to 10,000.

YouTube has faced a revolt by advertisers over ads paired with disturbing videos, such as those made by hate groups and religious extremists. Subsequent revelations that YouTube was offering cartoons featuring what the BBC called “animated violence and graphic toilet humor” added fuel to the controversy.

And it turned out that some cute videos posted by kids were attracting trolls of the worst kind, drawing “hundreds of pedophiliac comments, including encouragement to do lewd acts and links to child-abuse content,” according to the Wall Street Journal.

On Monday, YouTube CEO Susan Wojcicki said in a blog post that the company would continue increasing the number of people reviewing YouTube content to more than 10,000 next year. It was not immediately clear whether those would be contract workers or Google employees.

YouTube is more than a cesspool for haters and kooks, and Wojcicki used the post to remind the world of that.

“Our open platform has been a force for creativity, learning and access to information,” she wrote.

However, Wojcicki wrote, “some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.”

To combat the problem, YouTube has tightened its content policies, added to “enforcement teams” and invested in machine-learning technology, she wrote.

YouTube’s content gatekeepers have manually reviewed almost 2 million videos for “violent extremist content” since June, Wojcicki wrote.

“Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”