When TikTok failed them, Kenyans began policing their own feeds
Nairobi, Kenya
From the moment Bereket Tsegay began working as a moderator for TikTok in Kenya, a hub for social media moderation in Africa, the job felt impossible. Each shift, he was tasked with reviewing several hundred videos that had been reported for violating the platform's guidelines. As images, many of them deeply disturbing, flashed across his screen, he had to make a split-second decision: leave it up or take it down.
Mr. Bereket was hired because he spoke Ethiopia's lingua franca, Amharic, but the videos in his queue came in dozens of African languages, most of which he didn't know.
If he didn't understand the audio and the visuals weren't suspect, Mr. Bereket says he usually just left the video on the site. That is, unless many users had reported it. Then he took the video down.
Why We Wrote This
Social media moderation is always an imperfect science. But it's especially challenging when machines and human moderators are asked to judge content in languages they don't understand.
It wasn't a very accurate way to judge, admits Mr. Bereket, who no longer works in the field. But "it is bound to happen ... because there are never enough moderators."
Everywhere in the world, social media moderation is an imperfect science. Machines and humans trawl through vast seas of content, making rapid subjective judgments. But the challenges are even bigger in places where both human and AI filters struggle to understand what's being said.
"We're talking about an algorithm, trained predominantly in English, being trusted to take down ... harmful content, while a huge percentage of TikTok users in Kenya are using TikTok in their mother tongue," says Mercy Mutemi, director of the Oversight Lab, a Kenyan legal advocacy group focused on technology.
An impossible task
The problem is not unique to Kenya. The first line of defense against harmful social media posts globally is artificial intelligence, which can be taught to flag rule-breaking content it sees or hears.
TikTok declined to comment about which languages it uses AI moderation for, but a spokesperson wrote in an email to the Monitor that "a combination of technology and human moderation" is employed "in many languages and dialects globally and we keep increasing them as the platform grows."
But even when it is used, AI moderation works well in only a few major global languages. In many others, dubbed "low-resource languages," the models just haven't been fed enough training data to accurately analyze content.
In those cases, responsibility for taking down "bad" posts falls mostly to people. And often, they simply cannot keep up.
In Africa, TikTok outsources most of its content moderation to Teleperformance, a French company operating out of Nairobi. Former employees who spoke to the Monitor say their starting salary was about $300 per month. They report that they review around 500 videos per shift, or about one every minute for nine hours straight. At that speed, they say, total accuracy is impossible.
Crowdsourced policing
Responding to this deficit, many Kenyans have begun to act as their own watchdogs. In a recent survey of Swahili-speaking social media users, some said they had reported a post to the platform at least once. "Mass reporting," where users encourage each other to flag the same account, is also common.
Pauline Onyango learned that firsthand. After graduating from high school in November 2024, she decided to pursue her dream of becoming famous on TikTok.
But months of dance routines, lip-sync clips, and comedic content yielded few new followers. Then she began posting vulgar videos about sex in her native language, Luo, and at last, the algorithm seemed to notice her.
Soon, Ms. Onyango says, she had more than 40,000 followers. "It felt really good," she recalls. "Many people cheered me on in the comments."
But she quickly attracted detractors, too, like popular Kenyan TikTok creator Duncan Onyonka. "Such ... videos, you cannot even sit in a room where there are many people and start watching," he says. "They will be shocked."
So he began making videos rebuking Ms. Onyango and asking his followers to report her account, Mama Mapenzi.
"You started well on TikTok, but you've recently started a trend of saying words as big as the heavens," he said in one video. "I wonder why the TikTok community has not checked how the guidelines work, leaving you to say such things as you wish."
The plea garnered over 700,000 views and more than 1,000 comments. Other users began posting similar videos about Ms. Onyango, and last October, TikTok closed her account.
Ms. Onyango says her account being shut down was a wake-up call. Salacious content was never really "her," she says, but rather a quick way to views and likes.
An imperfect solution
But experts say ordinary users aren't always the best judges of what does and doesn't belong on a social media site. For instance, they may push for content they don't like to be taken down, even if it doesn't actually violate the platform's rules. And crowdsourcing moderation can also feed into harmful social norms.
For example, when researchers at the Center for Democracy and Technology spoke to speakers of Tamil, a South Asian language, users who identified as LGBTQ+ or who belonged to a traditionally oppressed caste frequently complained that platforms let harassment against them go unchecked. Slurs "were not flagged or taken down ... despite the swift removal of words with similar connotations shared in different languages," the report found.
In a written statement, a TikTok spokesperson told the Monitor that the company doesn't tolerate bullying, harassment, and abuse, and that it takes down 90% of offensive content before it is even reported.
But when posts get taken down erroneously, it can feed concerns about censorship. These worries are common among speakers of low-resource languages. For instance, nearly 80% of Tamil-speaking social media users polled by the CDT said they are concerned about being silenced by platforms. Two-thirds of speakers of both Swahili and the South American language Quechua fear the same.
Jackson Busolo is one of them.
One morning in February 2025, "I just woke up and found [my TikTok] account gone," he recalls.
There was no explanation, but Mr. Busolo, who posts in Swahili, suspected it had something to do with his posts accusing Kenya's leaders of stealing funds from public coffers.
"I ask myself, and this is a question you should be asking yourself, what is this money doing?" he says in one such video.
Since TikTok hadn't given him a clear reason for the ban, Mr. Busolo decided to appeal, and one day, his account suddenly reappeared.
He never got an explanation for that either.