Many of the videos that violate TikTok’s policies will now be detected and removed through automation. Over the past year, the service has tested and refined systems that flag such content for removal. In the coming weeks, those systems will be rolled out in the United States and Canada, starting with posts that violate policies on the safety of minors, violence, graphic content, nudity, sex, illegal activity, and regulated goods. When the algorithm detects a violation, the system will remove the video immediately, and the user will have the opportunity to appeal. Users can also still flag videos for manual review. According to TikTok, automatic removal will be reserved only “for content categories with the highest degree of accuracy” of the company’s technology. Only one out of every 20 automatically removed videos was a false positive that should have stayed on the site, the company said, and TikTok cites “consistent requests to appeal videos’ removal” as a sign of progress toward improving its algorithms’ accuracy.
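To make the routing logic concrete, here is a minimal sketch of how such a triage step might look. This is not TikTok’s actual system: the category names, the 95% confidence threshold, and the function itself are illustrative assumptions only.

```python
# Hypothetical sketch (not TikTok's real pipeline): route a flagged video
# based on the classifier's predicted policy category and its confidence.
# Category names and the 0.95 threshold are assumptions for illustration.

HIGH_ACCURACY_CATEGORIES = {"minor_safety", "graphic_content", "nudity"}

def route_video(predicted_category: str, confidence: float) -> str:
    """Decide what happens to a video flagged by the classifier."""
    if predicted_category in HIGH_ACCURACY_CATEGORIES and confidence >= 0.95:
        # Automatic removal is reserved for categories where the model
        # is most reliable; the user can still appeal afterwards.
        return "auto_remove"
    # Everything else is queued for a human safety officer to review.
    return "manual_review"

print(route_video("graphic_content", 0.99))  # auto_remove
print(route_video("harassment", 0.99))       # manual_review
```

The key design choice mirrored here is that full automation applies only where accuracy is known to be high; ambiguous or lower-confidence categories fall back to human review.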
In TikTok’s view, automation should reduce its safety staff’s workload so that they can focus on content that requires more nuanced handling, such as videos involving bullying and harassment. An especially important function of the system is that it could reduce the number of potentially distressing videos that safety officers are forced to watch. Facebook, by comparison, has been accused of not doing enough to protect the wellbeing and mental health of the content moderators responsible for removing often disturbing posts.
Furthermore, TikTok is changing how users are informed when they are caught breaking the rules. The platform now tracks how often a user violates its policies and how severe those violations are, and users will find that information in the account updates section of their inbox. There, they can also see the consequences of their actions, including how long they will be suspended from posting or engaging with other people’s content.