
Job Cuts Loom for UK Content Moderators as TikTok Embraces AI Transition

Hundreds of jobs in the UK are in jeopardy after TikTok confirmed its intentions to reorganize its content moderation operations and relocate work to other regions in Europe.

The social media behemoth, boasting over a billion users globally, indicated that this action is part of a worldwide restructuring of its Trust and Safety division and underscores its increasing dependence on artificial intelligence (AI) for content moderation.

A spokesperson for TikTok stated: “We are progressing with a reorganization that we initiated last year to enhance our global operational framework for Trust and Safety, which involves consolidating our operations in fewer locations worldwide.”

The Communication Workers Union (CWU) criticized the decision, claiming TikTok is “prioritizing corporate profit over the welfare of employees and the public.”

John Chadfield, CWU National Officer for Technology, remarked: “TikTok employees have long expressed concern regarding the real-life repercussions of downsizing human moderation teams in favor of rapidly conceived, underdeveloped AI substitutes.”

He mentioned that the announcement arrives “precisely when the company’s workers are about to vote on gaining union recognition.”

TikTok justified the reductions, stating that the adjustments would enhance “efficiency and speed” while minimizing the amount of distressing content that human reviewers are exposed to. The company noted that 85 percent of rule-violating posts are already eliminated automatically by AI systems.

Affected employees in London’s Trust and Safety division – along with several hundred more in Asia – will be able to apply for other positions within TikTok and will receive preference if they meet the minimum criteria.

This restructuring is occurring as the UK tightens its regulation of social media platforms. The Online Safety Act, which became effective in July, enforces stricter mandates on technology companies to safeguard users and verify ages, with penalties reaching up to 10 percent of global turnover for non-compliance.

TikTok has rolled out new parental controls, including features to block specific accounts and oversee older teenagers’ privacy settings. However, the company continues to face backlash regarding child safety and data management practices. In March, the UK’s data regulatory body launched a “major investigation” into the platform.

TikTok asserted that its recommendation systems function under “rigorous and thorough protocols that ensure the privacy and safety of adolescents.”

The reductions underscore the escalating friction between efficiency and safety in the moderation of online content. While AI enables platforms to manage vast quantities of posts at scale, critics argue that human supervision is crucial to grasp context, subtleties, and emerging dangers.

For TikTok, this gamble occurs at a delicate moment. With regulators amplifying their scrutiny and unions mobilizing within the organization, the choice to cut human moderation heightens concerns about whether technology alone can safeguard users.


Jamie Young

Jamie is a Senior Reporter at Business Matters, contributing over a decade of expertise in UK SME business reporting.
Jamie possesses a degree in Business Administration and frequently engages in industry conferences and workshops.

When not covering the latest business news, Jamie is dedicated to mentoring emerging journalists and entrepreneurs to inspire the next generation of business leaders.

