TikTok is bringing in external experts in Europe in areas such as child safety, youth mental health and extremism to form a Safety Advisory Board to help moderate content in the region.

The move, announced today, follows an emergency intervention by the Italian data protection authority in January, which ordered TikTok to block users whose age it cannot verify after the death of a girl who was reported by local media to have died of asphyxiation as a result of her participation in a ‘blackout challenge’ on the video-sharing platform.

The social media platform has also been the target of a series of complaints coordinated by EU consumer protection agencies, which last month published two reports detailing a number of alleged breaches of the bloc’s consumer protection and privacy rules – including specific child safety concerns.

“We are constantly reviewing our existing features and policies, and innovating to take bold new steps to prioritize safety,” TikTok writes today, putting a positive spin on the need to improve safety on its platform in the region.

“The Council will bring together leaders from academia and civil society from across Europe. Each member brings a different, fresh perspective on the challenges we face, and members will provide subject matter expertise as they advise on our content moderation policies and practices. Not only will they help us develop forward-looking policies that address the challenges we face today, but they will also help us identify emerging issues that will affect TikTok and our community in the future.”

This is not the first such advisory body TikTok has launched. A year ago, it announced a US Content Advisory Council, after coming under close scrutiny from US lawmakers concerned about the spread of election misinformation and broader data security issues, including accusations that the Chinese-owned app was censoring content at the behest of the Chinese government.

But the first people appointed to TikTok’s European content moderation advisory body suggest its regional focus is trained more firmly on child safety/youth mental health and on extremism and hate speech, reflecting some of the main areas where the platform has drawn the closest scrutiny from European lawmakers, regulators and civil society so far.

TikTok has appointed nine people to its European council (listed here) – initially bringing in external expertise in tackling bullying, youth mental health and digital parenting; online child sexual exploitation/abuse; extremism and deradicalization; and anti-bias/discrimination and hate crime – a cohort it says will grow as it adds more members to the body (“from more countries and different areas of expertise to support us in the future”).

TikTok is also likely to have an eye on incoming pan-EU regulations for platforms operating in the region.

European lawmakers recently presented a legislative proposal that aims to increase the accountability of digital service providers for the content they distribute and monetize. The Digital Services Act, which is currently in draft and making its way through the EU’s co-legislative process, will regulate how a wide range of platforms must act to remove explicitly illegal content (such as hate speech and child sexual exploitation material).

The Commission’s DSA proposal stopped short of setting specific rules for platforms to tackle a broader range of harms – such as young people’s mental health – which, by contrast, the UK proposes to address in its own plan to regulate social media. However, the envisaged legislation aims to strengthen accountability around digital services in various ways.

For example, it contains provisions that would require larger platforms – a category TikTok would most likely fall into – to provide data to external researchers so they can study the societal impacts of these services. It is not hard to imagine such a provision leading to groundbreaking (independent) research into the mental health effects of attention-grabbing services. The prospect, then, is that platforms’ own data could end up translating into negative publicity for their services – that is, if they fail to create a safe environment for users.

Ahead of that oversight regime coming into force, platforms have a growing incentive to step up their engagement with civil society in Europe so they are better placed to skate to where the puck is headed.


