Alexandra Evans, Head of Safety Public Policy, Europe, TikTok: 'We're always working to identify ways in which we can do better'

TikTok has unveiled a new space designed to educate young users about the risks of dangerous online challenges, the fruit of a project that has long been in development at the Chinese social network, which considers this issue a priority. We spoke to Alexandra Evans, Head of Safety Public Policy for Europe at TikTok, who explains how this project came about, how the algorithm comes into play, and what resources the platform is using to combat these harmful challenges.

ETX Studio: You're launching a new space dedicated to dangerous challenges. What kind of content can we expect to see there?

Alexandra Evans: So to give you some background, we did some research last year with over 10,900 young people, teachers and parents to understand more about their awareness of, exposure to, and participation in online challenges and hoaxes -- not just on TikTok, but across all platforms -- and also what they wanted in terms of better advice, and where they felt the gaps were in understanding and navigating these challenges. The good news is [the young people] are not participating at really high levels. Only 0.3% said that they had taken part in a challenge they would categorize as really dangerous. But when we asked them what more they would like [in terms of help], they specifically said they want to know how far is too far. And that's really important because teenagers are only just developing their sense of self-regulation. They're still prone to risk taking, exploring their independence. It's a terrifying time as a parent, for sure, as one myself. So how can we respond to this declared need for understanding how to make a good choice between something that might be a bit risky but that you can do safely as long as you're thoughtful, and something that is definitely dangerous and that you should avoid, not share and just report to us?

So we worked with a prevention scientist from Harvard University, a child psychiatrist, and a group of fantastic NGOs and academics from around the world to develop a four-step process where we encourage young people to stop, think, decide and act. And we hope that this meets that need for a sort of framework for critical thinking. So in answer to your question, what will be on our safety center? The "stop, think, decide and act" process is front and center. But in addition, parents and teachers told us that they were worried about online challenges, but that they did not feel well equipped to have conversations. They didn't quite know how to broach this difficult subject. And so there's a dedicated section in there just for parents and educators and trusted adults as well, which I hope will be really helpful.

Why did you launch a space dedicated specifically to these dangerous challenges? 

I used to work for an NGO before I came to TikTok, and it was something that had really troubled the NGO community for a long time. We used to call it the Voldemort question. You know Voldemort, he who must not be named? The problem that we've had as NGOs was that we were being asked to give advice to parents and teens. But we also weren't sure whether or not we could name the challenge, because in naming the challenge, the negative aspect is that obviously you raise awareness and you may pique curiosity. But on the other side, it's really hard to have a conversation if you can't name the thing. So it started from there. I became aware that progress was being stunted by the fact that we didn't have a clear consensus on what best practice advice looked like. And so it's wonderful to be somewhere like TikTok where we have resources, but also a huge level of commitment to doing everything we can to prioritize the safety of our users. So that was the starting point. And actually the project had two key objectives. The first was to make sure that everything we were doing was gold standard and to identify areas where we can improve.

But the other one was to enhance collective understanding. So as much as I'm excited about our own safety center and what we're doing online, I'm also really excited about all the ways in which that report is now informing other people's approaches across NGOs, educators, online platforms. I think that we have genuinely made a meaningful contribution to collective understanding.

How do you find out about new dangerous challenges that are trending among users?

There are different ways. Sometimes challenges are well established and migrate. So, for example, the "blackout challenge" is an example of something that was first reported on by the CDC in America in 1995. So when we heard that it was potentially on TikTok, it was quite easy for us to act. But then some challenges are more unpredictable. Unlike some forms of harmful content, it can be quite hard to imagine what is going to come next. And this content can take lots of different shapes and forms. So when we hear that something is potentially emerging on our platform or another platform, or we hear that there are media reports around something that is likely to then lead to searches on the platform, we will then activate our policy teams to review that and make a decision about where it sits on the harm spectrum. But certainly we're always working to identify ways in which we can do better in terms of early detection, and to look for more signals so we can catch these things earlier.

We know that algorithms tend to favor "negative" content because it is more viral. Could this mean that algorithms might have ended up pushing content relaying dangerous challenges because of their viral nature?

Well, two things on that: the first is that content around the most dangerous challenges is not allowed on TikTok. Even before we did this research, our community guidelines around what we do and do not allow for dangerous challenges were very well established. And they are, amongst our peers, some of the most conservative standards in terms of taking a real safety-first protective approach. So the first thing is that we would not allow challenges that can cause significant harm to appear on TikTok, and certainly not in the For You feed. I would slightly push back on the premise that viral content is more likely to be successful if it's negative in tone. Every platform is unique, but TikTok is a place where our community is coming to be entertained and to find joy and to maybe escape from the complexities of their real lives and just to laugh and have fun. I use it just to switch off for a few minutes after a busy day. So the kind of content that people are convening around on TikTok is typically really joyful and really creative and fun. But certainly [when it comes to] our algorithm, in addition to preventing content that violates our community guidelines, we are always thinking really carefully about diversifying people's interests.

We want to make sure that the videos that you see do speak to your interests, but also that they are exposing you to different ideas, different perspectives, and different opportunities. Our videos are pretty short, 15 seconds to a minute, maybe a bit more. So strategically, it's important for us to build diversity of perspective into our algorithm from the start.

But even if you don't allow this type of content on your platform, if you're not aware of these challenges, they can still reach users and go viral.

We are always humble about the possibility that we will miss things or that we will need to go further on our platform. But often, when we hear reports of virality, of things trending, and we investigate these phenomena on the platform, we don't see the ubiquitous trend that is being described in the media. And I think that one of the points that came out in Dr. Hilton's report last year, which is worth dwelling on, is the role that well-intentioned sharing, either peer to peer or in the media, plays in escalating awareness of these challenges. But certainly we have an incredibly sophisticated detection strategy in place. We have a team of fantastic moderators and experts on this issue, and we also have a dedicated user reporting button. Users have always been able to report dangerous challenges, but going forward, there is a dedicated reporting pathway for them, which, again, I think is going to be another fantastic aspect of our strategy and will enable us to detect these things as early as possible.

TikTok has come under fire for its approach to moderation. Do you think that TikTok is doing its best to fight against dangerous content on the platform?

Well, I can't speak for the other platforms. I can only speak for TikTok, but yes, as I said, I used to work for an NGO and before that I used to work for a content regulator. So for many years it has been my interest and my passion to think about how young people can be better protected online. And when I think of the work that we do at TikTok across our moderation strategies, but also the way that we implement age-appropriate design, like the decision to, for example, disable direct messages, our family pairing tools, and safety and information advice, like the dangerous challenges page we're launching, I do think that I see a real thirst for taking a leadership position in our industry, and it's a really exciting place to work because of that. I think one thing that I would love to leave you with is "stop, think, decide and act," which is an idea that I think illustrates the whole journey around our approach to safety. It's absolutely a company priority. This project was given a year's worth of dedicated resources to come to fruition. It's always data led: we hear from young people, and that has determined our approach.

It has been expert led as well. We've had a fantastic group of experts, both internally and externally, and it has led to meaningful change across all aspects of our strategy, including moderation. We've also made sure to share what we've learnt with others so that they can benefit. And the result of that is that we will be sharing our brilliant safety resource. And of course, celebrating the three brilliant creators who worked with us in France, Athenasol, Batzair and Daetienne. But for our users, what all that work ends up as is three really funny, super-cool videos from brilliant local creators. But actually behind it, the pedagogy and the expertise is really quite nuanced and sophisticated. So maybe that's a roundabout way of answering your question, but I think it goes to show that when everything is working together and we're all collaborating, the results can be both creative and effective, I hope, going forward.

© Agence France-Presse