TikTok says it will strengthen policies in effort to prevent spread of hoaxes, dangerous challenges

I don't think we've had a viral threat like this before, this widespread, especially on the back of a major school shooting that really gripped the nation and made headlines for a long period of time. So it's something that young people are thinking about. It's something that we, as parents, are all thinking about, all anxious about, and then this happens. It's hard to know how that's going to play out. Perpetrators look online for other people who think like them, who are thinking that violence is an answer, who are angry like them, who have the same grievance and who blame the same people, and they find others on social media. They find people in chat rooms and in all these dark corners of the internet. You used to not be able to do that. You'd have to actually find a person sitting in class next to you. Now you have access to all these people. The addition of that TikTok threat that we saw was so incredibly concerning because it's anonymous, you can't trace it, so schools don't know. So I think a lot of districts and a lot of schools chose to overreact rather than underreact because of what they just saw play out at Oxford. If you notice anything that feels wrong to you or that makes you feel uncomfortable, or if anyone close to you is acting differently in a way that concerns you, that's something to bring up to a trusted adult. And it doesn't need to be you reporting them because you think they're going to be a school shooter, right? You're reporting them because you're worried about them and you think that they need some resources and some help.

TikTok announced Tuesday that it will strengthen efforts to regulate dangerous content, including harmful hoaxes and content that promotes eating disorders and hateful ideologies. The platform plans to use creators to spread awareness of negative hoaxes, broaden the scope of banned eating disorder content and add clarity on prohibited speech and behaviors.

The move comes after a viral hoax originating on the platform caused very real fear last year. In December, a TikTok trend warned of forthcoming real-world violence in schools. While the threats were vague, they resulted in school shutdowns across the United States.


Hoaxes about public figures' deaths have gone viral on the platform, as well as false rumors that there were men planning a "National Rape Day." Representatives from TikTok, Snap and YouTube were all questioned at a congressional hearing on online safety for children last October, leading to a flurry of safety updates.

"Our policies are designed to foster an experience that prioritizes safety, inclusion, and authenticity," said Corman Keenan, head of trust and safety at TikTok, in a press release announcing the new policy. "They take into account emerging trends or threats observed across the internet and on our platform."

As part of the updates, TikTok announced a new "dangerous acts and challenges" policy category after previously lumping such content in with suicide and self-harm. The platform says it will also ask creators to make videos urging their followers to follow four steps when viewing content: stop, think, decide and act. The campaign is meant to encourage young users to pause on challenge videos, consider whether the content is harmful, make a decision and then report anything deemed dangerous. The platform is also asking creators to spread the #SaferTogether hashtag to promote awareness of risky content.

Beyond its reclassification, the company did not provide specific information on how its approach to hoax moderation will change given the new guidelines.

Another update includes a broadened approach to moderation of disordered eating content. In the statement, Keenan also reiterated the standing ban on videos featuring deadnaming, misgendering, misogyny and conversion therapy.

"Our moderation teams work very diligently to swiftly and expeditiously remove that violative content and redirect hashtags when someone might be searching for it in our search page. For example, if you search for something that we've determined is a dangerous challenge, you won't find content around that," said TikTok head of US safety Eric Han in a conversation on internet safety hosted by Axios on Tuesday. Han stressed that users need to be aware of the risk of viral trends and online hoaxes, urging TikTokers to approach videos seen online with a questioning eye.

These efforts are part of TikTok's larger approach to content moderation, which relies on a combination of technology and human reviewers to screen videos and enforce community guidelines. The news of the updated safety guidelines comes as the platform says it will invest in new ways to rank the age-appropriateness of its content.