Why social media is being blamed for fueling riots in France


A demonstrator runs on the third night of protests sparked by the fatal police shooting of a 17-year-old driver in the Paris suburb of Nanterre, France, Friday, June 30, 2023. Social media companies are once again under the spotlight, this time in France as the country's president blames TikTok, Snapchat and other platforms for helping fuel widespread riots over the fatal police shooting of a 17-year-old driver. (AP Photo/Aurelien Morissard)

Social media companies are once again under scrutiny, this time in France as the country’s president blames TikTok, Snapchat and other platforms for helping fuel widespread riots over the fatal police shooting of a 17-year-old driver.

On Friday, French President Emmanuel Macron accused social media of playing a “considerable role” in encouraging copycat acts of violence as the country tries to tamp down protests that surfaced long-simmering tensions between police and young people in the country.

French Interior Minister Gerald Darmanin said police made 917 arrests on Thursday alone. More than 300 police officers have also been injured attempting to quell the rioting over the death of the teenager, who is of North African descent and has only been identified by his first name, Nahel.

Macron, who also blamed video games for the rioting, said the French government would work with social media sites to take down “the most sensitive content” and identify users who “call for disorder or exacerbate the violence.”

WHY IS THE FRENCH GOVERNMENT CONCERNED?

A French official, speaking anonymously in line with the presidency’s customary practices, cited the example of the name and address of the police officer who shot Nahel being published on social media. A prison officer’s professional ID card was also posted online, the official said, suggesting such leaks could put a person’s life and family at risk.

During his speech on Friday, Macron did not specify what type of content he viewed as “sensitive,” but he said he expected “a spirit of responsibility” from the social media platforms.

Talks between the government and social media platforms, including Snapchat and Twitter, have begun with the aim of speeding up the removal of content inciting violence, the official said. The French government is also pushing to identify people who issue calls for violence, but that effort is still at the “discussion” stage.

Darmanin said that in a meeting with social networks, he’d delivered a warning that they can’t allow themselves to be used as channels for calls to violence.

“They were very cooperative,” he said. “We’ll see tonight if they really are.”

Darmanin said on Friday that French authorities will provide social media companies with “as much information as possible” and expect, in return, the identities of people who incite violence, adding that authorities will “pursue every person who uses these social networks to commit violent acts.”

He also said that the country will take “all necessary measures if we become aware that social networks, whoever they are, don’t respect the law.”

WHAT DOES FRENCH LAW SAY?

France has a law against cyber harassment. Online threats of crimes, like rape and murder, as well as online insults can be prosecuted.

In practice, however, such prosecutions are rare.

In 2020, the country’s parliament approved a bill that would compel platforms and search engines to remove prohibited content within 24 hours. A year later, a French court convicted 11 of 13 people charged with harassing and threatening a teenager who had harshly criticized Islam in an online post. But the only people charged were those who could be tracked down.

WHAT ARE SOCIAL MEDIA SITES SAYING?

Rachel Racusen, a spokesperson for Snapchat, one of the social media platforms Macron blamed for contributing to the upheaval, said the company has increased its moderation since Tuesday to detect and act on content related to the riots in France.

“Violence has devastating consequences and we have zero tolerance for content that promotes or incites hatred or violent behavior on any part of Snapchat,” Racusen said. “We proactively moderate this type of content and when we find it, we remove it and take appropriate action. We do allow content that is factually reporting on the situation.”

But many of the others are keeping mum. TikTok, as well as Meta, which owns Facebook and Instagram, did not immediately respond to requests for comment on Friday. Twitter answered only with an automated reply of a poop emoji, as it has done for months under Elon Musk’s tenure.

HOW DO SOCIAL MEDIA PLATFORMS TYPICALLY RESPOND?

Social media platforms like TikTok, Snapchat and Twitter often act against users calling for violence because such content can violate their policies.

But they also remove material posted on their platforms in order to comply with local laws and government requests, some of which can be controversial. A recent example was Twitter's decision in May to censor speech at the behest of Turkey's government in the lead-up to the country's presidential elections.

Snapchat says on its website that it cooperates with law enforcement and government agencies to fulfill “valid requests” for information that can help during investigations.

The company receives many requests year-round. Its latest transparency report, covering the second half of 2022, showed it received the most requests from the U.S. government, followed by the United Kingdom, Canada and Germany. Officials in France submitted 100 emergency requests for user information, including account identifiers such as email addresses and phone numbers. The company said it produced “some data” in response to 54% of those requests.

During the same period, TikTok's transparency report showed it received far fewer requests — under 20 — from the French government. It removed or restricted content or accounts in response to 86% of those requests.

Hany Farid, a digital forensics expert at the University of California, Berkeley, who stepped down in January from TikTok’s U.S. content advisory council, said that if a government asks for a specific piece of content to be taken down because it violates local law, most platforms will comply.

But he said the feasibility of requests also depends on the platform, as well as the breadth and rationale for the request. If a government “asks for a broad takedown of tens of thousands of pieces of content, then this may be met with more resistance,” Farid said.

Emma Llansó, director of the Center for Democracy & Technology’s Free Expression Project, says that although it’s appropriate for online services to remove speech that legitimately incites violence, they should tread carefully, especially on requests that can be sweeping and overly broad.

During passionate political debate and public outcry, Llansó said people might use very heated language or “use allusions to violence” without having any intent to actually incite or commit violent acts.

“What the young people in France are doing right now is protesting against state violence, which is a crucial kind of political activity,” Llansó said. “And so, how social media companies respond in this moment is really influential over people being able to find their political voice. It’s an incredibly difficult line to walk.”

___

Nicolas Vaux-Montagny in Lyon, France; Sylvie Corbet and John Leicester in Paris; and Barbara Ortutay in San Francisco contributed to this report.
