LILLE, France—France and Germany demanded that U.S. tech companies help them police terrorism on the Internet, escalating European efforts to wrangle more law-enforcement help from Silicon Valley.
Top law-enforcement officials from the two countries said on Tuesday they expect U.S. Internet and social-networking companies like Twitter Inc., Facebook Inc. and Google Inc. to pre-emptively remove terror content from their services—or face new laws aimed at forcing them to do so.
They join the U.K., which has for months been pressing Internet firms to take a more proactive role in removing extremist content, including material that isn’t necessarily illegal, such as videos of sermons by radical preachers or posts by extremists encouraging Westerners to join the fight in Syria.
“Just because the vast majority of this content is found on American services doesn’t reduce their impact on French people,” French Interior Minister Bernard Cazeneuve said at a cybersecurity conference. “We won’t succeed in our fight against terrorism unless Internet actors start taking responsibility.”
German Interior Minister Thomas de Maizière echoed that call at the same conference. “The less people take responsibility, the more legislators will be forced to take the initiative,” he said.
The European demands for pre-emptive filtering escalate tensions between U.S. tech firms and governments around the world. At stake is where these global firms draw the line of acceptable discussion, and how far they must go to enforce local laws limiting online speech.
In some ways, the request marks a reversal for European politicians who previously criticized technology firms for being too close to police forces and spy agencies—especially in the U.S.—following leaks from National Security Agency contractor Edward Snowden. Since the attacks in Paris, those politics appear to have shifted, U.S. technology executives point out.
It isn’t clear how far European governments will go to push the firms. One person at a U.S. tech firm suggested the U.K., France and Germany are pushing for faster responses “just to appear tough on terrorism.”
France says the menace of online calls for terrorism—both to intimidate foes and recruit adherents—has grown significantly. Mr. Cazeneuve last year pushed for a new law to allow the French government to block websites that don’t remove certain content that expresses sympathy with terrorism.
French prosecutors on Tuesday recommended preliminary charges against four men they suspect of assisting Amedy Coulibaly, one of the gunmen who rampaged through Paris earlier this month.
In the two weeks since the terror attacks killed 17 people in France, police working in round-the-clock shifts at a center outside Paris have flagged and requested the deletion of more than 25,000 pieces of content that expressed support for terrorist groups. “It’s a major issue,” Mr. Cazeneuve said.
Hacker groups linked with Islamist organizations in Syria and elsewhere have claimed nearly 1,300 cyberattacks in recent weeks, aimed at knocking French websites offline or defacing them with messages supporting terror groups or the attacks in France, French officials say.
On Tuesday, French newspaper Le Monde said it had been the subject of an unsuccessful attempt by the militant group Islamic State to take control of its publishing tools.
“This is something we’ve never seen before,” Vice Adm. Arnaud Coustillière, head of cyberdefense for the French army, said of the Le Monde attack, which he described as sophisticated.
U.S. technology executives don’t want to discuss the issue of pre-emptive filtering publicly because of the regulatory fights it could prompt in certain countries. Privately, they say their main objection is that such a system would be unworkable, especially when trying to control for sarcasm and hyperbole.
They also say they fear the legal ramifications if the likes of Twitter and Facebook were suddenly to become digital police forces. “They then become liable for everything on their platform,” one technology executive said.
Tech firms also say they already cooperate closely with authorities outside the U.S., particularly in emergency situations related to terrorism, moving quickly to remove illegal content when they get valid requests. But while some acknowledge that certain laws could be refined, they generally argue against a broad legal overhaul.
Following the attacks at the office of French satirical magazine Charlie Hebdo on Jan. 7, Microsoft Corp. turned around a French police request for email content from two customer accounts in 45 minutes—handing over the emails to the U.S. Federal Bureau of Investigation at France’s request. “There are times, especially in emergency situations, when existing international legal processes work well,” said Brad Smith, Microsoft’s general counsel, in a speech in Brussels.
David Marcus, vice president for messaging products at Facebook, said at a conference in Munich this week that the company is constantly removing content that incites terrorism or recruits people to join terrorist organizations—including from Facebook’s own messaging app.
“Anything remotely connected to that is generally gone from the platform the minute we see it,” Mr. Marcus said.
But cooperation from U.S. tech companies goes only so far. In general, companies will turn over only limited personal information about users to a restricted set of U.S.-allied countries; more detailed requests are directed to the U.S. government. Companies are also reluctant to remove content that doesn’t violate their own terms or U.S. laws, law-enforcement and tech officials said.
“If there are requests from law enforcement we make sure they are real requests; if not, we fight back,” Mr. Marcus said. An executive at another U.S. technology firm said a request from France might more likely be met than one from, say, Saudi Arabia.
Pre-emptive filtering is a particularly difficult question for U.S. firms, which have long resisted calls to screen illegal content, such as copyright violations, rather than take it down piece by piece. But in certain areas, such as child pornography or links to viruses and other malware, they already do sometimes pre-emptively screen out content.
“On hate speech it’s been difficult to find a common ground,” said Eric Freyssinet, head of the Digital Crime Center of France’s Gendarmerie Nationale. “We ask these companies: Is this the kind of content you want to see on your platform?”
But demanding a policing role for companies poses its own set of problems.
“One of the obvious concerns is that if we effectively invite or expect technological firms to do the work of monitoring rather than doing it ourselves directly, they are working to fundamentally a different imperative—a commercial imperative—which is not necessarily always the same as those that we have in the police community, for example,” Rob Wainwright, director of Europol, the European Union’s policing agency, told U.K. lawmakers last week.
The Franco-German initiative comes amid a broader effort to pressure U.S. tech firms for more help in obtaining intelligence on alleged terrorists. A proposed new surveillance law in France would give the government more leeway to demand data on targets from U.S. firms. British Prime Minister David Cameron has also lobbied for stronger laws, and won U.S. President Barack Obama’s support in pressuring tech firms to open up encryption to law enforcement.
The French push for new rules has unnerved civil liberties advocates and tech firms, particularly after the attack on Charlie Hebdo became a rallying point to support freedom of expression. They also worry that complying with the orders could set a bad precedent in countries like Turkey and Russia, where tech firms have clashed with authorities over orders to remove material.
“Recent legislative additions—some not yet in effect—give France one of the biggest legal arsenals in the world,” said ASIC, an association of tech firms that operate in France, including Facebook and Google. “Any new law or measure should respect all freedoms, both public and personal.”
Some French magistrates involved in anti-terrorism investigations also say rushing to close down websites with terrorism content could be counterproductive because tracking down people who connect to those sites can help authorities home in on suspects.
French authorities respond that free speech shouldn’t extend to inciting violence or denigrating classes of people, which Prime Minister Manuel Valls said last week were crimes.
Mr. Cazeneuve said on Tuesday that his efforts aren’t intended to restrict online freedoms, adding that no one should endanger France’s “irrepressible love of liberty.”