The European Commission is ramping up pressure on tech companies to more aggressively use automated filtering to scrub “illegal” content from the Internet, a move that is drawing criticism from some lawyers and free speech activists in Silicon Valley.
In a communication issued Sept. 28, titled “Tackling Illegal Content Online,” the Commission said it “strongly encourages online platforms to use voluntary, proactive measures” to pull down illegal content and to pour more money into “automatic detection technologies.”
Though the document is not a binding regulation or legislative proposal, the Commission makes clear that it will monitor the tech industry's response to its call for action and may take further steps — “including possible legislative measures” — by May 2018.
“Lawyers should be emphatically paying attention,” says Andrew Bridges, who represents tech firms in copyright disputes at Fenwick & West. “I think that any company that provides any kind of platform these days needs to be absolutely on top of this stuff.”
Bridges and digital rights advocates argue that implementing the commission's proposal would be too costly for tech companies — especially smaller startups — and chill free expression without effectively fixing the problems the EU is targeting.
The push by the EU seems to be part of a larger trend of placing more responsibility on online platforms, and not only in Europe. The U.S. Senate has proposed the Stop Enabling Sex Traffickers Act (SESTA), which would create a carve-out for claims relating to sex trafficking from Section 230 of the Communications Decency Act, the provision that generally shields online intermediaries from liability over the content they host.
The focus of the EU communication is largely on hate speech and online material that incites terrorist violence. But it also explicitly references applying filtering technologies to target material that infringes intellectual property rights, like pirated movies and music.
European cities have been hit by a wave of terrorist violence in recent months, most recently in the UK and Spain. The release of the document by the Commission, the EU's executive arm, comes after the heads of EU member state governments in late June adopted a statement saying they expect the industry to develop “new technology and tools to improve the automatic detection and removal of content that incites terrorist acts.”
But Daphne Keller, a former senior lawyer at Google who is now the director of intermediary liability at Stanford's Center for Internet and Society, warns that the Commission's proposal places too much confidence in the ability of technology to know what is “illegal.”
“The communication buys in wholeheartedly to the idea that expression can and should be policed by algorithms,” Keller wrote in a blog post on October 5. “The Commission's faith in machines or algorithms as arbiters of fundamental rights is not shared by technical experts.”
Pointing to a March 2017 paper on the limits of online filtering, co-authored by experts from Princeton's Computer Science Department and the advocacy group Engine, Keller added: “In principle, filters are supposed to detect when one piece of content — an image or a song, for example — is a duplicate of another. In practice, they sometimes can't even do that.” See “The Limits of Filtering.”
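Keller's duplicate-detection point is straightforward to demonstrate. The Python sketch below is a minimal illustration, not an example from the paper; it assumes the Pillow imaging library is installed and that an original.jpg exists on disk. Re-encoding an image leaves it visually unchanged but alters its bytes, so a filter that matches exact cryptographic fingerprints misses the copy.

# Minimal sketch of why exact-hash duplicate detection is brittle.
# Assumes Pillow is installed and "original.jpg" exists; both are
# illustrative assumptions, not drawn from the paper.
import hashlib
from PIL import Image

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of the file's exact bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Re-save the same picture at a slightly different JPEG quality.
Image.open("original.jpg").save("reencoded.jpg", quality=90)

# The two files look identical to a human viewer...
print(fingerprint("original.jpg"))
print(fingerprint("reencoded.jpg"))
# ...but the hashes differ, so a blocklist keyed to exact hashes
# misses the copy entirely. Filters built on perceptual hashes
# tolerate re-encoding, but at the cost of false matches.

Systems that instead use fuzzier perceptual matching trade that brittleness for a risk of false positives, which is the failure mode the paper describes.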
The Commission doesn't call out any companies by name, but it describes the online platforms it has in mind as “search engines, social networks, micro-blogging sites, or video-sharing platforms.” That description almost surely points to Google, Facebook, Twitter, and YouTube.
Representatives for Google and Facebook declined to comment directly on the Commission's communication and instead pointed to public posts and comments previously made by company officials about fighting terrorism. Twitter did not respond to a request for comment.
To some degree, the major tech companies, at least, already appear to be responding to the call to be more active about filtering the material they host.
Google General Counsel Kent Walker, in a speech to the UN on Sept. 20, underscored the large volumes of footage that are uploaded to YouTube every hour and described efforts to pull down extremist videos more quickly — saying 75% of the videos that had been removed in recent months “were found using technology before they received a single human flag.”
Jeremy Malcolm, an attorney and senior global policy analyst at the Electronic Frontier Foundation, says the problem with using a fully automatic filter is that determining whether online content is illegal often depends on context; the one general exception is child pornography, he notes.
Malcolm gives the example of scholars posting terrorist videos online so that other academics can analyze them. Filters would have a difficult time distinguishing that kind of scholarship from propaganda. “We normally recommend that there should be a court order to take something down,” Malcolm says, “and it should definitely not be an automated process doing that.”
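A toy Python sketch (my illustration, not Malcolm's, and not any platform's actual system) makes the context problem concrete: a keyword filter sees only the matching term, so a recruitment clip and a scholarly analysis of the same footage trigger identical flags.

# A toy, context-blind keyword filter; hypothetical, not any
# platform's actual system.
BLOCKED_TERMS = {"execution footage"}

def flagged(description: str) -> bool:
    """Flag an upload if its description contains a blocked term."""
    text = description.lower()
    return any(term in text for term in BLOCKED_TERMS)

uploads = [
    "Recruitment clip with execution footage",               # propaganda
    "Seminar: execution footage as evidence of war crimes",  # scholarship
]
for description in uploads:
    # Prints True for both: the filter cannot see intent or context.
    print(flagged(description), "-", description)

Both uploads trip the same rule, which is why Malcolm argues that takedown decisions belong with a court rather than an automated process.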
Keller worries especially that it would be too difficult to restore legitimate content once it's been taken down, and cites data about how the counter-notice system has performed under the U.S. Digital Millennium Copyright Act (DMCA).
“A key takeaway is that while improper or questionable [takedown] notices are common — one older study found 31% questionable claims, another found 47% — reported rates of counter-notice were typically below 1%,” she wrote. “That's over 30 legally dubious notices for every one counter-notice.”
*****
Ben Hancock is a reporter for The Recorder, the San Francisco-based ALM affiliate publication of Internet Law & Strategy. He can be reached at [email protected]. On Twitter @benghancock.