The European Commission is ramping up pressure on tech companies to more aggressively use automated filtering to scrub “illegal” content from the Internet, a move that is drawing criticism from some lawyers and free speech activists in Silicon Valley.
In a communication issued Sept. 28, titled “Tackling Illegal Content Online,” the commission said it “strongly encourages online platforms to use voluntary, proactive measures” to pull down illegal content and to pour more money into “automatic detection technologies.”
Though the document is not a binding regulation or legislative proposal, the Commission makes clear that it will monitor the tech industry's response to its call for action and may take further steps — “including possible legislative measures” — by May 2018.
“Lawyers should be emphatically paying attention,” says Andrew Bridges, who represents tech firms in copyright disputes at Fenwick & West. “I think that any company that provides any kind of platform these days needs to be absolutely on top of this stuff.”
Bridges and digital rights advocates argue that implementing the commission's proposal would be too costly for tech companies — especially smaller startups — and chill free expression without effectively fixing the problems the EU is targeting.
The push by the EU seems to be part of a larger trend of placing more responsibility on online platforms, and not only in Europe. The U.S. Senate has also proposed the Stop Enabling Sex Traffickers Act (SESTA), which would carve claims relating to sex trafficking out of Section 230, the provision that generally shields online intermediaries from liability over the content they host.
The focus of the EU communication is largely on hate speech and online material that incites terrorist violence. But it also explicitly references applying filtering technologies to target material that infringes intellectual property rights, like pirated movies and music.
European cities have been hit by a wave of terrorist violence in recent months, most recently in the UK and Spain. The commission, the EU's executive arm, released the document after the heads of EU member state governments adopted a statement in late June saying they expect the industry to develop “new technology and tools to improve the automatic detection and removal of content that incites terrorist acts.”
But Daphne Keller, a former senior lawyer at Google who now is the director of intermediary liability at Stanford's Center for Internet and Society, warns that the commission proposal places too much confidence in the ability of technology to know what is “illegal.”
“The communication buys in wholeheartedly to the idea that expression can and should be policed by algorithms,” Keller wrote in a blog post on October 5. “The Commission's faith in machines or algorithms as arbiters of fundamental rights is not shared by technical experts.”
Pointing to a March 2017 paper co-authored by experts from Princeton's Computer Science Department and advocacy group Engine about the limits of online filtering, Keller added: “In principle, filters are supposed to detect when one piece of content — an image or a song, for example — is a duplicate of another. In practice, they sometimes can't even do that.” See “The Limits of Filtering.”
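To make the duplicate-detection point concrete, here is a minimal sketch (my own illustration, not drawn from the paper; the byte strings are hypothetical stand-ins for video files) of why the crudest form of matching, comparing exact cryptographic hashes, fails as soon as a re-upload differs by a single byte:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: two files match only if every byte is identical."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for a banned clip and a slightly altered re-upload of it.
original = b"...frames of a banned clip..."
near_duplicate = original + b"\x00"  # changed by a single trailing byte

blocklist = {fingerprint(original)}

print(fingerprint(original) in blocklist)        # True: the exact copy is caught
print(fingerprint(near_duplicate) in blocklist)  # False: the near-duplicate slips through
```

Production systems instead use perceptual matching that tolerates small changes, but loosening the match in that way is exactly what lets in both missed copies and false positives.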
The Commission doesn't call out any companies by name, but it describes the online platforms that it has in mind as “search engines, social networks, micro-blogging sites, or video-sharing platforms,” a description that almost surely points to Google, Facebook, Twitter, and YouTube.
Representatives for Google and Facebook declined to comment directly on the Commission's communication and instead pointed to public posts and comments previously made by company officials about fighting terrorism. Twitter did not respond to a request for comment.
To some degree, the major tech companies, at least, already appear to be responding to the call to filter the material they host more actively.
Google General Counsel Kent Walker, in a speech to the UN on Sept. 20, underscored the large volumes of footage that are uploaded to YouTube every hour and described efforts to pull down extremist videos more quickly — saying 75% of the videos that had been removed in recent months “were found using technology before they received a single human flag.”
Jeremy Malcolm, an attorney and senior global policy analyst at the Electronic Frontier Foundation, says the problem with using a fully automatic filter is that determining whether online content is illegal often depends on context; the one general exception is child pornography, he notes.
Malcolm gives the example of scholars posting terrorist videos online so that other academics can analyze them; filters would have a difficult time distinguishing that use from propaganda. “We normally recommend that there should be a court order to take something down,” Malcolm says, “and it should definitely not be an automated process doing that.”
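A short sketch of Malcolm's context problem (again my own illustration, with hypothetical content): an automated filter typically receives only the bytes of an upload, not who posted it or why, so a researcher reposting a clip for analysis and a recruiter posting the same clip present identical input to the filter.

```python
import hashlib

# Hypothetical hash of a clip already flagged as extremist propaganda.
BANNED_HASHES = {hashlib.sha256(b"...frames of an extremist clip...").hexdigest()}

def automated_filter(upload: bytes) -> bool:
    """Content-only check: who uploaded the file, and for what purpose, never reaches the filter."""
    return hashlib.sha256(upload).hexdigest() in BANNED_HASHES

clip = b"...frames of an extremist clip..."
print(automated_filter(clip))  # True whether posted by a recruiter or by a scholar studying it
```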
Keller worries especially that it would be too difficult to restore legitimate content once it's been taken down, and cites data about how the counter-notice system has performed under the U.S. Digital Millennium Copyright Act (DMCA).
“A key takeaway is that while improper or questionable [takedown] notices are common — one older study found 31% questionable claims, another found 47% — reported rates of counter-notice were typically below 1%,” she wrote. “That's over 30 legally dubious notices for every one counter-notice.”
*****
Ben Hancock is a reporter for The Recorder, the San Francisco-based ALM affiliate publication of Internet Law & Strategy. He can be reached at [email protected]. On Twitter @benghancock.