U.K. officials face thorny task of defining ‘legal but harmful’ online speech
Officials in the United Kingdom last week formally introduced an aggressive plan to boost digital safety and crack down on online harms — a landmark bill that could usher in some of the tightest regulations in the world for how tech companies police content.
But for now, policymakers are deferring one of their toughest challenges: defining the “legal but harmful” speech companies will be required to tackle.
First unveiled as a draft proposal last May, the revamped Online Safety Bill outlines a slew of obligations tech companies would have to follow. That includes quickly reporting illegal activity on their services, such as child abuse material, and creating and consistently enforcing rules against “content harmful to adults that falls below the threshold” for a criminal offense, according to the office of U.K. Digital Secretary Nadine Dorries.
Dorries’s office said the definition of “legal but harmful content” could include “exposure to self-harm, harassment and eating disorders,” but noted that the exact categories will be outlined in a “secondary,” yet-to-be-unveiled measure. Both will require approval from Parliament.
This second bill epitomizes a contentious and fraught debate: whether governments should have any role in cracking down on things like disinformation, hate speech or harassment, or whether that should be left entirely to the private sector.
It’s a discussion that’s been gaining steam globally amid concerns that major social media platforms like Facebook, Instagram, TikTok and YouTube have not done enough to limit users’ exposure to harmful content, and that they at times even amplify such material.
But attempts to carve out a role for global regulators have met fierce opposition from free speech advocates, who warn that it could lead to mass censorship. Barbora Bukovská, senior director of law and policy at the London-based digital rights group Article 19, blasted U.K. policymakers for not dropping their plans to curb some “legal but harmful content.”
“Companies, faced with huge fines or even criminal liability for non-compliance, will be incentivised to act in a censorious manner, err on the side of caution and be heavy handed when it comes to removing content,” Bukovská said in a statement.
In an apparent attempt to address those concerns, U.K. policymakers dialed back the proposal’s original language, which would have required companies to vet whether even more content could meet the definition of “legal but harmful.”
Dorries’s office touted the change as a “further boost to freedom of expression online” and said that it “removes any incentives or pressure for platforms to over-remove legal content or controversial comments.” But it’s unlikely to assuage free speech advocates, who argue the entire premise behind that language is misguided at best.
These same questions have long puzzled lawmakers in the United States, which has a strong tradition of free expression and adherence to the First Amendment. That makes it tougher to legislate on the issue in the United States than in perhaps any other Western country.
To get around concerns about U.S. regulators punishing companies for how they handle “legal but harmful” speech, Democratic lawmakers have largely sought to turn to the courts instead. To that end, they have introduced bills that would open tech companies up to more lawsuits over the content they host or amplify by rolling back their liability protections under Section 230.
A number of proposals have taken direct aim at content that leads to real-world harm. One bill backed by House Democratic leaders would open companies up to liability if they recommend content to users that contributes to “physical or severe emotional injury.” Another led by Sen. Mark Warner (D-Va.) seeks to open companies up to lawsuits in cases dealing with cyberstalking, harassment or civil rights abuses on their services.
But those proposals have run into some of the same concerns from free speech advocates, who argue they would deal a major blow to free expression online.
Unlike in the United Kingdom, where some top conservatives have rallied around efforts to curtail “legal but harmful” speech, Republicans in the United States have strongly resisted such efforts.
That makes it unlikely that those provisions in the United Kingdom will serve as a major template for bipartisan legislation in the United States, unless the scope significantly narrows.
Still, the United Kingdom’s plan would mark one of the biggest global experiments in content regulation to date — and both policymakers and industry leaders will be watching closely to see how officials overseas grapple with these thorny questions.
Ukraine has received thousands of Starlink antennas
Ukrainian officials and some experts have heaped praise on Tesla and SpaceX chief executive Elon Musk for SpaceX unit Starlink’s shipments of satellite Internet antennas, Rachel Lerman and Cat Zakrzewski report. The antennas have proven “very effective,” and Ukraine continues to get more every other day, Minister of Digital Transformation Mykhailo Fedorov said.
Musk told The Post to give his regards “to your puppet master Besos😘😘” when asked about Starlink and his past efforts. (Amazon founder Jeff Bezos owns The Post.) He did not respond to a follow-up request specifically on his work with Starlink in Ukraine.
Geofence warrants surge in popularity but can violate rights, judges say
Two judges recently raised concerns about the warrants, in which prosecutors ask companies like Google for a list of devices that were active in a geographic area, Justin Jouvenal and Rachel Weiner report.
“The rulings are likely to reverberate across Virginia and the nation as a debate over the legality of geofence warrants intensifies with their proliferation,” Justin and Rachel write. “A handful of other federal magistrate judges have turned down applications for geofence warrants, but in the vast majority of cases, they have been approved with few questions until now.”
U.S. District Judge M. Hannah Lauck called on lawmakers to take up geofence warrants. Her ruling could be far-reaching because she has spent two years examining the warrants. Lawmakers, she wrote, should address her “deep concern … that current Fourth Amendment doctrine may be materially lagging behind technological innovations.”
Judge dismisses D.C. attorney general’s antitrust suit against Amazon
D.C. Attorney General Karl A. Racine’s (D) office disagrees with Superior Court Judge Hiram E. Puig-Lugo’s ruling and is considering its legal options, a spokeswoman told the New York Times’s David McCabe. Racine filed the case in May, arguing that Amazon prevented third-party sellers from offering their products at lower prices on other platforms.
Court records didn’t say why Puig-Lugo granted Amazon’s motion to dismiss the case. The company didn’t respond to the Times’s request for comment.
Brazil’s Supreme Court briefly banned Telegram before the app responded and the court lifted the ban, the New York Times’s Jack Nicas and André Spigariol report.
Telegram chief executive Pavel Durov’s explanation for not responding to the court sooner was a head-scratcher: the company didn’t check its general mailbox for the emails, he said, assuming that the court would instead send notices to a special mailbox.