Justice Department Outlines Broad Overhaul of Social Media Legal Protections
The Justice Department on Wednesday outlined a broad overhaul of legal protections for online platforms such as Alphabet Inc.’s Google and Facebook Inc., proposing to strip those protections when the companies deliberately promote illegal speech on their websites.
The proposals, which could upend the companies’ business models, would also limit the platforms’ discretion over removing political posts and strip liability protection from encrypted services such as Facebook’s WhatsApp.
The recommendations for legislation follow a feud between President Donald Trump and Twitter Inc., which last month slapped fact-checks on some of his tweets, prompting him to issue an executive order aimed at narrowing the liability shield enjoyed by social-media companies. Trump and his supporters contend they’re treated unfairly when their assertions are challenged or blocked by the internet platforms.
The companies enjoy immunity from lawsuits over the content that their users post under Section 230 of the Communications Decency Act of 1996, a key measure that allowed online companies to flourish in the early days of the internet. Now the provision has become a target of lawmakers from both parties who object to its breadth and describe it as a giveaway to technology companies.
The proposal would limit the shield when platforms “purposefully promote, solicit, or facilitate the posting of material that the platform knew or had reason to believe would violate federal criminal law,” according to a Justice Department statement. That would include cases in which a platform receives notice from users or other third parties that content could be illegal.
The proposed measures would also let victims file lawsuits in cases involving online child exploitation, terrorism and stalking. They also call for removing immunity entirely if companies can’t identify illegal content and assist in investigating it. Tech companies and civil liberties advocates have said that would hurt services that use end-to-end encryption because finding and tracking such content would be impossible.
While expanding the platforms’ responsibility for content, the proposal would also remove their ability to take down content deemed “objectionable,” a power some conservatives say the companies use to silence conservative voices.
Industry’s Response
Tech company allies slammed reports of the proposal on Wednesday. Jon Berroya, interim president of the industry’s Internet Association trade group, said in a statement that the Justice Department’s proposal “will make it harder, not easier, for online platforms to make their platforms safe.”
“The threat of litigation for every content moderation decision would hamper IA member companies’ ability to set and enforce community guidelines,” said Berroya, whose group counts Twitter, Facebook and Google as members.
In addition to offering liability protection for the content that companies leave up on their sites, Section 230 also allows the companies to remove content or limit its visibility without facing civil liability so long as they act “in good faith.”
Tech companies maintain that the shield protects free speech online by encouraging them to leave up controversial content, while also allowing them to take down the most objectionable posts — in essence permitting platforms to let content flourish unimpeded or to police it carefully, as they see fit.
The companies have argued against almost all changes to the law, saying they would upset this balance by threatening free speech and innovation on the one hand or limiting their ability to take down objectionable content on the other. They say liability should attach to speakers, not electronic conduits, and that their core business models would be at risk if they were forced to face what could amount to billions of lawsuits.
“The Trump administration has said we have censored too much content and Democrats and civil rights groups are saying that we aren’t taking down enough,” Facebook said in a statement. “Section 230 allows us to focus on what matters most: fighting harmful content while protecting political speech.”
Worst Excesses
Lawmakers from both parties and critics of the law increasingly say it excuses the tech platforms’ worst excesses, with liberals generally arguing for more moderation of election misinformation and racist content, and conservatives hoping to change the law so that companies leave up more right-wing voices. The companies deny their actions are biased.
Both sides have also slammed the tech companies for what they say is their failure to police drug sales, online child sexual abuse and other ills, saying that it’s cheaper for them to ignore the problems. They have also criticized language allowing the shield in U.S. trade agreements.
While Section 230 has no bearing on enforcing criminal law against the platforms themselves, it’s largely silent on how the companies should act with regard to users who are breaking the law. The recommendations to limit Section 230’s protections when companies purposefully facilitate or solicit third-party content that violates federal law, for instance, would clarify that relationship in a way that expands the platforms’ legal responsibility and exposes them to more lawsuits.
Courts have generally had to find that platforms contributed materially to illegal content before treating them as responsible for it — such as a website that edited posts to introduce or amplify defamation.
Websites that were set up to attract illegal activity have benefited from the shield, although in 2018, a broad bipartisan majority of lawmakers passed a law removing the protections for companies that knowingly facilitate sex trafficking.
The Justice Department’s proposal also seeks to ensure Section 230 wouldn’t impair federal civil enforcement, including much of the antitrust and consumer protection law overseen by the U.S. Federal Trade Commission, although it doesn’t cite examples where the defense has been used successfully.