January 11, 2018
NEW YORK—Every day somebody howls for the shutdown of a website, the squelching of a Twitter account, the nuking of a Facebook page, the removal of a video or a screed or a manifesto, all in the name of…uh…well, it depends, but mostly in the name of saving the world.
We’ve become a nation of scolds and censors and digital night-riders, trying to get people removed from the internet.
Many of these modern-day bowdlerizers are the same people crowing about “net neutrality,” whatever that is (and believe me, I’ve tried to figure it out), and demanding that Disney give up the copyright to Mickey Mouse because, after all, everything should be free and available and easy to find.
Well, everything except hate speech. Whatever that is.
And everything that jihadists post. Assuming we can agree on who’s a jihadist and who’s not.
And everything that white supremacists do.
I’m gonna digress here for a moment and suggest that we retire the term “supremacist” altogether, because it’s being used now to describe not just the Ku Klux Klan, but anyone who’s simply proud of his heritage. Do we really want the websites of the ladies who put on the annual Hungarian Festival in New Brunswick, New Jersey, to be shut down because their dancing in dirndls and serving goulash and strudel to the public are flagged by some “content moderator” who decides they’re promoting white supremacy? Because that’s where it ends up if you keep forcing platform owners to “police” content.
We shouldn’t be policing any content.
This is the deal we made in 1996, so why are we changing it now?
Let’s review.
The first lawsuit that tried to hold an internet platform responsible for user content was Cubby v. CompuServe in 1991. A guy named Robert Blanchard, owner of Cubby Inc., claimed that his online newsletter had been libeled in “Rumorville,” an electronic newsletter carried in one of CompuServe’s forums, and that CompuServe, the service provider, should be held accountable, just as a newspaper or book publisher is held accountable for what it publishes.
CompuServe’s defense was interesting. They said, “We don’t even read those electronic forums. We have 150 of them and people are free to post anything they want.”
Strangely, this argument carried the day. The New York judge ruled that CompuServe was a mere distributor of information, not a publisher, and bore no responsibility for content it had no hand in creating—case dismissed.
But then another court ruled exactly the opposite. In Stratton Oakmont v. Prodigy, Danny Porush sued because his Long Island investment bank had been accused of criminal acts and fraud by an anonymous poster on Prodigy’s “Money Talk” bulletin board. Prodigy argued that they weren’t responsible, but the New York Supreme Court ruled in 1995 that Prodigy acted as a publisher, not a distributor, for three reasons:
Prodigy had “content guidelines,” and the anonymous post violated those guidelines.
Prodigy had “board leaders” who were supposed to enforce the rules.
Prodigy used software programmed to get rid of offensive language—evidence that they were editors and publishers, not simply distributors.
So now we had a legal situation where, if you police content, you’re a publisher, subject to the same rules that apply to Doubleday or Esquire or The Washington Post.
If you pay no attention to what’s on your platform, then you’re cool. You’re protected.
Obviously this couldn’t last, so in 1996 Congress set out the rules in a single sentence of the Communications Decency Act, Section 230:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Chris Cox and Ron Wyden were the congressmen behind this one, but it might as well have been Sonny Bono, because the message to Silicon Valley was “Hey, I got you, babe”—and it solved their problem. I think it was the wrong decision. I think internet platforms should have been held to the same standard as all owners of printing presses from the beginning of time.
But that’s not the system we have. That statute left the old rules in place for print: The New York Times still has to be careful, because it can be sued for libel even when it’s just printing a letter to the editor. But Facebook, Google, Twitter, YouTube, or anybody else running what came to be known as a “platform” for user content doesn’t have to worry about it.
Until now.
Suddenly everyone is hammering Mark Zuckerberg for not policing his pages. They want to know why Facebook hosted all those fake Russian pages, and why there are ads for Pepsi next to graphic video of beheadings. Google is getting skewered for YouTube videos classified as hate speech and incitements to violence. Twitter gets daily demands to close the account of the president for his personal insults and—the ultimate incitement to violence—threats to wipe North Korea off the face of the earth.
What’s odd to me is the reaction of the CEOs. They don’t have to do anything. They’re protected. But they make constant promises to do a better job.
A better job of what?
Why should they monitor whether a page or a video comes from Russia or not? All these platforms are international. They welcome all languages, all nationalities, all countries. When the internet was opened to the public in the early 1990s, that was supposedly the purpose—to make it an international means of communication, not the tool of a single nation. If a militant ayatollah starts using Twitter, is the Twitter CEO supposed to treat him any differently from Trump? The last thing we need is for Facebook, YouTube, and Twitter to start acting like pawns of the U.S. government.