
Big Tech's Safety Pretext To Limit Speech

The remedy for speech that is false is speech that is true.


The Cambridge Dictionary defines safety as a "state in which - or a place where - you are safe and not in danger or at risk."

As Big Tech's influence has grown, the industry has conveniently appointed itself the arbiter and chief promoter of safety. The aggrandizement is classic hype and overreach, because Big Tech does not physically endanger anyone. It doesn't make products, such as garden tools, that could accidentally harm people. It doesn't act to hurt people intentionally, as a terrorist does in setting off an explosive vest. Nor does it carry the regulatory responsibilities of government agencies such as the FAA, FDA, and OSHA, which exist to implement laws that improve safety.

Big Tech merely operates platforms that connect people. In that respect, the industry is no different from a city that runs a public park, street, or beach. Millions of conversations take place in the public square between people who don't know one another. City officials neither join those interactions nor monitor them, and people would vehemently protest if they did.

Yet Big Tech, though merely a third-party enabler of conversations, is now on an aggressive campaign to expand its role in policing safety and expression. Meta, formerly Facebook, has unleashed full-page online ads that open with a bold headline: "We are committed to protecting your voice and helping you connect and share safely." The effort is so vast that it resembles the operation of a state police force in a dictatorship. All enforcement action falls under vague "Community Standards" or "Terms of Service," which, as in a dictatorship, are not subject to debate or deliberation by the subjects over whom the platforms exert control.

Meta insists that it consults "experts" around the world to review and regularly update its community standards. We have a right to know who these experts are and how they deliberate. As on congressional committees, are the experts chosen to represent views from both sides of the aisle? What happens when they disagree? Who breaks a tie? How often is a standard revisited?

Meta relies on third-party "fact-checkers," billed as non-partisan and independent, to identify false news, rate its accuracy, and label misinformation. But the people involved have questionable motivations. In December 2020, a Sky News Australia investigation uncovered disturbing evidence of political bias and a lack of accountability at the top of the certification process that approves the fact-checkers Big Tech hires. Consider Margot Susca, an assistant professor in American University's journalism division, who assesses American news organizations for the International Fact-Checking Network (IFCN). The Sky News report found her to be "unashamedly politically biased."

Meta employs 40,000 people to screen content against its standards for acceptable speech - employees whose only job is to cleanse what the rest of us are saying. We erupted in anger at the NSA's post-9/11 warrantless wiretapping programs, which snooped on calls involving U.S. citizens suspected of terrorist links. Yet we let Meta's workers in dark rooms spy on everything we say.

And then there's the content that even the screeners don't see. In 2019, Instagram, part of Meta, rolled out a creepy feature powered by artificial intelligence that warns people, before they post, that a comment may be considered offensive. AI can entrench bias and discrimination in ways that are unethical and counter-productive, yet that has not stopped Meta, which proudly says, "We detect the majority of the content we remove before anyone reports it to us." Michael Sandel, a Harvard professor, wonders, "Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?"

All of this brings us to a fundamental point raised by Supreme Court Justice Louis Brandeis nearly a century ago. In his concurring opinion in Whitney v. California (1927), he observed: "If there be time to expose through discussion the falsehood and fallacies ... the remedy to be applied is more speech, not enforced silence." Invoking this simple and elegant approach, Justice Anthony Kennedy wrote in United States v. Alvarez (2012) that "the remedy for speech that is false is speech that is true," calling it "the ordinary course in a free society."

Rather than censor "false" speech through an elaborate partisan bureaucracy on the pretext of promoting safety, Meta could hire people to counter-post speech that is "true." And let us decide which is which. Problem solved.


TIPP Data

Golden/TIPP Poll Results: Most Americans disapprove of social media suspending users who express opposing views on COVID-19
Golden/TIPP Poll Results: Most Americans disapprove of social media suspending/canceling political content

📧
We welcome readers' letters via email. If your letter is published, you get to ask a question in the TIPP Poll for free.
Please email editor-tippinsights@technometrica.com


Please share with anyone who would benefit from the tippinsights newsletter, and direct them to the sign-up page at:

https://tippinsights.com/newsletter-sign-up/
