Unmasking Meta’s Misleading Fact Checks: Section 230 Publisher Liability and Online Freedom
When I posted content about Meta’s liability under Section 230, strange things started happening to my Instagram account. All will be revealed below. Meta thinks it can avoid civil liability even if what it says about your content is false, and even if what it says is designed to directly compete with your content financially.
So far, Meta has seemingly steamrolled many judges, with help from plaintiffs’ lawyers who CLEARLY don’t understand internet platforms, including social media and its original functions. Originally, Section 230 of the Communications Decency Act was designed to protect companies like Meta if they were to restrict access to “harmful content,” aka pornographic content, death threats with intent to kill, etc.
But with help from instrumentality influence in the FBI and other agencies, these internet services now rate, review, and restrict third-party content, and even augment it. They do so using an ABSURD and poor interpretation of Section 230 to escape and evade legal liability for fraud, defamation, and other civil and quasi-criminal acts.
Tech companies rely on revolving door US government connections and law clerks to steer judges, many of whom admittedly are not internet law experts. As discussed here, it’s like the wild West for billionaire monopolists, who appear to have de facto control of both political parties and many US regulatory agencies. Most of the cases brought have been dismissed on technicalities, making most consumer protection lawyers shy away, always seeking the lower-hanging fruit.
No one wants to face a federal judge when the other side has billions in defense funds and the ability to destroy the same judge online with an army of bots and fact-checkers. Because of this, some states, including Florida and Texas with new laws, are trying to enforce the original intent of Section 230 at the state level. In other words, if Meta thinks it can choose to create, alter, or mislabel content, as opposed to providing users a way to remove or block “smut” (like X does), these state courts won’t give it the same warm reception that Facebook has allegedly been getting in the Northern District of California.
Such content created by others is protected as free speech from the government (you can’t sue the platform for defamation for what another person said or did online). But the now drunk-with-power, arrogant Meta thinks it can censor anything it wants and not be held accountable. A law that was passed in 1996 to protect users from smut is now used as a bludgeon to batter users with false, misleading, and often anti-competitive content.
Now, a person like former President Trump, your family members, or a parent complaining about school board censorship can be readily destroyed by Meta’s equivalent of the “thought police.” We know this was never the intent of Section 230, not by a longshot. Just because a few lower courts and the Ninth Circuit got it wrong doesn’t mean it’s right. However, no executive order can fix this, and Meta and its co-conspirator instrumentalities in its revolving door government employment scheme should be held liable.
Lawyers of the world must unite before data privacy and everything else about honest people are canceled in favor of websites and services that are nothing less or more than state actors and instrumentalities working against We the People.
Have you ever seen a post flagged on Meta’s platforms with a warning about ‘false news’ or ‘misinformation’?
Of course, you have. And it might have read something like this:
“This post was flagged as part of Meta’s efforts to combat false news and misinformation on its News Feed. (Read more about our partnership with Meta, which owns Facebook and Instagram.)”
When you see this travesty, what’s your first thought? Do you accept the fact check at face value or start questioning the validity of the labeling process? If you’re in the latter category, you aren’t alone, especially with YouTube algorithms. Like many others, you might have picked up the scent of something that might not be as it seems, something more like an Orwellian twist where freedom is slavery.
In this article, we will help you uncover the layers of deception beneath Meta’s “fact-checking” and how this relates to the legal shield of the Communications Decency Act, referred to as Section 230.
Let’s Take a Look at a Deceptive Fact Check and Punitive Action Taken By Internet Platforms Insta-Facebook-Meta
Within several hours of posting my three-part Sue Meta Under Section 230 series on Instagram under the user @themichaelehline, I was notified that my account had been throttled for 90 days. The catch is that, as in other complaints I am hearing, Meta targets older posts but achieves the same result: effectively banning the account while appearing perfectly fair.
Other users who followed me or shared the content also received a warning that they would be punished as well. Several users immediately unfollowed me, and an attorney friend of almost 15 years accused me of “disinfo” since there was a “fact check.” Meta’s action here has severely damaged my reputation.
Note That I am PUNISHED for an Old Post, Not the Post Hostile to Meta.
So I wanted to expose just how ridiculous these fact checks are to begin with, and to prove beyond a shadow of a doubt that Facebook and most social sites besides X are pushing a self-serving agenda, making them just as liable as any other publisher or purveyor of false, misleading, or defamatory information published by one person about another.
In my experience, within hours of me posting videos about Section 230 and the unfair way social media companies have escaped its proper enforcement, an ancient post of mine was flagged as “false.” Of course, my account was throttled.
One of the videos I posted on Instagram started my fall from grace.
SECTION 230 Part 3 video with Fyk.
As you can see, rather than outright remove my videos, all of a sudden, Meta moved to find my account “in violation” of its bullshit policies that can be interpreted ANY WAY Meta wants while receiving US government protections under Section 230. Watch Part 3 to get an idea of why.
Example of the Fake and Misleading FACT CHECK:
Our Post, a Parody, Says, “Awake Yet?”
It pokes fun at many posts over the years and anecdotal doomsayers but NEVER mentions the word “scientists,” etc. It has fun with taxes going up and with doomsday predictions being exaggerated.
Of course, since Meta has taken it upon itself to decide what the truth is and isn’t, as well as what reality is or isn’t, it went ahead and “hired” its surrogate, or “instrumentality,” in this case, the Democrat fringe group ClimateFeedback.org.
Here is the title of their “Independent Fact Check.”
“Scientists didn’t announce impending environmental catastrophes every decade since the 1970s.”
As you can see, nothing in the image says anything about scientists. It’s clear that Meta and the current US administration want to create a false impression of scientific consensus, as they did during the pandemic by branding as “fringe” at least one Nobel Laureate who disagreed with using mRNA tech to treat viruses. So much so that they assumed facts not in evidence to create a strike against my user account. Their appeals process is equally absurd.
This is improper. As soon as Meta enters the business of thought policing, its goals, intent, and everything else are called into question. It cannot claim it is not a publisher under Section 230(c)(1), let alone pretend its motives as a “Good Samaritan” are free from judicial or citizen oversight.
Let’s get into this a little more. First off, these fact-check labels are designed to, and DO, disparage and block the users who share the content. That fits the descriptions of unfair business practices, as well as false and deceptive business practices, NOT just defamation, as will be discussed.
META: More Than Just A Platform – Communications Decency Act?
Let’s begin by understanding the essence of the issue. Section 230, or 47 U.S.C. § 230, is a provision of the Communications Decency Act of 1996. Its purpose? To protect online platforms from liability for content posted by their users (originally, keeping kiddie porn and adult porn from being seen by kids). It was assumed social media was acting as a Good Samaritan to protect the public from “smut.” But if it did take action, ANY action other than providing users a block button, for example, Meta’s good faith was always at issue. Getting this so far?
Online Sex Trafficking Act, Etc.
Both lawmakers and presidents started growing wary of Section 230 and internet platforms, especially the one with hard-core political activist Yoel Roth in charge of “trust” and child sexual abuse material. In 2018, two significant pieces of legislation were passed: the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA).
Effect of Child Sexual Abuse Material Laws?
These laws altered aspects of Section 230, implying that platforms can now be held responsible for advertisements about prostitution posted by third parties. The primary objective of these changes was to grant authorities a more accessible pathway to prosecute and control these activities.
But as Meta and social media strengthened their revolving door partnership with the FBI and other US cabinet-level agencies, it appears that smut is now anything one political party or platform doesn’t like when it does not serve their financial or political interests.
Put simply, as a matter of law, Section 230 treats Meta not as a publisher or speaker but merely as a platform hosting user-generated content. This means they’re ONLY SUPPOSED TO BE shielded from legal ramifications arising from their users’ actions.
This seems reasonable until you peel back the layers and see Meta’s actions in play, harming users it disagrees with politically or competes with financially. Whenever Meta uses its perceived protections under Section 230 to label, classify, or unfairly compete with a content creator, it transforms itself from a passive interactive computer service (a passive platform) into an information content provider (an active player). Meta is now promoting one user or their content over another, for better or worse.
“Actions from Meta can result in a triable issue of material fact where motives, including bias, monetary, or political motives, can be questioned.”
So, it seems we have much more than just a platform to scrutinize. It may be time to reexamine Meta’s role and the use (or misuse) of Section 230.
Now that we’ve peeled back some initial layers of this issue, let’s dig deeper into what exactly transpires when you see a post flagged by Meta.
You might notice a notification stating: “This post was flagged as part of Meta’s efforts to combat false news and misinformation on its News Feed.”
The first reaction might be to trust the fact check implicitly, right? But is everything as it seems?
Consider this: sometimes, a post is labeled as false or misleading yet bears no resemblance to the original fact check conducted. It’s bizarre. But it’s more than just odd—it feels a bit like manipulation. The fact checker presumes specific facts not even discussed in the original meme or post, labels it as false, and then curbs the account of the person who posted it.
Does this strike a chord with historical instances of censorship, such as those exhibited by the Nazis, KGB, or Stasi?
Yet Meta attempts to deflect any backlash or legal repercussions, using Section 230 as a shield. They argue that fact-checkers are independent entities despite being employed by Meta. They claim this allows them to introduce a layer of objectivity to the fact-checking process. But can this claim hold water when such fact-checkers have the power (given to them by the all-powerful Zuck) to suppress content and restrict accounts?
In light of such behavior, the line between being a neutral content platform and a content provider isn’t just blurred—it becomes almost invisible. How so? Well, Meta doesn’t just provide the platform for users’ content. It also assumes the role of a user on its platform and employs “independent” fact checkers, many of whom are far-left organizations aligned heavily with Zuck’s political viewpoints.
Meta can now influence viewer sentiment and control what information goes public – a power far beyond that of a mere content platform. Public schools, especially in California, may soon use these biased absurdities as official facts and reasons to trust or distrust someone. (See Newsom’s Section 587.)
Case in point: Jason Fyk’s Section 230 videos. While sharing his views on Meta’s content manipulation, Fyk uncovered an apparent complex web of deceptive practices by the social media behemoth. From ‘shadow-banning’ to misinterpretation of facts, Fyk’s videos expose Meta’s actions that definitely raise eyebrows for anyone advocating for transparency and freedom of speech. In his case, his hundred-million-dollar PLUS company competed with Meta for paid ad space that he was generating organically.
Meta took away his millions of followers, destroying his online presence. Ultimately, after Fyk transferred the rights to his content to a paying competitor, Meta re-hosted the content, even though it allegedly violated the Meta Terms of Service.
Meta’s advertisement-supported business model relies on user engagement. Hence, Meta’s algorithms often promote false, divisive, and harmful content to their users. In this case, their entire fact-checker process is clearly deceptive and designed to portray many publishers and users in a false light.
Meta Is Backdooring Section 230
I agree that 230(c)(1) was used as a backdoor for 230(c)(2) cases like Jason Fyk’s. Judge Alsup’s recent opinion below proved Fyk correct, but he still got blown out, and Meta is still free to destroy lives (in my opinion).
What is False Light Defamation?
False light defamation occurs when someone is portrayed misleadingly or falsely in a way that could be offensive or objectionable to a reasonable person, even if the information itself is factually accurate. These fact-check labels do just that. Even if the labels were correct, Meta has become a publisher, and the propriety (GOOD FAITH) of its actions in removing “otherwise objectionable content” must now be decided by a trier of fact under Section 230(c)(2)(A). In other words, META does not get to settle allegations of bad faith; whether its action was “voluntarily taken in good faith” is for the JURY to decide!
So, what does this all mean?
It’s time to question:
Is Section 230, a law put forth to protect freedom of speech on online platforms, being weaponized to serve as a tool for misinformation and bias? Your thoughts matter in this debate. Is it high time we called for more accountability from such platforms?
One thing’s for sure: This exploration has only just begun. With a court unimpressed by tautologies and shiny objects, Meta will soon be out of the unfair competition business and back into its role as a social media platform hosting user content. Its job is not to label and restrict communications using the subterfuge of independent fact-checkers, either.
X/Elon Musk Got It Right with Community Notes.
X uses “Community Notes” to afford protection under Section 230. Community Notes are harmonious with Section 230(c)(2)(B), which states:
“(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”
Most people feel that as long as social media sites take censorship actions favoring President Biden and his son while targeting his political opponents, only a US court can right these wrongs. Meta is unilaterally TAKING PUBLISHER actions itself and getting lawsuits dismissed at whim. The revolving door employment scheme it has fostered with the DOD, FBI, and even CIA demonstrates a pattern and probable goal of undue influence over policymaking that must be investigated.
Meta is supposed “…to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”
In other words, Meta is not supposed to defame its political and financial competitors as fake, false, or misleading and then be protected for lying and unfair business practices. It is supposed to allow USERS and information content providers to MUTE, or offer a block button (technical means), or a chalkboard to share notes! All it will take is one good judge to end these tautological shenanigans by technology companies like Meta. Either way, Fyk lost his case, and it could just be that his lawyers made the wrong arguments, as did the lawyers in the Stossel case, by stipulating to Meta’s definitions as the rule of the case. To be clear, I have no skin in the game, and I DO NOT handle these cases, nor have I ever discussed the case with Fyk’s legal team.
Are you ready to file a lawsuit? Make sure you are ready!
“Thomas Jefferson complained about the verbosity of statutes, their endless tautologies, and ‘their multiplied efforts at certainty by saids and aforesaids.’” – Source: LibQuotes.
Are you ready for a favorable ruling? Please like, subscribe to, and follow us on the social media platforms that have not banned us yet. We look forward to your communications and discussing any new rules, appeals, or lawsuits.