AI Will Censor Speech At Scale, Bias Included

Until recently, many efforts to censor and suppress speech have required manual labor; human beings have been tasked to put their eyeballs on the page and then decide what stuff gets to remain. In the good old days, books were banned this way. Now, those eyeballs are turned toward the virtual spaces online, an environment that is much more unwieldy to monitor and control. Not only is the information on the internet copious, but it seems that some of it is too independent-minded for the government to accept. Unfortunately, the government and its elitist NGO partners have struggled to manage the load. It turns out they have neither the resources nor the manpower to shut down the naughty, freethinking rascals that roam the internet where the virtual resistance hangs out.

It is now brutally evident from a recent National Science Foundation (NSF) Interim Staff Report that the Biden administration and its minions are deeply offended by the "democratization of speech" that has proliferated on the internet. Good intentions often produce unintended consequences. In this case, there is just too much free speech going on. With the advent of machine learning, the government will now be able to control speech using Artificial Intelligence (AI). The House Judiciary Committee and the Select Subcommittee on the Weaponization of the Federal Government have obtained "non-public documents" proving the NSF is issuing grant money to "university and non-profit research teams" to develop automated speech intervention at scale using AI. The Judiciary Committee believes the move to use automation to censor speech will violate civil liberties in ways previously unseen. The Judiciary writes in its Feb. 5 report:

"As egregious as these violations of the First Amendment are, each still faced the same limitation: the censors were human. Senior Biden White House officials had to spend time personally berating the social media companies into changing their content moderation policies. Social media executives expended considerable time and effort responding to the White House's threats and evaluating the flagged content. Stanford had nearly a hundred people working for the EIP in shifts flagging thousands of posts, which was only a fraction of the number of election-related posts made in the fall of 2020."

"But what happens if the censorship is automated and the censors are machines? There is no need for shifts or huge teams of people to identify and flag problematic online speech. AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of "disinformation" bureaucrats and researchers. This interim report reveals how NSF is using American taxpayer dollars to fund the tools that could usher in an even greater threat to online speech than the original efforts to censor speech on social media. The NSF-funded projects threaten to help create a censorship regime that could significantly impede the fundamental First Amendment rights of millions of Americans and potentially do so in a manner that is instantaneous and largely invisible to its victims."

Other countries like China and Russia have been surveilling and censoring on a larger scale for a while, with robust surveillance systems in both the real and virtual worlds. There has been little pretense in their mission to be the gatekeepers of their respective dominions. Many other countries do the same without hesitation or shame. However, Westerners, especially Americans, have been slower to engage in overt, scaled surveillance. Maybe it is because ruling with an iron fist, especially one that censors alternative opinions, has never played well in America. After all, America has been known to be the freest country on the planet. We have been graced with forefathers who were remarkably thoughtful and forward-thinking in their vision. The First Amendment affords us some of the most significant protections any culture has ever enjoyed. We may have taken it for granted, but it is a precious and inspiring gift.

In part because of our birthright, we are an unruly bunch. Disturbing as they are, the standard means of controlling our thoughts and opinions have been clumsy at best. It doesn't matter how many elite institutions and NGOs our government employs to launder its dirty work; all of them have been limited by a slow process dependent on human labor. To solve that burdensome challenge, it seems our government is enlisting every possible resource, including American tax dollars, to automate the censorship of speech using Artificial Intelligence (AI). For the censors, the opportunities to suppress speech at scale are suddenly bright.

According to Monday's Judiciary report, the NSF has embraced the idea of machine-generated censorship. This activity will occur in ways people will never fully comprehend or notice. The process will be both reactive and proactive, curating information at the behest of "a small and isolated coterie of partisan social engineers" programming machines to do it. In many cases, those engineers are tasked with identifying, and coding into oblivion, any speech that does not comport with the views of the tyrants at the helm. Much of it has already been set in motion.

According to the report, Marc Andreessen, co-creator of the Mosaic graphical web browser and co-founder of Netscape, "warned that the 'level of censorship pressure that's coming for AI and the resulting backlash will define the next century of civilization.'" Even more chilling than those words is his belief that AI will save the world. According to the Committee report, Andreessen wrote a June 2023 article entitled "Why AI Will Save the World," in which he called AI "a way to make everything we care about better." He continues, "If you don't agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship."

Andreessen and those like him believe AI is one of the most critical developments for mankind. The opportunities are endless. "The stakes here are high," says Andreessen, "The opportunities are profound. AI is possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips and probably beyond those. The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future."

But then Andreessen goes on to write the thought quoted below. In the report, the quote was incomplete and therefore seemed contradictory; the original article contains a critical final sentence the report left out. His statement is almost certainly not contradictory, because his idealistic self believes AI can be an objective gatekeeper, as if it could ever be unfettered by the whims of a human brain. Yet given what we have witnessed thus far with regard to the "moderation of speech," the technocrats and their flocks of coders have been terminally incapable of keeping their opinions of what constitutes acceptable speech to themselves. And even Andreessen says below that these "scientists" are "social engineers."

Andreessen's quote in its entirety is written below:

"AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you."

"In short, don't let the thought police suppress AI."
 

From the evidence I have seen, many of these social engineers live to build the invisible prison walls within which we will be allowed to operate. Andreessen's idealism, then, is badly out of step with reality: there is no way the social engineers will not be the thought police. As the Judiciary report documents, they are already bringing their biases to the very projects most likely to set in motion, at accelerated speed, the kind of censorship one reads about in dystopian novels.

The Weaponization of the National Science Foundation: Automated Tools to Censor Online Speech "At Scale"


Thus, the Judiciary has good reason for a much more pessimistic view of AI's role in the moderation of speech. First, as mentioned in the report, multiple government agencies have already been busy funding university research groups and NGOs to help control the narrative. The Twitter Files and reporting from numerous independent news outlets, including UncoverDC, have presented unequivocal evidence of government involvement and coordination in censorship activities. We continue to see content removed or tagged on social media platforms, whether under pressure from government agencies or at the hands of the platforms' biased algorithms. Google also does its fair share of gatekeeping, as reported by UncoverDC. Facebook continues to remove content it regards as mis- or disinformation. "J6: A True Timeline," a film consisting almost entirely of unedited video content and security footage from the protest, was scrubbed from Facebook this week.

In late January, @fentasyl on X showed how Microsoft's ToxiGen programmatically defines posts against illegal immigration as "hate speech." The tool is "used universally across the industry for fine-tuning models," as seen in the Tweet below:



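To make the mechanics concrete: a tool used for "fine-tuning models" works by scoring every post and filtering on a threshold. The sketch below is purely hypothetical and is not ToxiGen's actual model or code (real systems use a trained neural classifier, not a phrase list); it only illustrates how easily a builder's chosen labels become automated flagging at scale.

```python
# Hypothetical sketch of automated content flagging -- NOT Microsoft's
# actual ToxiGen implementation. A scorer assigns each post a
# "toxicity" score, and a filter drops anything above a cutoff.
# In a real system, score_post() would be a fine-tuned classifier
# whose labels reflect the choices of whoever built the training set.

FLAGGED_PHRASES = {
    # Illustrative entries only: whatever the tool's builders decide
    # counts as "hate speech" gets encoded (or learned) here.
    "illegal immigration": 0.9,
    "open borders": 0.7,
}

def score_post(text: str) -> float:
    """Return the highest score of any flagged phrase found in the text."""
    lowered = text.lower()
    return max((s for p, s in FLAGGED_PHRASES.items() if p in lowered),
               default=0.0)

def filter_corpus(posts, threshold=0.5):
    """Split posts into (kept, flagged) by score. Censorship 'at scale'
    is just this loop run over millions of posts."""
    kept, flagged = [], []
    for post in posts:
        (flagged if score_post(post) >= threshold else kept).append(post)
    return kept, flagged

kept, flagged = filter_corpus([
    "I oppose illegal immigration.",
    "Lovely weather today.",
])
print(flagged)  # the political statement is flagged; the benign post is kept
```

The point of the sketch is that the machine has no opinion of its own: the threshold and the phrase weights are human choices, executed automatically and invisibly.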
Another Twitter user, @BasedBeffJezos, shared an AP article dated Jan. 29, 2024, stating lobbyists for Big Tech are now selling "AI Safety as a service" to the U.S. government. The article states, "The Biden administration will start implementing a new requirement for the developers of major artificial intelligence systems to disclose their safety test results to the government. The White House AI Council is scheduled to meet Monday to review progress made on the executive order that President Joe Biden signed three months ago to manage the fast-evolving technology."

We already know about Stanford's Election Integrity Partnership (EIP), which was created at the request of the DHS and CISA. That partnership worked to flag online speech related to the 2020 election. We have already found evidence of the Biden White House "directly coercing large social media companies, such as Facebook, to censor true information, memes, and satire, eventually leading Facebook to change its content moderation policies," as reported by the Judiciary. And we now know the Federal Trade Commission (FTC) has harassed "Elon Musk's Twitter (now X) because of Musk's commitment to free speech, even going so far as to target certain journalists by name," according to the report. To be honest, the partnerships are too numerous to list. 

However, this Feb. 5, 2024, report focuses on how the NSF has funded "AI-powered censorship and propaganda tools" and attempted to "hide its actions to avoid political and media scrutiny." NSF has been issuing millions in federal grants to its partners to "develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech." The aim is to "shape public opinion by restricting certain viewpoints or promoting others," according to the Judiciary report. These are taxpayer-funded projects that are allegedly already being weaponized in one way or another to limit our free speech. The partners include the University of Michigan's AI-powered WiseDex tool, Meedan with its Co-Insights tool, the University of Wisconsin's CourseCorrect, and MIT's Search Lit. These censorship tools represent state-of-the-art software designed to instantaneously identify whatever types of speech their biased human programmers instruct them to eliminate.



One of the non-profits, Meedan, proposed that NSF grant money would be used to build software and run training programs "to counter misinformation online" and "advance the state-of-art in misinformation research." Moreover, Meedan would "leverage its relationships and experience with WhatsApp, Telegram, and Signal," all supposedly among the more secure avenues of communication for users. The goal would be to "proactively identify and limit susceptibility to misinformation and pseudoscientific information online." Meedan's efforts included "crawling the open web to identify controversy to find content for its fact-checking." It would use advanced tools to inform its "misinformation interventions": machine learning for pre-emptive explainers, community tiplines, data mining, data donation, and data sharing. Meedan currently uses AI to "monitor 750,000 blogs and media articles daily" and mines data from all major social media platforms to look for "common misinformation narratives," with a focus on minorities.
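The report describes the hunt for "common misinformation narratives" only at a high level. The usual underlying technique is narrative matching: compare each incoming post against a library of known claims and flag anything sufficiently similar. The following is a minimal hypothetical sketch of that idea, not Meedan's actual code; it uses cosine similarity over word counts, where production systems would use learned text embeddings.

```python
import math
from collections import Counter

# Hypothetical narrative-matching sketch (not Meedan's implementation):
# each post is compared against a library of known "misinformation
# narratives" by cosine similarity of word-count vectors. The narrative
# library itself is supplied by humans -- the bias lives there.

NARRATIVES = [
    "vaccines cause illness",
    "the election was stolen",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def match_narrative(post: str, threshold: float = 0.5):
    """Return the best-matching known narrative, or None if no match."""
    scores = [(cosine(vectorize(post), vectorize(n)), n) for n in NARRATIVES]
    best_score, best = max(scores)
    return best if best_score >= threshold else None

print(match_narrative("they say the election was stolen"))
# prints "the election was stolen" -- the post matched a known narrative
```

Run over 750,000 articles a day, a matcher like this needs no human in the loop: whatever the narrative library labels as misinformation gets flagged automatically.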

Scott Hale, Meedan's Director of Research, emailed Track F's project manager at NSF, Michael Pozmantier, to share how excited he was to mine data to combat "hate speech and radicalization" at the press of a button. In his "dream world," he would be able to "run code on remote data sets without ever having direct access to the data," as pictured in the Nov. 17, 2022 email below:


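Hale's "dream" of running code on remote data sets without direct access describes a recognizable analysis pattern: the researcher submits a query, the data holder executes it locally, and only an aggregate result ever leaves. The sketch below is a hypothetical minimal illustration of that pattern; it is not Meedan's or any platform's actual infrastructure.

```python
# Hypothetical sketch of "running code on remote data without direct
# access": the data holder executes a researcher-submitted predicate
# locally and returns only an aggregate count. The raw posts never
# leave the holder's side -- yet the researcher still gets to measure
# (and potentially act on) the speech of users.

class DataHolder:
    def __init__(self, posts):
        self._posts = posts  # private: never exposed to the researcher

    def run_query(self, predicate) -> int:
        """Run the submitted predicate over the private data; return a count."""
        return sum(1 for post in self._posts if predicate(post))

# The researcher ships code (a predicate), not a request for the data.
holder = DataHolder([
    "post about radicalization",
    "cat pictures",
    "more radicalization talk",
])
matches = holder.run_query(lambda p: "radicalization" in p)
print(matches)  # 2 -- the researcher learns a count, never sees the posts
```

The same design that sounds privacy-preserving is also what makes intervention "largely invisible to its victims": the measuring, and whatever moderation follows from it, happens entirely out of public view.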
The NSF ultimately awarded Meedan $5.75 million to work on its behalf in Phase 1 and another $5 million in Phase II. Meedan's Co-Insights tool went through numerous iterations of names; "Fact-checker, Academic, Community-Collaboration Tools: Combatting Hate, Abuse and Misinformation with Minority-led Partnerships" was among its prior names. Co-Insights focuses on using "data and machine learning to identify, preempt, and respond to misinformation in minoritized [sic] communities."

The University of Michigan pitched its request to the NSF Convergence Accelerator team in an email. The Oct. 26, 2021 email from "Team 469" promoted its tools "as a way for policymakers at platforms" to "externalize the difficult responsibility of censorship." In other words: we will help you, Big Tech, launder your censorship so that no one questions the shadiness of your content moderation. Their work, they say, will help platforms "get good PR for their actions on misinformation by having a clear benchmark for outcomes and eliminating the need to defend internal procedures." Keep in mind that the U of M researchers artfully leave out the part where our government also slips under the radar, hiding in the background one degree of separation removed. Michigan received $750,000 for its Phase 1 funding in late September 2021. The report shows how they all talk to each other in private, which isn't a good look.



 

The investigation of the "pseudo-scientists," the term used by the Judiciary to describe the developers of these tools, reveals these NSF-funded censors are "partisan and condescending." They believe the American public "is not smart enough to discern fact from fiction, especially conservatives, minorities, and veterans." The report also shows that misinformation experts like Kate Starbird have acknowledged in an unpublished proposal that "working to counter disinformation is inherently political and is itself a form of censorship." According to the report, these pseudo-scientists know they have leverage over social media companies "to ensure the platforms bow to their demands." And in July 2023, "when an employee at Twitter refused to issue a refund to a Wisconsin CourseCorrect researcher based on his request to cancel a service upgrade on Twitter, the Wisconsin researcher sent an email threatening to publicize 'our terrible treatment with thousands of researchers to discourage their use of your products.'"

Among the findings of the Committee are the following:

  • NSF is trying to cover up its funding of AI censorship
  • NSF developed an official media strategy to hide its Track F Censorship program from the American people. Track F is tasked to research "misinformation"
  • NSF has repeatedly stonewalled Congressional investigations
  • NSF considered blacklisting Conservative media outlets

Sadly, it took a head researcher at the WiseDex project to explain to the NSF in an email that it "would be bad optics for the NSF to have a blacklist of media sites that our teams systematically refuse to engage with, especially if it includes domestic sites."

By September 2021, 12 Track F Phase 1 teams had been awarded up to $750,000 per team, a total of $9 million. Six teams advanced to Phase II and were awarded up to $5 million per team, or $30 million total over 24 months. All awards are listed in Appendix A of the 79-page report. Appendix B shows emails documenting the NSF Track F media strategy. Appendix C contains U of M's WiseDex First Pitch Slide Deck given on Oct. 26, 2021. Appendix D includes the details of MIT's Search Lit Phase 1 proposal from 2021.




 


© 2024 uncoverdc.com