Free Speech and (Big Tech) Censorship Thread

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder
This thread needs more bumps because it happens so often... for example.


Excellent. If it worked in countries mired in civil strife and authoritarianism like Sri Lanka and Myanmar, it should work even better in the United States. But whom will we get to utilize these tools?


Oh... nevermind. I guess Facebook covered all their bases. (y)
 

Cherico

Well-known member
Husky_Khan said:
This thread needs more bumps because it happens so often... for example.

Excellent. If it worked in countries mired in civil strife and authoritarianism like Sri Lanka and Myanmar, it should work even better in the United States. But whom will we get to utilize these tools?

Oh... nevermind. I guess Facebook covered all their bases. (y)

That's pretty much treason, isn't it? Having foreign nationals censor Americans? I mean, we don't just have one smoking gun, we have a literal pile of them. Maybe the Republicans need to find their balls and handle this now?
 

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder
Wired Magazine did a biiiiiiig article on YouTube's AI-based algorithm and how it's going to town on conspiracies dealing with QAnon, the WuFlu, and the poor Flat Earthers.

Article said:
For four years, Sargent's flat-earth videos got a steady stream of traffic from YouTube's algorithms. Then, in January 2019, the flow of new viewers suddenly slowed to a trickle. His videos weren't being recommended anywhere near as often. When he spoke to his flat-earth peers online, they all said the same thing. New folks weren't clicking. What's more, Sargent discovered, someone—or something—was watching his lectures and making new decisions: The YouTube algorithm that had previously recommended other conspiracies was now more often pushing mainstream videos posted by CBS, ABC, or Jimmy Kimmel Live, including ones that debunked or mocked conspiracist ideas. YouTube wasn't deleting Sargent's content, but it was no longer boosting it. And when attention is currency, that's nearly the same thing.

“You will never see flat-earth videos recommended to you, basically ever,” he told me in dismay when we first spoke in April 2020. It was as if YouTube had flipped a switch.

In a way, it had. Scores of them, really—a small army of algorithmic tweaks, deployed beginning in 2019. Sargent's was among the first accounts to feel the effects of a grand YouTube project to teach its recommendation AI how to recognize the conspiratorial mindset and demote it. It was a complex feat of engineering, and it worked; the algorithm is less likely now to promote misinformation. But in a country where conspiracies are recommended everywhere—including by the president himself—an algorithm can't fix what's broken.
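For anyone curious what "no longer boosting" looks like mechanically, here is a minimal sketch of score-based demotion in a recommender. Everything in it (the names, the scores, the penalty factor) is invented for illustration; it is not YouTube's actual code, just the general technique the article describes:

```python
# Illustrative sketch only -- none of this is YouTube's actual system.
# Assumes each candidate video already carries a base relevance score
# and a classifier-estimated probability of being "borderline".

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    relevance: float      # base recommendation score
    p_borderline: float   # classifier output in [0, 1]

def demote(candidates, penalty=0.9):
    """Down-rank rather than delete: scale each video's score by how
    strongly the classifier flagged it, then re-sort the feed."""
    return sorted(
        candidates,
        key=lambda c: c.relevance * (1.0 - penalty * c.p_borderline),
        reverse=True,
    )

feed = demote([
    Candidate("flat_earth_lecture", relevance=0.95, p_borderline=0.90),
    Candidate("network_news_clip", relevance=0.80, p_borderline=0.05),
])
# The flagged video is still in the list (not deleted), it just sinks.
print([c.video_id for c in feed])  # ['network_news_clip', 'flat_earth_lecture']
```

The design point is exactly the one the article makes: scaling a score instead of filtering the item means the video stays on the platform and stays watchable, it just stops being surfaced.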

It later goes into detail on how they developed their new AI, which classifies "borderline" material that they felt should be down-ranked.

But what about content that wasn't quite bad enough to be deleted? Like alleged conspiracies or dubious information that doesn't advocate violence or promote “dangerous remedies or cures” or otherwise explicitly violate policies? Those videos wouldn't be removed by moderators or the content-blocking AI. And yet, some executives wondered if they were complicit by promoting them at all. “We noticed that some people were watching things that we weren't happy with them watching,” says Johanna Wright, one of YouTube's vice presidents of product management, “like flat-earth videos.” This was what executives began calling “borderline” content. “It's near the policy but not against our policies,” as Wright said.

By early 2018, YouTube executives decided they wanted to tackle the borderline material too. It would require adding a third R to their strategy—“reduce.” They'd need to engineer a new AI system that would recognize conspiracy content and misinformation and down-rank it.

...

To create an AI classifier that can recognize borderline video content, you need to train the AI with many thousands of examples. To get those training videos, YouTube would have to ask hundreds of ordinary humans to decide what looks dodgy and then feed their evaluations and those videos to the AI, so it could learn to recognize what dodgy looks like. That raised a fundamental question: What is “borderline” content? It's one thing to ask random people to identify an image of a cat or a crosswalk—something a Trump supporter, a Black Lives Matter activist, and even a QAnon adherent could all agree on. But if they wanted their human evaluators to recognize something subtler—like whether a video on Freemasons is a study of the group's history or a fantasy about how they secretly run government today—they would need to provide guidance.

YouTube assembled a team to figure this out. Many of its members came from the policy department, which creates and continually updates the rules about the content YouTube bans outright. They developed a set of about three dozen questions designed to help a human decide whether content moved significantly in the direction of those banned areas, but didn't quite get there.
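As a rough illustration of the pipeline described above (human raters answer the questionnaire, their aggregated labels become training data, and a model learns to score new content), here is a toy sketch. The example texts, labels, and text-only features are all invented, and YouTube's production classifier obviously works at vastly larger scale on far richer signals than transcript text:

```python
# Toy sketch of the rater-labels-to-classifier pipeline the article
# describes. The texts, labels, and text-only features are invented;
# the real system trains on far richer video and metadata signals.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater output: each label is the aggregated judgment human
# evaluators reached by working through the ~three dozen policy questions
# (1 = borderline, 0 = fine).
texts = [
    "the masons secretly control every western government",
    "a documentary history of freemasonry in the 18th century",
    "nasa has hidden the true shape of the earth from you",
    "how commercial pilots actually plan great-circle routes",
]
rater_labels = [1, 0, 1, 0]

# Learn to recognize "what dodgy looks like" from the human examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, rater_labels)

# Score a new upload; a high probability feeds the "reduce" step
# (down-ranking in recommendations), not removal.
new_video = ["they are secretly running the government today"]
print(model.predict_proba(new_video)[0][1])
```

A probability like the one printed here is what the down-ranking step sketched earlier would consume as its p_borderline input.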

And a concluding remark:

By the summer, YouTube was publicly declaring success: It had reduced by 50 percent the watch time of borderline content that came from recommendations. By December it reported a reduction of 70 percent.

...

The channels they grouped under “Social Justice,” on the far left, lost a third of their traffic to mainstream sources like CNN; conspiracy channels and most on the reactionary right—like “White Identitarian” and “Religious Conservative”—saw the majority of their traffic slough off to commercial right-wing channels, with Fox News being the biggest beneficiary.

Then it goes on to tell how 2020, with the pandemic and the race riots, sorry, George Floyd riots, flipped everything upside down again, and it ends on a very sad note, as the Google engineers seem sad that they cannot control society as easily as they had hoped.

One of the former Google engineers I spoke to agreed: “Now that society is so polarized, I'm not sure YouTube alone can do much,” the engineer said. “People who have been radicalized over the past few years aren't getting unradicalized. The time to do this was years ago.”

*points and laughs*
 

Cherico

Well-known member
You know, we could have stopped this fuckery in its tracks two fucking years ago.

But you know, maybe this is a lesson on not being lazy asshats.
 

gral

Well-known member
I don't know if you heard, but stocks of Twitter, Facebook, and all related companies went up today after the election-related news (Facebook had an 8% rise today, I think).
 

Aldarion

Neoreactionary Monarchist

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder
Steve Bannon has been banned (heh) from Twitter for stating he wants Fauci's head on a pike. Hyperbole he uttered on his podcast... not on Twitter itself.

 

Cherico

Well-known member

I kind of saw this coming.

Big Tech openly declared war on the right, so the right has no choice but to crush them, and it has plenty of material reasons to do so. The far left hates Big Tech for ideological reasons, because algorithms do not pay taxes, and the establishment, at the end of the day, can't let Big Tech have that much power, because it might be used against them one day.

Big Tech is going to be crushed.
 

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder

Twitter has decided to expand its news publishing options on its "platform" with helpful popups giving users an extra reminder that certain tweets they are about to like or retweet allegedly contain misinformation.
 

Yinko

Well-known member
Not sure if this is the right thread, but it's a pretty niche thing. I was listening to a fiction podcast today and they used "hetero-normative" as an insult.
"That's pretty hetero-normative, Em"
"Yeah, I'm sorry"
Has anyone here ever heard of this being used this way before? It seems... dumb to insult the vast majority of the population like that.
 

gral

Well-known member
Yinko said:
Not sure if this is the right thread, but it's a pretty niche thing. I was listening to a fiction podcast today and they used "hetero-normative" as an insult.
"That's pretty hetero-normative, Em"
"Yeah, I'm sorry"
Has anyone here ever heard of this being used this way before? It seems... dumb to insult the vast majority of the population like that.
I think I have (although some variation of 'cis-normative' is more likely to be used that way). And yes, it's dumb to someone like me or you (that is, the hetero-normative). However, just by using this sort of 'insult', he shows that he (actually, both people in that conversation) doesn't care about anyone who doesn't share their views.
 

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder

300,000 tweets about the election were labeled under Twitter's 'Civic Integrity Project', which, according to them, amounted to 0.2% of the total tweets about the 2020 elections published on their platform.

Twitter also racked up almost 400 million views on something called "pre-bunk prompts," which are short news articles Twitter publishes on its platform to give users its POV on the elections.
 
