Media/Journalism Cringe Megathread - Hot off the Presses

Tzeentchean Perspective

Well-known member
A month ago, before Whitehall was set to members only, someone in the BLM protest thread linked to an article from a Thai-based news website that basically talked about the protests in Minneapolis as though it were "a western news site talking about a former European colony", filled with terms like "former British colony of America" and "ethnic tensions". I don't care enough to find it again; it was funny, but alas, still inaccurate. After all, we all know that in many nations formed after decolonization, the oppressed minorities form actual armed insurgent groups across wide swaths of land that launch attacks on government military facilities. BLM just burns down their own neighborhoods and gets pandered to by major corporations.
 

Urabrask Revealed

Let them go.
Founder
They reveal to the world how utterly afraid they are of what is supposedly a lie. That just makes people more curious about why that specific message has to be "fact-checked".
 

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder
Sotnik
Given he's a minor, and it was a website hack where no lives were directly threatened, decent odds he gets tried as a minor and faces relatively light repercussions for this.

And in exchange, he becomes an internet legend, and could probably parlay it into a lifetime of job security in white-hat hacking.

Important Update!


Hackers gonna hack...

 

Chaos Marine

Well-known member

Love all of these new parody news sites popping up. They almost look like actual news sites!
Reminds me of that story from last year in which an AI was called racist because it was selecting only male candidates to be eligible to become soldiers or something like that. So much is racist nowadays that I'm just waiting for an unironic article to come out and say: "Reality is racist. Blacks and women most affected," and then a correction on the headline a couple of days later, "Black-trans and trans-women most affected," with the subheader, "Please! Don't cancel us! It was a mistake! Please! We have kids (which are really cats) to feed! Boo hoo hoo hoo!"
 

Abhorsen

Local Degenerate
Moderator
Staff Member
Comrade
Osaul
Reminds me of that story from last year in which an AI was called racist because it was selecting only male candidates to be eligible to become soldiers or something like that. So much is racist nowadays that I'm just waiting for an unironic article to come out and say: "Reality is racist. Blacks and women most affected," and then a correction on the headline a couple of days later, "Black-trans and trans-women most affected," with the subheader, "Please! Don't cancel us! It was a mistake! Please! We have kids (which are really cats) to feed! Boo hoo hoo hoo!"
It's this close to correctly covering a topic that actually does matter, and still ends up saying something entirely wrong. Make no mistake, 'algorithmic racism' is totally a thing; it's just covered stupidly in the media.

For example, take facial recognition. One can code an AI that uses machine learning to decide whether a particular picture is of a man or a woman, and describe what they are wearing. The way machine learning works, you feed the computer a bunch of training data, and it works off of that. But say your training data was a representative sample of America. You could end up with a machine that correctly identifies white men as men and white women as women, but screws up when given a black face. In this sort of situation, you might need to feed the machine equal amounts of training data from each race to get your AI to work.
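
Rough sketch of what I mean, with made-up data and an off-the-shelf scikit-learn classifier standing in for the face AI. Every group, feature, and number here is invented purely to show the shape of the problem, not to model any real system:

```python
# Toy demonstration: the same model, trained on a 90/10 mix of two groups,
# ends up far more accurate on the majority group. Nothing here encodes
# race; the gap comes entirely from the composition of the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, direction):
    # Fake "images": 5 features per sample. The man/woman label depends on
    # `direction`, which differs between groups -- a stand-in for the model
    # needing to learn different cues for different faces.
    X = rng.normal(size=(n, 5))
    y = (X @ direction > 0).astype(int)
    return X, y

dir_a = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # cues that matter for group A
dir_b = np.array([0.0, 0.0, 1.0, 1.0, 0.0])  # different cues for group B

# "Representative" training sample: 90% group A, 10% group B.
Xa, ya = make_group(900, dir_a)
Xb, yb = make_group(100, dir_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Equal-sized held-out sets expose the gap.
Xa_test, ya_test = make_group(1000, dir_a)
Xb_test, yb_test = make_group(1000, dir_b)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
# Typically: very high on group A, not far above a coin flip on group B.
```

Nothing in that code ever mentions race; the gap comes purely from the training mix.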

Also, say you feed a bunch of resumes from accepted and rejected job candidates to an AI designed to sort through resumes. First, if there was any racism, even unconscious racism, in those past decisions, the machine will detect it and run with it. But that isn't really the issue. The machine can end up racist even if HR isn't, so long as the applicant pool has a racial skew in its success rates. Say that, for some reason, the Latino candidates who have applied so far are above average (maybe because a certain community center is having them apply or something). The machine will look at that and go: "Oh, you want Latino candidates." It has no context for what "better" means, just that HR accepted a higher proportion of Latino candidates. This means that when the machine is given two candidates that are otherwise equal, it will choose the Latino one. Worse, it could easily weigh being Latino as important enough to push less qualified candidates through. And voilà, your AI is racist, judging on the color of skin instead of the content of character.
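
Same kind of toy sketch for the resume case (again, invented data and a stand-in logistic-regression screener, not anyone's real hiring system):

```python
# Toy resume screener: historical acceptances were based purely on skill,
# but one group's applicant pool happens to skew more skilled, so the model
# learns the group flag itself as a shortcut.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)                 # 1 = the above-average pool
skill = rng.normal(loc=group * 1.0, scale=1.0)     # group 1 skews higher
accepted = (skill + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

# The screener never sees true skill, only a noisy resume score plus the
# group flag that slipped into the feature set.
resume_score = skill + rng.normal(scale=1.0, size=n)
X = np.column_stack([resume_score, group])

model = LogisticRegression().fit(X, accepted)
print("weight on resume score:", model.coef_[0][0])
print("weight on group flag:  ", model.coef_[0][1])

# Two candidates with identical resumes, differing only in group:
pair = np.array([[1.0, 0.0], [1.0, 1.0]])
print("acceptance probabilities:", model.predict_proba(pair)[:, 1])
# The group-1 candidate scores higher despite the identical resume.
```

The historical decisions in that toy setup were based purely on skill, but because the screener only sees a noisy resume score, the group flag still earns a positive weight.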
 
Last edited:

Terthna

Professional Lurker
It's this close to correctly covering a topic that actually does matter, and still ends up saying something entirely wrong. Make no mistake, 'algorithmic racism' is totally a thing; it's just covered stupidly in the media.

For example, take facial recognition. One can code an AI that uses machine learning to decide whether a particular picture is of a man or a woman, and describe what they are wearing. The way machine learning works, you feed the computer a bunch of training data, and it works off of that. But say your training data was a representative sample of America. You could end up with a machine that correctly identifies white men as men and white women as women, but screws up when given a black face. In this sort of situation, you might need to feed the machine equal amounts of training data from each race to get your AI to work.

Also, say you feed a bunch of resumes from accepted and rejected job candidates to an AI designed to sort through resumes. First, if there was any racism, even unconscious racism, in those past decisions, the machine will detect it and run with it. But that isn't really the issue. The machine can end up racist even if HR isn't, so long as the applicant pool has a racial skew in its success rates. Say that, for some reason, the Latino candidates who have applied so far are above average (maybe because a certain community center is having them apply or something). The machine will look at that and go: "Oh, you want Latino candidates." It has no context for what "better" means, just that HR accepted a higher proportion of Latino candidates. This means that when the machine is given two candidates that are otherwise equal, it will choose the Latino one. Worse, it could easily weigh being Latino as important enough to push less qualified candidates through. And voilà, your AI is racist, judging on the color of skin instead of the content of character.
In short, false pattern recognition is a thing with poorly constructed algorithms, and can resemble racism in the abstract.
 

Abhorsen

Local Degenerate
Moderator
Staff Member
Comrade
Osaul
In short, false pattern recognition is a thing with poorly constructed algorithms, and can resemble racism in the abstract.
Worse, it happens in what I would call the 'default' way of building them. Unless one knows about the problem and actively works to counteract it, you will end up with it. You need to be sure your training data is equitable, not merely representative, for anything the AI might handle differently. You need to explicitly mask race, gender, and other such attributes from the data the AI is fed, and also make sure the data only skews towards good candidates, not towards candidates of a particular race. This second part is very difficult, and I recall a tech company that did this and still ended up with a 'racist' AI.
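
To show why the masking part is so hard, here's one more toy sketch (same invented setup as above, nothing from any real case): the group column is dropped, but a proxy feature that tracks it sneaks the bias right back in.

```python
# Toy version of "we dropped the race column, so we're fine": the group
# column is masked, but a proxy feature that tracks it (think zip code, or
# which community center the application came through) carries the same
# signal straight back into the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

group = rng.integers(0, 2, size=n)
skill = rng.normal(loc=group * 1.0, scale=1.0)
accepted = (skill + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

resume_score = skill + rng.normal(scale=1.0, size=n)
proxy = (group + rng.normal(scale=0.3, size=n) > 0.5).astype(int)  # ~= group

# "Masked" feature set: the group column itself is gone...
model = LogisticRegression().fit(np.column_stack([resume_score, proxy]), accepted)

# ...but candidates identical except for the proxy still get different scores.
pair = np.array([[1.0, 0.0], [1.0, 1.0]])
print("weight on proxy:", model.coef_[0][1])
print("acceptance probabilities:", model.predict_proba(pair)[:, 1])
```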
 
