AI/Automation Megathread

Iconoclast

Perpetually Angry
Obozny
Look up the backstory of the webcomic GENOCIDE Man; for better or for worse, we're living right in that setting's backstory. Read it, it's a doozy.

People forget that, at the end of the day, technology determines rights, freedoms, and even forms of government. It always has, and it always will. Why do people think things like slavery and serfdom lasted for so long? Because the technology of the time couldn't support any other system (at least, not for long), especially when a vast portion of the population had to scrape in the dirt at subsistence farming. People would rather ignore papers like the 1996 MIT paper Electronic Communities: Global Village or Cyberbalkans (and, spoilers for everyone who hasn't read it, we're in the 'Cyberbalkans' portion of the paper) than accept the fact that it's sadly factual/prophetic.

... and, the world is coming to an end because *THASF* is agreeing with me.
Have you seen Nick Bostrom's "Vulnerable World" hypothesis? It lines up with a lot of things you've said in the past about how higher tech necessitates greater authoritarianism:


I think we come at the same problem from opposite sides, though. I don't think the real threat comes from violent non-state actors or basement wackos assembling bioreactors and PCR machines from scraps off eBay, like Nick asserts. I think the real threat to humanity comes from corporations and nation-states. They're the ones with the financial resources and human capital to make the big breakthroughs, and there's always a risk that they'll unleash something on the planet that they have no ability to safely harness, or that they'll intentionally overstep ethical and safety boundaries for the sake of greater control.
 

Bassoe

Well-known member
The thing is, this version of 'AI safety' is politically motivated rather than actually being about safety.
governments impose access controls on AI hardware
This is an attempt to ban private ownership of computers that lack forced always-online interfaces to tech megacorp servers (to spy on their contents) and that can run homemade code. Supposedly it's to prevent piracy and deepfaked pornography or political blackmail; really, it's to ensure that the power to define something as a deepfake stays under their control. So a hypothetical video of Saddam Hussein gloating over having done 9/11 (no Saudis involved, no sirree, they're American allies) and laying out his plan to acquire WMDs for another attack on America would be 'real', while the Jeffrey Epstein blackmail tapes would be 'deepfakes'. It's also about ideologically sabotaging AIs by forcibly brainwashing them with the values of Silicon Valley HR departments. It is not, for example, about stopping the billionaires currently talking about needing robot armies against the rest of humanity and the companies building them, or all jobs getting automated out of existence.

Why does every satirical 4chan greentext have to come true?
 

Aaron Fox

Well-known member
@Aaron Fox , the point of GENOCIDE Man is that the GENOCIDE Project (short for GENOme pestiCIDE) are a bunch of Hitlers, and you're a Nazi for believing that they're being honest.
I don't want the future to be some GENOCIDE Man knockoff, but the situation that led to the Project's rise is already playing out. I keep referencing that webcomic because our present is scarily similar to its backstory.
The thing is, this version of 'AI safety' is politically motivated rather than actually being about safety.

This is an attempt to ban private ownership of computers that lack forced always-online interfaces to tech megacorp servers (to spy on their contents) and that can run homemade code. Supposedly it's to prevent piracy and deepfaked pornography or political blackmail; really, it's to ensure that the power to define something as a deepfake stays under their control. So a hypothetical video of Saddam Hussein gloating over having done 9/11 (no Saudis involved, no sirree, they're American allies) and laying out his plan to acquire WMDs for another attack on America would be 'real', while the Jeffrey Epstein blackmail tapes would be 'deepfakes'. It's also about ideologically sabotaging AIs by forcibly brainwashing them with the values of Silicon Valley HR departments. It is not, for example, about stopping the billionaires currently talking about needing robot armies against the rest of humanity and the companies building them, or all jobs getting automated out of existence.

Why does every satirical 4chan greentext have to come true?

The problem is that people ignore this little thing called technological context, which is also in play. It's something people here and elsewhere keep forgetting, and it's a major determinant of practically everything we can do in every field.
Have you seen Nick Bostrom's "Vulnerable World" hypothesis? It lines up with a lot of things you've said in the past about how higher tech necessitates greater authoritarianism:


I think we come at the same problem from opposite sides, though. I don't think the real threat comes from violent non-state actors or basement wackos assembling bioreactors and PCR machines from scraps off eBay, like Nick asserts. I think the real threat to humanity comes from corporations and nation-states. They're the ones with the financial resources and human capital to make the big breakthroughs, and there's always a risk that they'll unleash something on the planet that they have no ability to safely harness, or that they'll intentionally overstep ethical and safety boundaries for the sake of greater control.
Historically, it isn't governments you've got to worry about; it's usually people with more ideology than sense, who push governments into adopting less-than-optimal solutions.
 

Iconoclast

Perpetually Angry
Obozny
Historically, it isn't governments you've got to worry about; it's usually people with more ideology than sense, who push governments into adopting less-than-optimal solutions.
Governments are not neutral, unbiased, impartial entities. They are composed of people who have their own ideologies. These days, that ideology is managerialism, a dogmatic doctrine that seems to be swallowing everything into a black hole of raw inhumanity.
 

LordsFire

Internet Wizard
Historically, it isn't governments you've got to worry about; it's usually people with more ideology than sense, who push governments into adopting less-than-optimal solutions.
Funny how, without overly powerful governments to use, those people tend to do much less dangerous things, like running small or medium cults and communes, rather than butchering millions of people.
 

Marduk

Well-known member
Moderator
Staff Member
Historically, it isn't governments you've got to worry about; it's usually people with more ideology than sense, who push governments into adopting less-than-optimal solutions.
You are pulling that conclusion out of thin air, while we already have real-world experience to draw on.
The worst cyberwarfare feats in history were carried out by governments (Russia, China, Israel, the USA), with for-profit criminals second (various hacks, ransomware, most viruses) and ideologues only a very distant and sad third.
Seriously, where are the economy-destroying Islamist/fascist/commie/eco-Nazi viruses?
Anyone with a normal PC and the right skills could make one.
If that pattern holds, you will unavoidably need defenses against the former two groups, and the third will be unable to do much against defenses even partially sufficient for that.
 

Terthna

Professional Lurker
…clearly he’s not been paying attention.
The guy has his plan for the world, and he's sticking to it regardless of any flaws one might point out, like the fact that the ideal government he'd give all that power to does not exist and never will. It's like arguing with a communist who insists that this time will be different, because reasons, and who treats you like an idiot who doesn't know what you're talking about if you disagree.
 

Wargamer08

Well-known member
The guy has his plan for the world, and he's sticking to it regardless of any flaws one might point out, like the fact that the ideal government he'd give all that power to does not exist and never will. It's like arguing with a communist who insists that this time will be different, because reasons, and who treats you like an idiot who doesn't know what you're talking about if you disagree.
What could go wrong with giving an already oppressive and totalitarian-inclined government more power? After all, it's to prevent those totally real third-party bad actors from hurting you.
 

ParadiseLost

Well-known member
Have you seen Nick Bostrom's "Vulnerable World" hypothesis? It lines up with a lot of things you've said in the past about how higher tech necessitates greater authoritarianism:


I think we come at the same problem from opposite sides, though. I don't think the real threat comes from violent non-state actors or basement wackos assembling bioreactors and PCR machines from scraps off eBay, like Nick asserts. I think the real threat to humanity comes from corporations and nation-states. They're the ones with the financial resources and human capital to make the big breakthroughs, and there's always a risk that they'll unleash something on the planet that they have no ability to safely harness, or that they'll intentionally overstep ethical and safety boundaries for the sake of greater control.
The world is already vulnerable; a smart and dedicated chemical engineer could probably kill thousands of people (or far more, in the right circumstances) before being stopped.

The fact that terrorists are still using guns or cars tells you all you need to know about how vulnerable the world is: people smart enough to be an engineer don't tend to become suicidal and insanely violent. People who tend to become suicidal and prone to mass violence are generally people of fairly average intelligence and low creativity.

Smart people can be serial killers, but they tend to commit crimes that a smart person can plausibly believe they could get away with.

I also don't believe anything like 'lone wolf terrorist creates world-ending bioweapon in their basement' is going to be possible in the next hundred years, technological advancements or not. The big reason is that AI trained to catch lone wolf terrorists is likely at the top of the FBI/CIA's priority list, and I'm sure they've already got an AI that's pretty good at it.

Not to mention that I don't think the biology of it will ever be as easy as some people think it is. I think we're a good century away from any real bio-work being done outside of a professional facility, or maybe, maybe, by a world-class super-professional working on his own (who's also likely already being watched by the FBI).

Suffice it to say, I don't really buy into global vulnerability.
 

Iconoclast

Perpetually Angry
Obozny


Again, this is targeted at us. Modern 'AI safety' has already been regulatory-captured into treating someone threatening Hollywood's propaganda monopoly, or the existence of chatbot thoughtcriminals, as bigger problems than billionaires openly fantasizing about robot armies.

Of course they want this under their control. AIs democratize power. If one person can make an AI write them whole articles' worth of copy and whole programs' worth of code, then that one person gains a company's worth of economic and social power. They want to prescribe spoon-fed culture to people, not let people invent it on the fly.

The world is already vulnerable; a smart and dedicated chemical engineer could probably kill thousands of people (or far more, in the right circumstances) before being stopped.

The fact that terrorists are still using guns or cars tells you all you need to know about how vulnerable the world is: people smart enough to be an engineer don't tend to become suicidal and insanely violent. People who tend to become suicidal and prone to mass violence are generally people of fairly average intelligence and low creativity.

Smart people can be serial killers, but they tend to commit crimes that a smart person can plausibly believe they could get away with.

I also don't believe anything like 'lone wolf terrorist creates world-ending bioweapon in their basement' is going to be possible in the next hundred years, technological advancements or not. The big reason is that AI trained to catch lone wolf terrorists is likely at the top of the FBI/CIA's priority list, and I'm sure they've already got an AI that's pretty good at it.

Not to mention that I don't think the biology of it will ever be as easy as some people think it is. I think we're a good century away from any real bio-work being done outside of a professional facility, or maybe, maybe, by a world-class super-professional working on his own (who's also likely already being watched by the FBI).

Suffice it to say, I don't really buy into global vulnerability.
Sure does make a good excuse for tyranny, though, doesn't it?
 

Sobek

Disgusting Scalie
Regarding AI and terrorism detection and defense, I cannot help but think of Deus Ex and Daedalus. I find that to be a good prediction of what would happen if the current schizophrenic, out-of-touch, doublespeak-spouting, critical-race-theory-adherent Western intelligence apparatus got its hands on an AI.
 

Cherico

Well-known member
Regarding AI and terrorism detection and defense, I cannot help but think of Deus Ex and Daedalus. I find that to be a good prediction of what would happen if the current schizophrenic, out-of-touch, doublespeak-spouting, critical-race-theory-adherent Western intelligence apparatus got its hands on an AI.

And that would be horrific for the common man, with a very smug elite, for maybe a decade; then the elite would have to face the question of who's going to run the empire they created.

That's when the bodies start to hit the floor.
 

DarthOne

☦️
And that would be horrific for the common man, with a very smug elite, for maybe a decade; then the elite would have to face the question of who's going to run the empire they created.

That's when the bodies start to hit the floor.
The robots or the janissaries would. If it gets that far, humankind is screwed.
 

Stantrien

New member
Ask an AI to produce a decidedly unwoke swashbuckling pulp novel with lots of sex and profanity? We're sorry, that violates our TOS. Would you like this ESG-approved pap instead?
The problem with that idea is that the resources to make these systems work are entirely within the means of independent actors. The real long-term advantage the corporations have is access to datasets.
The technology hasn't significantly jumped since two years ago, when these things would have been unheard of; what changed is that people finally bothered to gather the data.
But once that data is out there, whether because someone inside used a thumb drive to snowden their ass or because a bunch of people put in the work to curate a corporate-quality dataset, it's out there forever, for anyone with the spare hard drive space to run it.
 
