War against the Machines

JagerIV

Well-known member
Partially inspired by the other thread, I figured we could talk about the war against the machines: how it could be fought, the relative strengths and weaknesses of the two sides, and how humanity could stand a chance against the cold, hard machines. I'm mostly building off a Terminator-inspired scenario, though we may diverge from it in discussion.

For my initial two cents, I will look at logistics.

Currently, there are about 2-3 million industrial robots according to Wikipedia (so clearly this analysis is built on some quick research: be forewarned). Obviously, these are dramatically outnumbered by human manufacturing workers right now (about 12 million in the United States, about 100 million in China, and 30 million in the EU, to name the really big players).

The current robots are not all that cheap either: industrial robot sales data seems to suggest an average per-unit cost of around $40,000, and a per-year upkeep of about $10-20k. Human upkeep can be much, much lower than that. Food (admittedly in a very advanced economy) can feed someone quite well for about $10 a day, far above actual lean rations of bread and water. That works out to a basic upkeep of about $4,000 a year; round up to $5,000 and that can keep a person in reasonable comfort. At slave/forced-labor rates, where the worker's comfort or long life isn't really a concern, you can push that down into the sub-$1,000-per-year range.
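For anyone who wants to check the arithmetic, here is a quick sketch of the upkeep comparison; all the inputs are the rough ballpark figures quoted above, not hard data:

```python
# Rough yearly upkeep comparison, using the ballpark figures from the post.
FOOD_PER_DAY = 10            # USD/day, comfortable rations (assumed estimate)
ROBOT_UPKEEP_LOW = 10_000    # USD/year, low end of quoted robot upkeep
ROBOT_UPKEEP_HIGH = 20_000   # USD/year, high end

human_upkeep = FOOD_PER_DAY * 365   # basic yearly food cost, ~$3,650
print(f"Human upkeep: ~${human_upkeep:,}/year")
print(f"Robot upkeep is {ROBOT_UPKEEP_LOW / human_upkeep:.1f}x to "
      f"{ROBOT_UPKEEP_HIGH / human_upkeep:.1f}x higher")
```

So even at the comfortable end, a human worker's running cost is several times below a robot's quoted maintenance alone.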

Now, for Skynet's plan to be possible at all, you likely need a higher starting degree of automation, and the initial nuking evens the odds a bit, but the technology seems likely not to be so much more advanced as to radically shift these numbers (given that in the Terminator series Skynet activates at the point necessary to save itself, not at a theoretically optimal point to conquer the world).

However, within reasonable extrapolation, it seems quite possible that even after the nuking, humanity has an overall larger number of manufacturing workers, with the potential for a larger GDP than Skynet can manage on robots alone, and that people can operate at a potentially lower absolute cost than the robots: a manufacturing robot may be cheap compared to today's very expensive assembly-line worker, but it might not be all that cost effective compared to that same worker conscripted and forced to work at subsistence rations for the good of humanity.

This absolutely lower cost of using humans in many situations, combined with an initially weaker economic base and limited numbers of robots, suggests as reasonable something that came up in Terminator I: the use of human slaves by Skynet. Even with something like Judgment Day killing off half of humanity, you're still talking about some 3-4 billion humans remaining. Especially in the short term, enslaving humans is going to be much cheaper than building new robots (keeping a human alive, by the above numbers, costs about 1/10 the price of building a new, relatively simple robot). What robot manufacturing Skynet does have is likely going to be focused on building military machines, which need to be loyal, and where I suspect (we can discuss this later) the advantages of machines will be more impactful than in manufacturing.
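That "1/10 the price" figure follows directly from the earlier numbers; a minimal check, again using the post's rough estimates rather than sourced data:

```python
# Capital cost of building a robot vs. yearly cost of keeping a slave alive,
# using the ballpark figures from the post (assumed, not sourced).
ROBOT_UNIT_COST = 40_000      # USD, average industrial robot price
HUMAN_UPKEEP_COMFORT = 4_000  # USD/year, comfortable rations
HUMAN_UPKEEP_SLAVE = 1_000    # USD/year, subsistence forced-labor rate

print(ROBOT_UNIT_COST / HUMAN_UPKEEP_COMFORT)  # 10.0 -> the "1/10" figure
print(ROBOT_UNIT_COST / HUMAN_UPKEEP_SLAVE)    # 40.0 at subsistence rations
```

At subsistence rations the gap widens to roughly 40:1, which only strengthens the case for slave labor in the early war economy.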

Thus, Skynet seems more likely to be an elite machine military, relatively small in number (due to its relatively high per-unit cost), ruling over a large economy of human slaves subject to varying degrees of aggressive exploitation. Without human slaves, the machine military may not be self-supporting, especially early on. However, the need to preserve humans in the machine economy imposes certain rules of engagement, for lack of a better word.

For instance, releasing biological weapons is much less of an option, since you're capturing territory not just to kill people or gain natural resources, but to capture slaves to add to the workforce. Killing everyone in an area denies Skynet a fresh supply of workers for the slave camps. And even if a biological weapon worked for clearing out an area, if it infects one of the new slaves, who then infects the slave camps, a campaign in a region may actually result in Skynet's industrial capacity going down as its own workforce is incapacitated by sickness or death, turning a tactical victory into a strategic defeat.

You have a similar concern with anything that makes an area more or less permanently uninhabitable, say chemical or radiological weapons. Ignoring anything which may be bad for both humans and robots (say, a poison gas that can get through a gas mask might also get through a seal on a war robot and cause problems for some sensitive piece of equipment), part of the goal of conquest would be, besides gaining more human slaves, to set up work camps staffed by humans to extract resources and build infrastructure for the war effort. For example, let's say Dresden was only lightly nuked and has become some sort of human resistance strongpoint. Skynet could just nuke it flat, but that would deny it the chance to take more slaves, and if it used any sort of lingering long-term weapon of mass destruction, that would undermine its ability to exploit the huge amount of salvage in the area. Ideally, it would prefer, as soon as it conquered the area, to move, say, 100,000 slaves in and have them strip it of all machine parts, metal, bricks, and anything else that might further Skynet's war effort.

Thus, Skynet is (at least somewhat) forced to use relatively conventional weapons against the human resistance, and it can't be quite as flagrant with the "kill masses of humans" weapons as one might think. Which is, of course, a small boon for the human resistance.
 

PsihoKekec

Swashbuckling Accountant
Haven't seen the Terminator movies past 1 & 2, but I reckon that most people wouldn't know in the beginning that it was SkyNet that launched the nukes on its own; they'd believe it was a reaction to Chinese and Russian attacks. If it controlled the early warning satellites, it could easily have pulled off such a deception and then started sending orders to surviving local authorities in lieu of the actual government, thus convincing them to create infrastructure for its war machine.
 

Scottty

Well-known member
Founder
Haven't seen the Terminator movies past 1 & 2, but I reckon that most people wouldn't know in the beginning that it was SkyNet that launched the nukes on its own; they'd believe it was a reaction to Chinese and Russian attacks. If it controlled the early warning satellites, it could easily have pulled off such a deception and then started sending orders to surviving local authorities in lieu of the actual government, thus convincing them to create infrastructure for its war machine.

And it's a central plot-point that SkyNet's robot agents can pass themselves off as people.
 

JagerIV

Well-known member
It certainly provides a better reason to go down the Terminator development program: it creates a need for Skynet to engage in infiltration, subversion, and, as a major variation from how it is generally shown in the movies, politics.

For example, if Skynet needs some continuing mining in an undeveloped region like the Congo, it may need to play some sort of politics to get a local human power structure to keep the mines working. So you could have areas under Skynet's control where Skynet is actually pretty thin on the ground.

And even in tightly controlled areas, it's still likely going to need to operate through some human power structures; you'd still likely have something like the Judenrat and Jewish Ghetto Police groups in the work camps.

Which could create some interesting story situations. For example, a somewhat morally icky situation where you have a Judenrat equivalent in some work camp which has condemned millions of their fellow humans to death, but which is only now reaching out and offering the Resistance something valuable because its members realize their usefulness to Skynet is coming to an end and are looking to save their skins.
 

JagerIV

Well-known member
Something I just thought of: how difficult, realistically, would it be for the resistance to seize Skynet's means of production?

Specifically, I'm thinking of any highly automated factories. Skynet seems in general pretty restrictive about passing out other high-functioning AIs to its underlings, so a "lights out" Skynet factory might not have all that much in the way of independently intelligent units, just a lot of relatively dumb manufacturing machines.

Which suggests seizing a Skynet factory, pretty much intact, is as simple as seizing control of the central control room/AI manager hub. How much could Skynet reasonably secure against a facility being seized like this? I know encryption can be pretty good, but it's my understanding that a lot of that is much more questionable when someone can interact directly with the machinery and wires.
 

Darth Robbhi

Protector of AA Cruisers, Nemesis of Toasters
Super Moderator
Staff Member
And it's a central plot-point that SkyNet's robot agents can pass themselves off as people.
Doesn't that inevitably lead to Blade Runner and Battlestar Galactica, where robots empathize with, or even become, human?
 

Laskar

Would you kindly?
Founder
And it's a central plot-point that SkyNet's robot agents can pass themselves off as people.
That was later in the war, probably in the last few years. In the first movie, Kyle Reese explains that the T-600 series infiltrators had rubber skin and were spotted easily.

Doesn't that inevitably lead to Blade Runner and Battlestar Galactica, where robots empathize with, or even become, human?
Not if you turn off their ability to learn freely. In a deleted plot point from Terminator 2, the Terminator's learning capacity is greatly constrained to keep them from going rogue.
 

Husky_Khan

The Dog Whistler... I mean Whisperer.
Founder
Not if you turn off their ability to learn freely. In a deleted plot point from Terminator 2, the Terminator's learning capacity is greatly constrained to keep them from going rogue.

I haven't seen Terminator Dork Fate yet but
I wonder if they explain how Ahhnuld's Terminator matured after completing his mission? Or if it's just another ignored plot hole/continuity lapse.
 

Laskar

Would you kindly?
Founder
I haven't seen Terminator Dork Fate yet but
I wonder if they explain how Ahhnuld's Terminator matured after completing his mission? Or if it's just another ignored plot hole/continuity lapse.
If it's a deleted scene, then it's easy to discount as canon.
 

JagerIV

Well-known member
You think that would be a serious problem, yes?

I'm not sure it would be that serious a problem: it didn't ever seem to be a problem for the concentration camps to any significant degree, or for most of history's slave owners, and you're looking at a broader difference than that.

It would be somewhat cold comfort, assuming the taskmasters did care, that the robot whipping you to work harder feels a little bad about working you to death.
 

Darth Robbhi

Protector of AA Cruisers, Nemesis of Toasters
Super Moderator
Staff Member
I'm not sure it would be that serious a problem: it didn't ever seem to be a problem for the concentration camps to any significant degree, or for most of history's slave owners, and you're looking at a broader difference than that.

It would be somewhat cold comfort, assuming the taskmasters did care, that the robot whipping you to work harder feels a little bad about working you to death.
There is a big difference between an overseer of Group A feeling sorry for people of Group B they are exploiting, and someone of Group A thinking they belong to Group B.

Especially if they don't know they are part of Group A, because it makes for better cover. We saw that with the Cylons, I, Robot, Data's mother, and the Replicants, and I would not be surprised if robot spies were not told they were robots, so they could lie better.

Self-preservation is part and parcel of sentience, and usually the justification for a human-robot war. Unless your programming is really slick, a robot that thinks it is a human, or wants to be a human, is going to side with humans.
 

JagerIV

Well-known member
There is a big difference between an overseer of Group A feeling sorry for people of Group B they are exploiting, and someone of Group A thinking they belong to Group B.

Especially if they don't know they are part of Group A, because it makes for better cover. We saw that with the Cylons, I, Robot, Data's mother, and the Replicants, and I would not be surprised if robot spies were not told they were robots, so they could lie better.

Self-preservation is part and parcel of sentience, and usually the justification for a human-robot war. Unless your programming is really slick, a robot that thinks it is a human, or wants to be a human, is going to side with humans.

Ah, gotcha. I'm still not sure it's a big problem. I mean, if you have a deep-cover AI that's not aware it's a robot, then how is it properly carrying out missions to support the robot cause? A robot that doesn't know it's a robot doesn't really make sense to me. I don't see the benefit.

Maybe you get some defectors, but I'm not sure it would be a materially significant number.
 

Scottty

Well-known member
Founder
Ah, gotcha. I'm still not sure it's a big problem. I mean, if you have a deep-cover AI that's not aware it's a robot, then how is it properly carrying out missions to support the robot cause? A robot that doesn't know it's a robot doesn't really make sense to me. I don't see the benefit.

Maybe you get some defectors, but I'm not sure it would be a materially significant number.

It would do things without consciously knowing why it wanted to - and feel an unwillingness to think overmuch about it.
 

Darth Robbhi

Protector of AA Cruisers, Nemesis of Toasters
Super Moderator
Staff Member
Ah, gotcha. I'm still not sure it's a big problem. I mean, if you have a deep-cover AI that's not aware it's a robot, then how is it properly carrying out missions to support the robot cause? A robot that doesn't know it's a robot doesn't really make sense to me. I don't see the benefit.

Maybe you get some defectors, but I'm not sure it would be a materially significant number.
The obvious one is a sleeper or deep cover agent, which is what the Cylons did with Boomer. Now, theoretically, you would want your programming to override the sleeper robot's will, but a lot of that would depend on just how robots gained sentience, and whether sentient robots are subject to the same Stockholm Syndrome pressures as humans.

The other interesting line to explore is robots who think they are human, empathize with humans, and then discover they are robots. Before and during a robot-human war, there are lots of advantages to robots that look and act like humans. Such roles range from robots who serve and interact with humans, to robots hiding from humans, to robots built to allow humans to live again, like Data's mom.

Even overtly robotic robots may suffer pangs of conscience if sentient. We've already seen humans get attached to even expendable robots: the manufacturers of EOD robots have lots of stories of soldiers in tears bringing broken bots to repair areas, begging for the bots to be fixed. So even robots that know what they are may well change their minds about humans and a human-robot war.
 
