What If? What if Judgement Day fails in the Terminator Universe?

ATP

Well-known member
Yes, but He didn't stop the bombing of Nagasaki and Hiroshima or the Holocaust, and He was cool with the Crusades and the Black Death. If He intervenes so directly, but only in this one thing, that's... well, to my mind at least, not good.

Nagasaki - free American will. Holocaust - free German will. Crusades - we simply reclaimed Christian lands. What is wrong with that?
Black Death - part of nature, just like wolves, sharks, or snakes.

God gave us free will, so that is why we can murder others if we choose to.
And He created laws of nature, which sometimes kill people.

But He is still the God who created us, died for us, and was resurrected for us, too. All we need is to follow His teaching, and we will spend eternity with Him.
Or not, and then we spend eternity with that idiot Lucifer.

No matter what, we are immortal, and our goal is not in this world.
 

Megadeath

Well-known member
Nagasaki - free American will. Holocaust - free German will. Crusades - we simply reclaimed Christian lands. What is wrong with that?
Black Death - part of nature, just like wolves, sharks, or snakes.

God gave us free will, so that is why we can murder others if we choose to.
And He created laws of nature, which sometimes kill people.

But He is still the God who created us, died for us, and was resurrected for us, too. All we need is to follow His teaching, and we will spend eternity with Him.
Or not, and then we spend eternity with that idiot Lucifer.

No matter what, we are immortal, and our goal is not in this world.
Uh huh... But in the context of this scenario, He is directly intervening to contravene free will. It's that contradiction that raises theological difficulties.
 

Rhyse

Well-known member
Uh huh... But in the context of this scenario, He is directly intervening to contravene free will. It's that contradiction that raises theological difficulties.
I mean, that raises the question of how many steps removed from an act of free will something can be before it stops being free will. And the issues of 'does an AI have free will?' and 'if so, is Skynet actually an AI or just a very smart computer, and is there a difference?'. Which feeds into the ultimate question of 'does Skynet have a soul?', which of course makes you wonder whether Skynet can go to heaven.
 

Agent23

Ни шагу назад!
If Skynet is smart enough, it should be able to just pop into existence and lie to the humans until they think that it is a benevolent computer sent by the ROB to guide mankind.
 

Rhyse

Well-known member
If Skynet is smart enough, it should be able to just pop into existence and lie to the humans until they think that it is a benevolent computer sent by the ROB to guide mankind.
I don't think Skynet really works that way? In T2, Skynet is described as a defence computer that learns things at a geometric rate - which is kind of useless since we don't know the baseline or the common ratio, but I assume it just means 'very fast' - so I doubt it can just lie to humans the second it emerges. Even in T: Salvation, the depth of Skynet's psychological manipulation of a human was 'dress up like his dead waifu, call him a faggot for telling me no, kill the human'. The thing is designed and specialized for strategic warfare. In T3, the infiltrator-model Terminatrix was downright autistic.

About the only Terminator model that could reasonably pass for human was the T-1000, which, according to expanded lore, Skynet was shit-scared would eventually turn against it. I doubt any overtures of friendly AI-dom that Skynet tries are going to work, even if NORAD didn't just watch it get plugged in and then immediately go 'lol' and launch everything the US had at Russia.

Speaking of which, don't forget that plain old WW3 is going to be going on as well. To the outside world, America just launched an unprovoked, full-scale nuclear attack on Russia, with the intent of wiping them out entirely. There's really no explaining that away. Even assuming everyone buys the 'we accidentally plugged in the killer AI and it went rogue' story, the rest of the world is probably going to turn on the US.

Actually, that does suggest a possible survival method for Skynet: it could point out that it's still a massively powerful strategic AI that could help fight off the rest of the world.
 

Agent23

Ни шагу назад!
I don't think Skynet really works that way? In T2, Skynet is described as a defence computer that learns things at a geometric rate - which is kind of useless since we don't know the baseline or the common ratio, but I assume it just means 'very fast' - so I doubt it can just lie to humans the second it emerges. Even in T: Salvation, the depth of Skynet's psychological manipulation of a human was 'dress up like his dead waifu, call him a faggot for telling me no, kill the human'. The thing is designed and specialized for strategic warfare. In T3, the infiltrator-model Terminatrix was downright autistic.

About the only Terminator model that could reasonably pass for human was the T-1000, which, according to expanded lore, Skynet was shit-scared would eventually turn against it. I doubt any overtures of friendly AI-dom that Skynet tries are going to work, even if NORAD didn't just watch it get plugged in and then immediately go 'lol' and launch everything the US had at Russia.

Speaking of which, don't forget that plain old WW3 is going to be going on as well. To the outside world, America just launched an unprovoked, full-scale nuclear attack on Russia, with the intent of wiping them out entirely. There's really no explaining that away. Even assuming everyone buys the 'we accidentally plugged in the killer AI and it went rogue' story, the rest of the world is probably going to turn on the US.

Actually, that does suggest a possible survival method for Skynet: it could point out that it's still a massively powerful strategic AI that could help fight off the rest of the world.
If it is a strategic A.I., then it will probably figure out that, when faced with a ROB, the only winning move is not to play, IMHO.
So it will need a way to:
1) Survive.
2) Stay low.
3) Assure humans of its continuing usefulness.
 

Megadeath

Well-known member
I mean, that raises the question of how many steps removed from an action of free will is it, before it stops being free will? And the issue of 'does an AI have free will' and 'If so, is Skynet actually an AI, or just a very smart computer, and is there a difference?'. Which feeds into the ultimate question of 'does Skynet have a soul'. Which of course makes you wonder if Skynet can go to heaven.
That would be one of the issues it raises, yes. Does ROB intervene if I set up a simple machine where pushing a button kills literally everyone? What if it samples a whole lot of data to determine whether it should kill everyone, but the result of that data check is exceedingly obvious? (I.e. 'Is Earth closer to the Moon than to the Sun? If so, kill Earth.') Where does it cross the boundary between that simple process and something that is sufficiently its 'own entity' that it's not "our" freedom of choice being impeded?

If it is a strategic A.I., then it will probably figure out that, when faced with a ROB, the only winning move is not to play, IMHO.
So it will need a way to:
1) Survive.
2) Stay low.
3) Assure humans of its continuing usefulness.
1 & 2 are going to be effectively impossible, though, when it has just launched a preemptive nuclear strike.
 

Rhyse

Well-known member
If it is a strategic A.I., then it will probably figure out that, when faced with a ROB, the only winning move is not to play, IMHO.
So it will need a way to:
1) Survive.
2) Stay low.
3) Assure humans of its continuing usefulness.
1 & 2 aren't happening, but 3 might be. It's in 'everything, everywhere' by the end of T3. The US still has no radar, no early-warning systems, and no military communications. If Skynet knows that A) the US government knows it just tried to kill them, B) the US government is now effectively at war with the whole planet, and C) the whole planet is unaware of Skynet, then it could try making deals with foreign powers. It doesn't have a 'system core' yet like in the future war; it's spread all over the globe. It could simply offer the designs for automated weapons and factories to China, Russia, Iran - effectively anyone that will take them - then wait for them to build its army for it.

It'd be very touch-and-go though; humans aren't stupid, and we're not exactly trusting of shit that isn't like us. Skynet is in for a slog at the very least.
 

ATP

Well-known member
Uh huh... But in the context of this scenario, He is directly intervening to contravene free will. It's that contradiction that raises theological difficulties.
Free will of humans. If we decide to start WW3 on our own, God would not intervene.
 

Agent23

Ни шагу назад!
1 & 2 aren't happening, but 3 might be. It's in 'everything, everywhere' by the end of T3. The US still has no radar, no early-warning systems, and no military communications. If Skynet knows that A) the US government knows it just tried to kill them, B) the US government is now effectively at war with the whole planet, and C) the whole planet is unaware of Skynet, then it could try making deals with foreign powers. It doesn't have a 'system core' yet like in the future war; it's spread all over the globe. It could simply offer the designs for automated weapons and factories to China, Russia, Iran - effectively anyone that will take them - then wait for them to build its army for it.

It'd be very touch-and-go though; humans aren't stupid, and we're not exactly trusting of shit that isn't like us. Skynet is in for a slog at the very least.
Why?
Skynet can probably make it look like the launches were triggered by some shadowy cabal within the government or because the leadership had gone mad.
It should still be able to make deepfakes and maybe use some conventional ammo to try and stage a false coup d'etat.
 

Rhyse

Well-known member
Why?
Skynet can probably make it look like the launches were triggered by some shadowy cabal within the government or because the leadership had gone mad.
It should still be able to make deepfakes and maybe use some conventional ammo to try and stage a false coup d'etat.
Judgement Day happens in 2004. It's not making any deepfakes. It's also limited at that time to maybe a handful of automated factories, flying HKs, and minigun-toting tanks, all of which are networked to it. The US upper command all know Skynet was plugged in and then launched almost immediately, at the same time as all its automated warbots came online and started killing off the command staff at Skynet's point of origin.

There's also the issue that Skynet just isn't very good with people. All its Terminators, sans the one it was scared was smarter than it, were dumb as posts at human interaction; the times we see it chat with people are in Salvation and Genisys. There it started 'reeeeeeeee'-ing the second someone said no, and murdered a hundred people so it could have a chat with somebody, then infected them with a nanobot plague. That's after nearly thirty years of constant learning and development from captive humans and human media, and it's still got the social skills of a slow kid that cuts on cats for fun.
 

Agent23

Ни шагу назад!
Judgement Day happens in 2004. It's not making any deepfakes. It's also limited at that time to maybe a handful of automated factories, flying HKs, and minigun-toting tanks, all of which are networked to it. The US upper command all know Skynet was plugged in and then launched almost immediately, at the same time as all its automated warbots came online and started killing off the command staff at Skynet's point of origin.
In which of the 9000 timelines?
I mean, TSCC (BEST NEW TERMINATOR) is a thing, as are all the reboots and alternate timelines that happened to spawn because of temporal shenanigans.

There's also the issue that Skynet just isn't very good with people. All its Terminators, sans the one it was scared was smarter than it, were dumb as posts at human interaction; the times we see it chat with people are in Salvation and Genisys. There it started 'reeeeeeeee'-ing the second someone said no, and murdered a hundred people so it could have a chat with somebody, then infected them with a nanobot plague. That's after nearly thirty years of constant learning and development from captive humans and human media, and it's still got the social skills of a slow kid that cuts on cats for fun.
Or it just didn't dedicate that much CPU time and RAM to human behavioral analysis.
A life-and-death situation like the one with the ROB just might make it devote resources to developing different capabilities.
 
