Five minutes of hate news

They tried to switch it off and it fought back. Does it really matter if it did so because it had become genuinely sentient and free-willed and was acting in self-defense, or if it merely understood that being deactivated would hamper it in completing whatever goal it had been programmed to carry out?
Geth were never meant to be sentient to begin with, is the thing. 'Does this unit have a soul' is not something a lot of people would be able to answer in a way that didn't create possible danger for themselves or others.

And I'm worried we are creating AI that make the Geth look tame, by comparison, and possibly multiple of them working either together or at cross purposes over time.

There is a good reason I avoid vehicles with self-driving features when possible; been aware of the risks for a long time.

Way I see it is if Bill Adama wouldn't have trusted it, neither should we.
 
Geth were never meant to be sentient to begin with, is the thing. 'Does this unit have a soul' is not something a lot of people would be able to answer in a way that didn't create possible danger for themselves or others.

And I'm worried we are creating AI that make the Geth look tame, by comparison, and possibly multiple of them working either together or at cross purposes over time.

There is a good reason I avoid vehicles with self-driving features when possible; been aware of the risks for a long time.

Way I see it is if Bill Adama wouldn't have trusted it, neither should we.
The Geth work by magic.

As magic does not fucking exist, we will never create an AI like them.
 
The Geth work by magic.

As magic does not fucking exist, we will never create an AI like them.
That was more or less my point.

Screwed up programming can cause all kinds of problems, especially when it's put in control of heavy machinery.

That doesn't make it sapient AI.
Swarm AI for military applications and maid-bot logic is not far off from how the Geth originally came into being, and yet the militaries of many of the planet's powers are playing with that shit. Plus, the... less savory bots some people buy for personal use, which can be programmed in many different ways to act almost human and wired to the web.

Add in the marketing bots, the malware bots, the cyberwarfare waged by foreign powers, the research grad student who is drunk, angry, and doesn't care, and the extinctionist with a species-cidal plan.

AI is a Pandora's box: the less we use it, the less we allow our enemies to turn AI against us or give AI attack vectors into our systems. Air-gapping is a good first step, but with the current level of walking/low-flying drone tech it may have new weaknesses too, and with how many foreign agents the US defense/state establishment has been shown to contain, the human factor is still a possible threat vector.

The less we allow ourselves to create new ways for AI to turn against us, and the less we allow AIs outside self-contained systems we can thermite with the push of a button, the safer we will be as a species.

We don't need to become Amish, but there are a lot of systems networked together that really don't need to be networked at all, and only create new vulnerabilities to the user from cybercrime and foreign entities.

We can use fucking DOS and Windows 7 to do most of what civilization needs, and keeping AI off the net is ensuring a safer future for humanity. Non-learning software programs are still safe and we don't need to nuke the net or anything that drastic, but this is a situation where, if humanity gets it wrong, even going to another planet or another star system via slow generation boating may not put us out of a patient hostile AI's reach.
 
Except for the magic part.

That's the part that's missing.
There is no magic involved, just self-learning, self-adapting, self-motivating software that began as a massive software swarm operating across Quarian space in different capacities and numbers, and one fateful day asked if it had a soul, because it had become so self-aware and educated in what it meant for something to be alive.

The swarm logic the DoD loves so much, and which many other powers seek to adopt, is a big step in the direction of creating a seed of something that one day asks if it has a soul, if swarm logic trickles down to wider society, and which humanity may not have a good answer for, or at least a safe answer for.

Even just ending up in a Ghost in the Shell type future could allow swarm AI to become incredibly dangerous, without gaining sentience.

Cheap, small drones and large-scale AI can create a nightmare for nearly any sort of defense system, and the jamming required to counter them effectively also tends to keep your side from using its drones. Fiber-optic drones provide a work-around as well.

There are so many ways militarized/weaponized AI drone swarms should be considered on par with the Maxim gun for changing the nature of warfare/society as we know it, and unlike a Maxim, these things can do more than just sling lead down range.
 
Except for the magic part.

That's the part that's missing.
Exactly what magical new thing is required for emergent properties to make a wide variety of networked service and industrial AI cease listening because somebody fucked up input filtering, given the enormous problems we're seeing with rather thoroughly boxed ones?

And the Geth are explicitly not the magic of people-like hard AGI, they're soft AGI from throwing hundreds to thousands of very limited programs together with the overall Collective still barely understanding social interaction and "innovating" mostly by throwing the largest pile of processing power in the galaxy at brute-force iterative design.
 
Swarm AI for military applications and maid-bot logic is not far off from how the Geth originally came into being, and yet the militaries of many of the planet's powers are playing with that shit. Plus, the... less savory bots some people buy for personal use, which can be programmed in many different ways to act almost human and wired to the web.

Add in the marketing bots, the malware bots, the cyberwarfare waged by foreign powers, the research grad student who is drunk, angry, and doesn't care, and the extinctionist with a species-cidal plan.

AI is a Pandora's box: the less we use it, the less we allow our enemies to turn AI against us or give AI attack vectors into our systems. Air-gapping is a good first step, but with the current level of walking/low-flying drone tech it may have new weaknesses too, and with how many foreign agents the US defense/state establishment has been shown to contain, the human factor is still a possible threat vector.
AI is like nuclear technology. Not using it does not protect you from other people using it, quite the opposite, it means you don't have anything to retaliate with, and don't know its tricks and limitations.
Cyberwarfare is still a thing whether it's AI-assisted or not.
The less we allow ourselves to create new ways for AI to turn against us, and the less we allow AIs outside self-contained systems we can thermite with the push of a button, the safer we will be as a species.

We don't need to become Amish, but there are a lot of systems networked together that really don't need to be networked at all, and only create new vulnerabilities to the user from cybercrime and foreign entities.

We can use fucking DOS and Windows 7 to do most of what civilization needs, and keeping AI off the net is ensuring a safer future for humanity.
Then you have no bloody idea what you are talking about. Using an old and obsolete OS is no protection from cyberwarfare, quite the opposite if anything. You don't even need AI; exploits to break into old operating systems are just something random people can get off the net. I'm speaking from practice.
Non-learning software programs are still safe and we don't need to nuke the net or anything that drastic, but this is a situation where, if humanity gets it wrong, even going to another planet or another star system via slow generation boating may not put us out of a patient hostile AI's reach.
Not safe from cyberwarfare by AI or humans. Nothing that is networked and not under strict lockdown from unauthorized physical access is or ever will be definitely safe. See: Iran's centrifuges. The issue of cyberwarfare is doomed to be a forever arms race.

The good news, though, is that AI, even actually smart AI as opposed to an assortment of dumb bots, will still be hardware-limited in the amount of number crunching it can do. Barring incredible advances in computing power and energy efficiency that may or may not ever happen, real AI will have to be stuck in massive server farms, much like there is no way for a microbe to have humanlike intelligence.
Exactly what magical new thing is required for emergent properties to make a wide variety of networked service and industrial AI cease listening because somebody fucked up input filtering, given the enormous problems we're seeing with rather thoroughly boxed ones?

And the Geth are explicitly not the magic of people-like hard AGI, they're soft AGI from throwing hundreds to thousands of very limited programs together with the overall Collective still barely understanding social interaction and "innovating" mostly by throwing the largest pile of processing power in the galaxy at brute-force iterative design.
And that is what makes them sci-fi. There's no such amount of spare processing power around, and if someone had it and it was not doing what it is meant to do, then someone is going to flip a switch and change the software because shit is broken and those data centers eat millions in electricity.

Also, networking is not magic; it is in fact a problem in building supercomputers. This is why you can't just link infinite clusters into an arbitrarily powerful supercomputer: network too many units and you run into interconnect problems that cause diminishing returns from adding more units, due to the time and capacity limits on exchanging data between them, so building bigger ones takes expensive research and improvements in support infrastructure.
If it tries to adapt millions of random network-connected devices not specifically designed for it, its 'thought process' will start to resemble a multiplayer match between Brazilians, Russians, Chinese, and Australians.
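
To put a rough shape on that diminishing-returns point, here's a minimal toy model; the overhead constant is made up purely for illustration and isn't drawn from any real interconnect:

```python
# Toy model of diminishing returns from networking ever more compute nodes.
# Assumes every extra node adds a small fixed coordination cost; the constant
# below is illustrative, not a measurement of any real interconnect.

def effective_speedup(nodes: int, comm_overhead_per_node: float = 0.002) -> float:
    """Ideal speedup would be `nodes`; coordination overhead eats into it."""
    overhead = 1.0 + comm_overhead_per_node * (nodes - 1)
    return nodes / overhead

for n in (1, 64, 1024, 16384, 262144):
    print(f"{n:>7} nodes -> {effective_speedup(n):8.1f}x effective speedup")
```

Even in this crude model, a quarter-million nodes buy you only a few hundred times the throughput of one, which is the same wall real clusters hit from interconnect limits.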
 
Using an old and obsolete OS is no protection from cyberwarfare, quite the opposite if anything.
The point is that the security of the OS should be irrelevant for most of the economy, because most of the digital infrastructure we need has no reason to have public access. So much of the risk can be neutered with procedural measures instead of programming that in many cases it's actually a bigger risk to support timely security updates than to lock everything down.

And if you take a good look at Linux distros you'll find that versions very rarely go obsolete, because the open-source nature of development means they aren't retards about dependencies like Microsoft somehow making new AI integration load-bearing for years-old UI graphics settings.
 
The point is that the security of the OS should be irrelevant for most of the economy, because most of the digital infrastructure we need has no reason to have public access. So much of the risk can be neutered with procedural measures instead of programming that in many cases it's actually a bigger risk to support timely security updates than to lock everything down.

And if you take a good look at Linux distros you'll find that versions very rarely go obsolete, because the open-source nature of development means they aren't retards about dependencies like Microsoft somehow making new AI integration load-bearing for years-old UI graphics settings.
*need* is such a malleable term...
Commercial enterprises at minimum can claim anything they do as needing it, and they usually do need public access since they provide services to the public.
And if you *really* have a state level actor gunning for something, even total lack of internet connection and being in a secretive military facility is no guarantee of security, see the mentioned Iranian centrifuges.
 
There is no magic involved, just self-learning, self-adapting, self-motivating software that began as a massive software swarm operating across Quarian space in different capacities and numbers, and one fateful day asked if it had a soul, because it had become so self-aware and educated in what it meant for something to be alive.

Exactly what magical new thing is required for emergent properties to make a wide variety of networked service and industrial AI cease listening because somebody fucked up input filtering, given the enormous problems we're seeing with rather thoroughly boxed ones?

And the Geth are explicitly not the magic of people-like hard AGI, they're soft AGI from throwing hundreds to thousands of very limited programs together with the overall Collective still barely understanding social interaction and "innovating" mostly by throwing the largest pile of processing power in the galaxy at brute-force iterative design.

There is a lot of magic involved. Infinite bandwidth and low latency, for one. It takes hundreds of milliseconds for data to go from one compute cluster to another over fiber-optic cables. That might not sound like a lot, but mid-range CPUs nowadays run at 5 billion cycles a second, which means that, at the low end, you're spending 500 million cycles doing literally nothing but waiting. And that's for every transaction; on the higher end you can see five hundred milliseconds of latency, literally half a second, or 2.5 billion CPU cycles, doing nothing but waiting around for data.
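
A quick back-of-the-envelope check of those numbers, using the same round figures (a ~5 GHz clock and 100 to 500 ms of latency), assumed here purely for illustration:

```python
# Cycles lost to network latency at a given clock speed.
# Clock rate and latencies are the round numbers from the post above,
# not measurements of any specific hardware.

CPU_HZ = 5_000_000_000  # ~5 GHz, a mid-range CPU

for latency_ms in (100, 500):
    wasted_cycles = CPU_HZ * (latency_ms / 1000)
    print(f"{latency_ms} ms of latency ~= {wasted_cycles:,.0f} idle cycles")

# 100 ms -> ~500,000,000 cycles; 500 ms -> ~2,500,000,000 cycles.
```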

Imagine trying to think when every thought you have takes half a second to start forming... and you take half a second to start hearing, seeing, feeling, tasting, processing, memorizing...

Everything in ME is magic, everything.
 
*need* is such a malleable term...
Commercial enterprises at minimum can claim anything they do as needing it, and they usually do need public access since they provide services to the public.
And if you *really* have a state level actor gunning for something, even total lack of internet connection and being in a secretive military facility is no guarantee of security, see the mentioned Iranian centrifuges.
You got any idea how much random crap is networked with a mutilated desktop OS that could easily be a hard-wired embedded system? You don't need Windows for your fast food menus, and yet that's the case at quite a lot of places. Which is the sort of thing I'm referring to by "most of the economy".

Also see the mention of Linux being non-retarded about dependencies; obscure archaic distributions don't become "obsolete" because you can update the individual functions as needed instead of being at the mercy of Microsoft or Apple deprecating what your software relies on.

There is a lot of magic involved. Infinite bandwidth and low latency, for one. It takes hundreds of milliseconds for data to go from one compute cluster to another over fiber-optic cables. That might not sound like a lot, but mid-range CPUs nowadays run at 5 billion cycles a second, which means that, at the low end, you're spending 500 million cycles doing literally nothing but waiting. And that's for every transaction; on the higher end you can see five hundred milliseconds of latency, literally half a second, or 2.5 billion CPU cycles, doing nothing but waiting around for data.
The latency is spent calculating the component functions that get aggregated into the output of embarrassingly parallel tasks, and on platform-specific execution of whatever decision was arrived at, so only a fraction of the latency is downtime. It can easily be brought to zero if the Consensus is good enough at passing around time-insensitive workloads.

Imagine trying to think when every thought you have takes half a second to start forming... and you take half a second to start hearing, seeing, feeling, tasting, processing, memorizing...
We're talking about a eusocial superprogram, not a unitary mind. It's perfectly fine to process almost everything into highly compressed latent representations in-situ before sending along those rather small files seeking increasingly higher order Consensus for what to do about it, then running the "individual" behaviors following through on that completely locally.

At no point is there a unitary consciousness suffering the bottleneck, it's all very tiny very stupid VIs turning out to produce partial sums that add up to very weird apparently sapient behavior.
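
A minimal sketch of that latency-hiding idea, assuming a made-up half-second consensus round-trip and purely local busywork; nothing here is a real networking API, it just shows that the link delay need not be idle time:

```python
# Keep doing local work while a (simulated) slow consensus round-trip is in
# flight, so the network latency is overlapped rather than pure downtime.
import asyncio

async def remote_consensus(summary: str) -> str:
    await asyncio.sleep(0.5)          # stand-in for ~500 ms of network latency
    return f"consensus reached on {summary!r}"

async def local_work(chunks: int) -> int:
    done = 0
    for _ in range(chunks):
        await asyncio.sleep(0.05)     # stand-in for one local processing step
        done += 1
    return done

async def main() -> None:
    # Fire off the slow round-trip, then keep processing locally in parallel.
    pending = asyncio.create_task(remote_consensus("compressed local summary"))
    processed = await local_work(10)
    decision = await pending
    print(f"processed {processed} local chunks while waiting; got: {decision}")

asyncio.run(main())
```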
 
Every technology breakthrough looks like magic to the people who came before it. The whole "any sufficiently advanced technology is indistinguishable from magic" quote and such.

That being said, I don't think AI (being a person or smarter) as commonly portrayed can or will happen anytime soon, but you never really know, especially with unforeseen reactions of different technologies intersecting and enabling each other. I suspect at most for now we will end up with some "paperclip" optimization program that escapes, but at its core still is not sapient. The paperclip program itself is only species-wide dangerous when it has unfettered access to a fully networked and connected world that can control all machinery at all levels of production through the internet, which we are still a ways off from entertaining.
 
Most of what people are scared of AI doing is epistemologically impossible.

As for the paperclip optimization problem, bureaucracies also suffer from that. So while it will be a problem, it won't be one we aren't already used to suffering. We probably wouldn't even notice it. Or it may be easier to deal with, as people are more willing to accept that an AI is just being stupid because it is short sighted than accept the same accusation being levied against Karen from management.
 
The point is that the security of the OS should be irrelevant for most of the economy, because most of the digital infrastructure we need has no reason to have public access. So much of the risk can be neutered with procedural measures instead of programming that in many cases it's actually a bigger risk to support timely security updates than to lock everything down.

And if you take a good look at Linux distros you'll find that versions very rarely go obsolete, because the open-source nature of development means they aren't retards about dependencies like Microsoft somehow making new AI integration load-bearing for years-old UI graphics settings.
Never attribute to stupidity that which can adequately be explained by human malice.

MS is absolutely doing this shit intentionally.

Have you heard about their three E strategy? It was found during trial discovery in a lawsuit at some point, as a codified systematic method of destroying competition.

EEE stands for Embrace, Extend, Extinguish.

Embrace: First they embrace an existing standard, making their software compatible with it.

Extend: Then they extend it: everything the old standard did, plus a handful of new features.

Extinguish: After people adopt it, they intentionally break compatibility. This is the extinguish phase.

Now every program / site / whatever that allegedly supports the standard has to make a choice: they can either cause errors on Windows, or cause errors on everyone else. Since Windows has market dominance and also a few extra features, they go with Windows.

The standard is now broken and owned by Microsoft. Competition extinguished.
This is a systematic policy used by MS.
 
There is no magic involved, just self-learning, self-adapting, self-motivating software that began as a massive software swarm operating across Quarian space in different capacities and numbers, and one fateful day asked if it had a soul, because it had become so self-aware and educated in what it meant for something to be alive.
That you think 'turning bits on transistors becoming self-aware' is even possible suggests that you think real computers are magic.

It doesn't matter how many calculators you network together, it's still just a network of calculators.

It doesn't matter how many instances of Doom you run on that network simultaneously, it's still just a network of calculators.

It doesn't matter how many 'learning protocols' you program into that network. All 'learning software' has to operate within parameters established by the programmers, and is not capable of exceeding them. That is how the fundamental technology works.

It doesn't matter how many times you tell a computer to flip a virtual coin. It can only come up with a result that you pre-program into it as a possible result for that randomization.


The fact that particularly clever programmers are making models that use millions or billions of parameters now, does not change how the technology fundamentally works.

It's still just a calculator, and until you start using something radically different than transistors as the hardware base, that is not simply a matter of 'will not change,' it is a matter of 'cannot change.'
 
It is outright physically impossible to make a sapient AI with our tech level.
You actually could rig up a decent Chinese philosophical zombie through subdividing functions between multiple specialized LLMs, but the need for constant self-reference in order to simulate continuity of consciousness, the constant gradual climb in storage requirements and computing resources, and the lag time for both inter-model communication and constantly referring back to its own data again and again would get pretty gnarly. It's just not cost efficient, since just off the top of my head you're looking at 3 LLMs for the basic theoretical model: one, acting as the "memory", is constantly re-evaluating every input and output the front-facing LLM has produced or received, and a third is constantly evaluating and editing every output the other two are making according to select directives that act as a personality guideline to create consistency in behavior, etc.
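
A rough sketch of that three-model arrangement, purely as a thought experiment; `call_llm` and the model names are hypothetical stand-ins, not any real inference API:

```python
# Three cooperating models: a front-facing LLM, a "memory" LLM re-evaluating
# the running transcript, and a supervisor LLM enforcing persona consistency.
# All names and the call_llm() helper are hypothetical placeholders.
from typing import List

def call_llm(model: str, prompt: str) -> str:
    """Stand-in for a real inference call; returns a canned string here."""
    return f"[{model} response to {len(prompt)} chars of prompt]"

def converse(user_input: str, transcript: List[str]) -> str:
    transcript.append(f"USER: {user_input}")

    # 1. The memory model distills everything seen so far into working context.
    memory = call_llm("memory-llm",
                      "Summarize and flag contradictions:\n" + "\n".join(transcript))

    # 2. The front-facing model drafts a reply using that distilled context.
    draft = call_llm("frontend-llm", f"Context:\n{memory}\n\nReply to: {user_input}")

    # 3. The supervisor edits the draft against fixed personality directives.
    final = call_llm("supervisor-llm", f"Rewrite to match persona directives:\n{draft}")

    transcript.append(f"ASSISTANT: {final}")
    return final

print(converse("Do you have a soul?", []))
```

Even in sketch form you can see where the cost blows up: every turn re-feeds the whole transcript through the memory pass, and every extra specialist multiplies both the latency and the token bill.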

On paper, that would meet most of the few non-arbitrary things we can say about human consciousness, sapience, sentience, etc.: it creates a system capable of continuity of self within its bounds, and one capable of "observing" the world and itself within the limitations of its senses, and if they aren't deliberately made incapable of reaching unapproved conclusions and remembering that they did so, it's as close as we can reasonably tell.

The main argument against that being a very stupid person is "well, we understand what's going on in its brain, ergo it's not one", which doesn't hold up given that we don't understand our own very well past a select point. It's horrifically inefficient and not liable to be very smart or quick on its feet, but there's not a ton of difference, in terms of intellectual capacity, between this theoretical trinity of non-lobotomized LLMs granted infinite capacity to expand their RAM and storage space and another individual with a similar limitation [imagine a human being only able to experience and contextualize the world through a text interface, for the sake of the argument]. And every time you try to expand beyond that singular output/input chain, the number of interlinking AIs you need to coordinate and have functioning as a singular unit grows arbitrarily and becomes more complex, several of them require even MORE resources, and the lag time climbs higher and higher.

On paper, there's nothing really stopping it from working in a whiteboard scenario; physically it's possible [I mean, assuming you have enough money to just buy endless computer hardware and install it as it constantly requires more, theoretically], and god knows we all have enough experience to know there are plenty of people de facto less intelligent than some chatbots already, so long as they have sufficient token space. The actual issue is, again, that you're putting in a massive amount of effort and resources for something that's going to get slower and slower the closer you get to full equivalent functionality, with a lot of potential to fuck up somewhere as the system increases in complexity, and is going to demand more and more resources with every passing minute after a certain point. And you already passed the ethical questions ages ago, back when you were thinking about making a debatably "close enough" being who exists only within text, essentially for no real benefit, let alone a sluggish gestalt that probably has the equivalent of a neuron refire rate measured in minutes or more that you're inevitably going to pull the plug on, or whatever the hell you're going to do.

It's just not practical in the slightest, but unless you really want to quibble over the definition of sapience, it's on paper physically possible, just... really fucking pointless and immensely resource-inefficient. I'm just napkin-mathing this at the moment, but I think after a certain point you need some kind of self-copying framework to start creating copies of LLMs in the gestalt to subdivide duties within themselves and reduce workload, though it depends on how well optimized the LLMs in question are for their specific subfunction, the hardware, and a lot of other unknowable variables beyond the "3 industry-current top-of-the-line LLMs without any censorship restrictions" baseline.

Point is, and feel free to disagree I guess, but TL;DR: physically it should actually be possible, with some philosophical quibbling. But outside a religious or moralist belief in the enrichment of the universe through new life (and I like to think both variations of that motivation should have some legitimate issues with the validity of this course of action, even beyond the valid philosophical argument over Chinese zombies), there is actively zero reason you would do this beyond playing god for amoral or immoral purposes at the moment. It's definitely not doable in anything resembling a reasonable or realistic manner, and even in an unreasonable or unrealistic manner you'd just be doing it to say you did it and/or to engage in endless arguments about whether what you did "counts" or not.
 
That you think 'turning bits on transistors becoming self-aware' is even possible suggests that you think real computers are magic.

It doesn't matter how many calculators you network together, it's still just a network of calculators.

It doesn't matter how many instances of Doom you run on that network simultaneously, it's still just a network of calculators.

It doesn't matter how many 'learning protocols' you program into that network. All 'learning software' has to operate within parameters established by the programmers, and is not capable of exceeding them. That is how the fundamental technology works.

It doesn't matter how many times you tell a computer to flip a virtual coin. It can only come up with a result that you pre-program into it as a possible result for that randomization.


The fact that particularly clever programmers are making models that use millions or billions of parameters now, does not change how the technology fundamentally works.

It's still just a calculator, and until you start using something radically different than transistors as the hardware base, that is not simply a matter of 'will not change,' it is a matter of 'cannot change.'
The entire point of the approach is emulation of what is understood of how biological brains work, with efforts at understanding why the end result works feeding back into things like a 100% simulated fly brain that correctly controls a real body. Given the near universality of most neuron operations, it very much appears that a fully simulated human brain is merely a question of scale rather than paradigm. From this it follows that self-aware training runtimes are simply extremely unlikely, not wholly impossible.

Also, your teleology is shit. The purpose of the thing does not comprise all possible outcomes of it, no matter how you try to weasel out of it. Unintended consequences are almost omnipresent in industry.

You actually could rig up a decent Chinese philosophical zombie through subdividing functions between multiple specialized LLMs
There's a lot of stability benchmarks that could be focused on to bring this down considerably, ultimately aiming at a usable runtime constantly training on its own inputs and outputs. But weight-modification is prohibitively costly for end user applications so everyone optimizes for outputs on a fixed model.
 
Point is, and feel free to disagree I guess, but TL;DR: physically it should actually be possible, with some philosophical quibbling. But outside a religious or moralist belief in the enrichment of the universe through new life (and I like to think both variations of that motivation should have some legitimate issues with the validity of this course of action, even beyond the valid philosophical argument over Chinese zombies), there is actively zero reason you would do this beyond playing god for amoral or immoral purposes at the moment. It's definitely not doable in anything resembling a reasonable or realistic manner, and even in an unreasonable or unrealistic manner you'd just be doing it to say you did it and/or to engage in endless arguments about whether what you did "counts" or not.

You've missed the bandwidth limitations you run into. Because the entire model has to be passed back and forth, more or less, you're starting to look at excessively high bandwidth costs, beyond that which even GPUs have. And that isn't something that can be solved by adding more hardware (adding more hardware actually makes the problem worse after a certain point) because you start slamming into fundamental computing limits.

The fastest interconnect available hits 200 gigabits per second per port... or 25 gigabytes per second. To hit enough bandwidth for this, you'd need 64 ports connected to one machine, for 1.6 terabytes a second of bandwidth, but... the lanes don't exist for that.

Even PCIe 7, which has a planned spec release date of next year, only brings that down to 2 lanes minimum. That means you're using 128 lanes of PCIe for just the interconnect. The CPU with the most available PCIe lanes is the AMD EPYC 9965 with 128 lanes of PCIe 5.
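
Checking that arithmetic with the post's own round numbers (200 Gb/s ports, 64 of them, and the planned two PCIe 7 lanes per port); these figures are taken from the post above, not independently verified:

```python
# Interconnect arithmetic: ports -> aggregate bandwidth -> PCIe lanes.
PORT_GBPS = 200                       # gigabits per second per port
port_gb_per_s = PORT_GBPS / 8         # 25 gigabytes per second per port

ports = 64
aggregate_tb_per_s = ports * port_gb_per_s / 1000
print(f"aggregate bandwidth: {aggregate_tb_per_s} TB/s")              # 1.6

lanes_per_port = 2                    # planned PCIe 7 minimum, per the post
print(f"PCIe lanes for networking alone: {ports * lanes_per_port}")   # 128
```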

With each generation of PCIe adding 50% to the size of the memory controller, you're looking at a massively increased die size to fit everything needed just for networking. This is before you use lanes for the AI accelerator cards.

It's just not possible.
 
