And the same can be said of a child as well.
When I was a toddler I drew people as heads with sticks for legs and hands.
Believe me, none of this really clarifies the difference.
I don't know. To me, while the outcome may be similar, I'm not sure the reason behind it is the same. A child understands he has a body and isn't just a head with sticks for limbs; rather, that abstract image is the closest he can come to approximating the truth as he sees it.
An AI, meanwhile, has no understanding (at least not at the current time); it's just looking at a dataset of images and trying to take the average of all of them.
Yeah, what
@Crom's Black Blade points out gets to the essential matter here. Apply Aristotelian logic to the problem, and you see where the comparison to a child goes wrong. In fact, it illustrates the difference. A mind (or an imitation thereof) is not a static image. It's a dynamic, we might say...
living... thing.
A child is a human and has the categorical potential of a human. It can develop into an adult. The fact that an infant cannot yet do the things that an adult can is merely an issue of actualisation. The potential is there.
If I leave ChatGPT running for long enough, will it bootstrap itself into a real mind? No. It can't do that. The underlying requirements aren't there. It's a data processor with an admittedly impressive ability for recombination. But none of it happens consciously. It doesn't gain understanding. There's no mind there. Nothing that can "grow up". That's the issue with current "AI" (which isn't really "AI" at all, and most people involved in developing it will tell you that... and do.)
Real AI would require a completely different approach. Instead of imitating the
output of a mind (without any underlying consciousness), such an approach would start by imitating the most rudimentary and primitive parts of the brain. And some of that has been tried. You know... creating robot ants that do everything an ant does. Start there. Then build a robo-termite. A robo-bee. Figure out how you get greater complexity. Work your way towards artificial lizard brains... mouse brains...
That's the road. It's long, and it has far fewer options for a quick buck on reasonably short notice. Which is one reason why AI does not, in any way, exist in the world. Only stuff like ChatGPT... which is
not AI.
...It's still wholly irrelevant to this thread, though.
ETA: coincidentally, someone just posted in the
AI/Automation megathread, so I can now link to it here. That's probably the right place for the continuation of this particular discussion.
----------------------------------------------
we could absolutely use a discussion of What Inventions Could Break Macrohistoric Prediction Models and their likely consequences.
I'd argue we've already got one in nuclear weapons and MAD deterrence. A nation can be as decadent and incompetently governed as it likes and still avoid outright invasion and conquest so long as it's got an explicit Samson Option policy and the Bomb.
I strongly disagree with the assessment of atomics. They provide no paradigm shift.
Macro-history applies to human civilisation as it came into existence thanks to the neolithic revolution. We moved beyond hunter-gatherers, into organised societies with formalised divisions of labour and economies based on abstractions of pure exchange (i.e. monetary systems). In this context, the main socio-economic driver was and remains the division of scarce goods, and all models of societal organisation are ultimately centred around that concept.
Something that doesn't challenge that paradigm also doesn't challenge the preconceptions of macro-history. And although it may be ironic (and in fact mentally unbearable to J.P. Sartre, but that's another story), the possibility of annihilation in nuclear fire is by no means a challenge to the paradigm. After all, a tribe in 3000 BC could be entirely eradicated by enemies wielding nothing but heavy clubs. That, too, is existential termination to their entire "world". And since human operational perception has a limited horizon, there is no essential mental difference between your tribe getting wiped out to a man via the use of clubs... and all humanity being wiped out to a man via the use of nukes.
Mankind, I regret to inform all optimists, has no "species-feeling". Spengler wrote of "race-feeling", but at best, his approximation thereof in practical terms was a general sense of vaguely-defined nationalism on the grandest functional scale. Such as: "
I'm a Briton, and I feel proud of The Empire [... even though I never consciously conceptualise it on its true scale and in its true complexity... because doing so would literally drive most humans insane.]"
My point is: if humans were perfectly rational, always intelligent beings with a universal degree of higher education on an academic level across the board... then perhaps, the existence of nukes and the true implications thereof might prompt a shift in societal thinking to such a degree that it would affect the shape of history.
But this is not the case, and so it doesn't. History remains as it was. Nukes are just bigger clubs, and the underlying dynamics are unaltered.
I imagine other inventions could be:
- Centrally monopolized AI rendering human labor worthless and giving the owners of the robot armies an invincible eternal monopoly of force.
- Open-source AI replacing labor, but it doesn't matter because everyone's got their very own von Neumann factory and defensive robot armies.
- Uncontrolled indifferent AI, aka the Paperclip Maximizer and humanity unceremoniously going extinct.
- Uncontrolled benevolent AI, aka humanity becoming the Omnissiah’s pets.
- Transhumanism and the removal of free will, aka ketracel white addiction-backed hydraulic empires, subscription service cybernetics and everything *THASF* goes on about in his spartacasts. *
- Transhumanism and the creation of a genuinely actually biologically superior Master Race.
- Medical immortality. Empires no longer grow decadent with time when their founders can potentially live forever. Overpopulation and zero-sum competition for resources, jobs and inheritance are now Serious Business.
The distinctions between AI are irrelevant. Any true AI, by its mere existence, would render the existing macro-historical model invalid. Macro-history is concerned with the behaviour of humans on a large scale. An inhuman sapience cannot be predicted by the model.
Removal of free will would obviously do the same, because it would remove the "mass scale" component. It would reduce human action to merely an expression of the will of a small group, a group so small that its behaviour can't be predicted for. (Macro-history relying on the predictability of averages, after all.)
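The parenthetical about the "predictability of averages" is essentially the law of large numbers. A toy simulation (group sizes, uniform "behaviour" scores, and variable names all my own illustrative assumptions, not anything from the thread) shows why a mass society is statistically tractable while a small ruling clique is not:

```python
import random
import statistics

random.seed(42)  # deterministic for reproducibility

def average_spread(group_size: int, trials: int = 200) -> float:
    """How much a group's AVERAGE behaviour fluctuates between draws.

    Each individual acts 'unpredictably' (uniform on [0, 1]); we repeat
    the draw many times and measure the standard deviation of the
    group-level average.
    """
    averages = [
        statistics.fmean(random.random() for _ in range(group_size))
        for _ in range(trials)
    ]
    return statistics.stdev(averages)

spread_small = average_spread(10)        # a tiny ruling clique
spread_large = average_spread(100_000)   # a mass society

# The average of a huge population barely moves from draw to draw; a
# tiny group's average swings wildly. Macro-historical prediction only
# works in the first case.
print(f"std. dev. of small-group average: {spread_small:.4f}")
print(f"std. dev. of mass-scale average:  {spread_large:.4f}")
```

The spread of the average shrinks roughly with the square root of the group size, which is why individual unpredictability washes out at the scale of whole societies but not at the scale of a handful of rulers.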
All significant transhumanism, regardless of free will, would do the trick anyway. Same reason as with AI. A transhuman is not human, and can't be expected to operate in the same way. The model ceases to be valid. (Do note that the deviation needs to be significant. "We removed nearsightedness and male pattern baldness" would not qualify.)
Arrival of sapient aliens, ditto.
Immortality also, yes. Or even significant life extension. Not just for the material reasons, but because a vastly longer (expected) lifespan would alter the mentality of the people as well. Thus, again, altering the general behaviour... on which macro-history is predicated.
A final option is post-scarcity, which would invalidate the post-neolithic paradigm in which we've existed, and on which the very concept of human civilisation (and thus also "history") is based. If the division of scarce goods ceases to be a concern, the paradigm is also ended.
Another option would be that the caesar figure and reactionary backlash get their start by being explicitly against these sorts of things, and that opposition becomes society's new foundational myth: that the technocrats of the 21st century wanted to render everyone obsolete and were willing to risk humanity's extinction for the chance to do so**; whereas modern society, while authoritarian, on some level depends upon the consent of the governed: if enough people disliked the status quo, they could overthrow it. ***
*Butlerian Jihad intensifies*
----------------------------------------------
I mean ignorance is one thing, but a few of them do have an understanding of history which doesn't factor into their decision making at all. It's as if they flatly refuse to try to understand how things work and how to play the game, out of spite.
Thus for all the world they appear utterly foolish, if not thuggish, which is a political kiss of death.
We have a long way to go, and there are no shortcuts.
That's about the most succinct way I can put it.
----------------------------------------------
I don’t think any Persian Shahanshah or Chinese Huangdi has ever proclaimed himself to be a God. Perhaps of divine descent, but never an outright God, and certainly not as being above criticism. Any Great King/Emperor who relentlessly shat on his satraps or governors was not long for this world.
Precisely. It should be noted that Alexander was so ludicrously successful in large part because the rulers of Persia had just gone through a succession strife, and the winners had become very paranoid... and had thus deliberately fleeced their satraps, while hoarding gold in their own imperial treasury. Which is... deranged economics. Literally acting like a dragon from myth, you know? It didn't end well.
But, yeah, Alexander won because these guys were doing something that was not normal, and not acceptable to the provincial rulers. Which demonstrates aptly that a degree of decentralism (in fact, a rather significant degree!) had been the norm.
The "Eastern absolute despot" meme as we now know it comes from... (drumroll please)... Karl Marx.
Yup. Him again. So that tells you something. Not that he originated the basic idea. The Greeks demonised the Persians, and it happens that the Muslim rivals of Christendom were also "Eastern". This doesn't make the meme true. However, ATP is from Poland, and recent experience with the USSR has extended the lifespan of the meme in those parts of the world. Understandably so, I'd say. It's still a meme, though, and we shouldn't confuse it with a more complex reality.