But farmers DID get outcompeted by tractors. It is just that industry required manual labor instead.
Again:
- when agriculture got more efficient, workforce shifted to industry
- when industry got more efficient, workforce shifted to intellectual work
- when intellectual work gets "more efficient", workforce will shift... where? To the proletariat?
That is the issue. AI, once it develops past a certain point, leaves no room for humans to run to.
As for the actual capability of AI... yes, current AI is a hallucination-prone mess. But even that is not stopping companies from pushing AI. What will happen when AI improves?
AI is not exactly a new technology in and of itself, but adaptive AI is. We have yet to see where it will go. Sure, it is possible that an AI capable of replacing humans in certain roles will have to undergo the same 20+ year learning process humans do... but then again, it may not.
Here you use "but AI ran a shop badly" as a "gotcha", while ignoring the fact that it...
ran the bloody shop. Just a year or two ago, even proposing such an idea would have been completely ridiculous.
The issue with Claude was not the concept, but the implementation. If you hire a teenager to run a shop, that is still a person with 18 years of experience dealing with other humans and their idiosyncrasies. Claude was, experience-wise, literally a baby. It is like giving Congress the task of running a for-profit company.
But there are ways around it.
And here we are, just making the same circle again. I went back and looked at the start of the thread, and literally my first argument presented here still has not been answered, no matter how many times I present it.
AI is not magic. It's a mathematical process. It does not have unlimited growth, and it is not capable of learning in the same way as humans. No matter how much current-architecture AI is improved, it will never be able to do many things, because that is literally baked into what it is.
You can keep saying 'but what will it be in the future?' But this is basically a paean to science fiction, with no actual substance or meaning. Until you can actually parse reality from fictional dreams, you're not going to be able to engage with this issue in a constructive way.
The 'AI runs a shop badly' point isn't just about 'it does it badly,' but why it does it badly. The AI doesn't just make mistakes; it fundamentally fails to understand what it is supposed to do, and starts posting hallucinatory, delusional messages. The reason for this is that LLM AI has no understanding of value or meaning. Literally all it does and can do is recognize patterns, then regurgitate the things it sees in those patterns back to you. It has no idea what any of those patterns mean.
Further, you're wrong about the idea of AI running a shop being bloody ridiculous two years ago. Someone could have easily written a dedicated program to run a shop twenty years ago, and it would have done it better. Rather than using LLM-based 'reasoning' as a one-size-fits-all solution, a coder would have defined a set number of things the program could and could not do, written the code to enable it to interact with vendors, set prices, make and receive payments, etc., and then set basic parameters so that it was never selling products at a loss.
It would not be 'full service automation'; you'd need someone to check in on it every once in a while and code in new solutions to problems not initially thought of, but it absolutely could have been done. The basic coding capability for such things existed in the 90s, though the internet infrastructure needed wouldn't have.
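To make the "dedicated program" idea concrete, here is a minimal sketch of that kind of rule-based shop logic: a fixed set of allowed actions and a hard price floor so it can never sell at a loss. All product names, costs, and margins here are hypothetical illustrations, not any real system.

```python
# Minimal sketch of a rule-based shop manager: a fixed set of allowed
# actions and a hard price floor, instead of open-ended LLM "reasoning".
# All products, costs, and thresholds are made-up example values.

UNIT_COST = {"cola": 0.50, "chips": 0.80}  # wholesale cost per unit
MIN_MARGIN = 0.20                          # never sell below cost * (1 + margin)

def set_price(product: str, proposed: float) -> float:
    """Accept a proposed price, but clamp it so we never sell at a loss."""
    floor = UNIT_COST[product] * (1 + MIN_MARGIN)
    return max(proposed, floor)

def restock_order(inventory: dict, reorder_point: int, batch: int) -> dict:
    """Order a fixed batch of any product whose stock fell below the threshold."""
    return {p: batch for p, qty in inventory.items() if qty < reorder_point}

if __name__ == "__main__":
    # A requested discount below cost gets clamped to the floor price.
    print(set_price("cola", 0.10))
    # Only low-stock items appear in the order.
    print(restock_order({"cola": 3, "chips": 12}, reorder_point=5, batch=24))
```

The point of the sketch is that the program cannot be talked into a bad deal: the floor is enforced by code, not by "judgment," which is exactly why such a system would never hand out absurd discounts the way the LLM shopkeeper did.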
To make this clearer to you: not only has the technology to do something like this existed since long before the attention-grabbing modern LLMs, older approaches were better suited to it. They simply never had the hype people have put around modern AI, so you didn't know it could be done.
The fact that all these people presenting hysterical arguments about AI permanently putting 99% of people out of work don't know basic facts about what technology was already capable of is one of the reasons I don't take such arguments seriously.
Let’s put it this way:
Given how terrible the job market is on top of everything else, do we really want to risk adding AI on top of all the things that could/can go wrong?
Unfortunately for this line of thought, AI continuing to be developed is not something we have much choice about.
In the first place, there is no widespread movement against AI. You wouldn't be able to stop this through a political movement any time in the near future, and as AI continues to fail to live up to apocalyptic predictions, that trend will continue.
In the second place, even if a large-scale political movement did somehow magically appear out of nowhere, say 75% of Americans getting hardcore behind stopping AI development, that still wouldn't meaningfully be able to stop it within the US. Anyone moderately rich with a few server clusters can continue development, though it would take someone seriously rich to continue it at the pace it is currently moving at. Trying to enforce a ban would be absurdly difficult.
In the third place, that's just trying to stop it in the US. You'd need another magical appearance of an anti-AI movement in every Western country to stop it in all of them, and even then it'd just continue in dictatorships, which would be happy to keep working on such things to try to gain an advantage over the West.
In short, stopping such things isn't going to happen. It's a good thing all the hysterical predictions are wrong, because if they were correct, we'd be fairly screwed, since this is a train that isn't going to be stopped by anything short of a catastrophe that crashes the internet semi-permanently.