AI/Automation Megathread

Are you working your way towards promoting Universal Basic Income? That's normally where I see people take this logic chain. Once you've established that the money will always recirculate, the next step is making sure it goes first to ensuring that everybody has their basic needs met.
No.

UBI is what would, in fact, create the economic catastrophe that most people who are paranoid and hysterical about AI are trying to avoid.

AI is a tool that increases productivity (when used properly), meaning more value is added to the economy per man-hour of labor, and thus increases human prosperity.

UBI is paying people to do nothing, and thus turning everyone into dead-end absorbers of wealth. If you pay people to do nothing, you are taking them out of the pool of contribution to society's prosperity, and worse, welfare dependents as an overall class always end up becoming malcontents and vote farms for demagogues, meaning UBI would also degrade society morally, culturally, and politically.

Honestly, people using AI as a justification for UBI is the single biggest threat I see from AI right now.

The problem with AI as labor-saving tools is that they're so good, or can get so good, at certain things that you don't need as many people to do those things anymore. And there aren't enough people with cash willing to finance new companies to hire all those people who lost jobs, so you've got people who can do all sorts of things but can't get paid to do the things they're good at.
You are failing to understand a key aspect of how economics works.

When a game development company lays off a third, or even say two thirds, of its employees because they can get the same amount of work done with that many fewer people by using AI, that saves them expense. The value that the company adds to the economy is not 'how many people it employs,' but 'the game(s) it produces.' If it costs them 60 million to make a game instead of 180 million, but it brings in the same amount of revenue, let's say 500 million, the other 120 million in would-be costs does not just 'disappear' from the economy.

Instead the company does some combination of the following: pays dividends to shareholders, who spend or invest the money, hires on some more people so that it can make more projects at once, or invests in assets like office buildings, servers, advertising, etc.

Whichever way the money is spent, it still ends up spent, and continues to circulate through the economy. The fact that it's cheaper to make something does not mean jobs 'disappear.'
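The arithmetic in the example above can be laid out as a toy flow-of-funds sketch (a minimal illustration only; the 50/30/20 split between dividends, hiring, and investment is an arbitrary assumption, not anything from the post):

```python
# Toy flow-of-funds for the hypothetical game studio above.
# All figures are the post's illustrative ones, in millions; the
# redistribution split is an arbitrary assumption for this sketch.

revenue = 500
old_cost = 180   # development cost before AI-driven layoffs
new_cost = 60    # same output with a fraction of the staff

savings = old_cost - new_cost    # 120M no longer spent on wages
new_profit = revenue - new_cost  # 440M instead of 320M

# The argument: the 120M doesn't vanish; it's redirected into other
# spending channels and keeps circulating.
redistribution = {
    "dividends_to_shareholders": 0.5 * savings,
    "additional_hiring": 0.3 * savings,
    "capital_investment": 0.2 * savings,  # offices, servers, advertising
}

# Total outflow (remaining wages + redirected savings) equals the old
# wage bill, so the same amount still circulates through the economy.
total_outflow = new_cost + sum(redistribution.values())
assert total_outflow == old_cost
```

The only thing the sketch encodes is conservation: cheaper production changes who receives the money, not whether it gets spent.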

How do we know this is true?

Through the recorded effects of every other technological advancement in tools. Improved farming and steam engines didn't result in massive permanent unemployment, it just changed how people worked, and made whole new swathes of goods available to people.

Yes, this can really suck for people who've spent years or decades developing a skill that's now of little to no economic utility; there are specific groups that these transitions are hard on. But money doesn't just 'disappear' from the economy because a better tool makes production cheaper. It is, in fact, the other way around: a better tool making it cheaper to produce something adds value to the economy.


Note, I am purely talking about the fears of 'permanent unemployment' and the like. Not the many other issues that can come with changes to economic structure and relative balances of social power. There absolutely can be very serious problems with such things, but not because a tool makes humans permanently unemployable.
 
You are failing to understand a key aspect of how economics works.
As are you. There need to be jobs for people to be paid to do, so if the ladders on knowledge-work and low-skill physical labor both get pulled up, then the general population is cut off from such an enormous part of the economy's outflows that you can in fact end up with a long-term unemployment crisis. This is especially true as the decision-makers love to send the money into speculative stock trading that produces nothing at all, so they can make money off having money without doing work.

The circulation does not magically find its way to the bottom where most of the people are. That was the entire point of coining "trickle-down economics" as an attack. And so in addition to a time-bomb of a competency crisis because all the introductory jobs got automated, you have enduring unemployability because the only skills large swaths of the population can get are inadequate for the jobs that remain.
 
As are you. There need to be jobs for people to be paid to do, so if the ladders on knowledge-work and low-skill physical labor both get pulled up, then the general population is cut off from such an enormous part of the economy's outflows that you can in fact end up with a long-term unemployment crisis. This is especially true as the decision-makers love to send the money into speculative stock trading that produces nothing at all, so they can make money off having money without doing work.

The circulation does not magically find its way to the bottom where most of the people are. That was the entire point of coining "trickle-down economics" as an attack. And so in addition to a time-bomb of a competency crisis because all the introductory jobs got automated, you have enduring unemployability because the only skills large swaths of the population can get are inadequate for the jobs that remain.
The problem here is treating it like it's a static, unrecoverable condition.

There is a lot of money to be made in finding things you can teach otherwise unemployable people to do that are profitable.

And if we don't do something stupid, like create a permanent welfare dependency system, people will want jobs.

Also, cycles of speculation in the stock market only work the way they do because of the continuous influx of more investment money, the key portion of that investment money coming from long-term retirement plans. If you destroy the middle class, you destroy that option for 'making money' as well, so there's no point in throwing money at that anymore.


Sure, if we sustain active and deliberate sabotage of the education system, making it impossible to train people for useful jobs, and combine that with AI several generations more advanced than what we have now, then perpetual high levels of unemployment might be in the works.

...But that's because you're combining several other factors, not because of AI in and of itself.

Development of better AI is a near-inevitable aspect of continuing technological development. It's the sort of thing that if one group doesn't do it, another will, at least so long as we don't see catastrophic civilizational collapse.

Having a perpetually sabotaged educational system, on the other hand, is not inevitable, it is not something that you can only prevent by crashing civilization, and it is also something that there have been large-scale efforts to overcome for literal generations now.
 
The problem here is treating it like it's a static, unrecoverable condition.
It doesn't need to be permanent to send the global economy tumbling down under the sudden loss of the fools with money to be parted who are currently bearing the weight of it.

There is a lot of money to be made in finding things you can teach otherwise unemployable people to do that are profitable.
That requires massive alterations to multiple basic premises of how many of the remaining fields handle intake, on both the educational and employer sides, and it has to get done before the wave of unemployment rips away the consumer base and sends businesses into freefall.

Also, cycles of speculation in the stock market only work the way they do because of the continuous influx of more investment money
Which does not actually flow back out at anything like the rate of wages, because most of that money is doing nothing but participating in a grand "Bigger Loser" game for years on end.

the key portion of that investment money coming from long-term retirement plans. If you destroy the middle class, you destroy that option for 'making money' as well, so there's no point in throwing money at that anymore.
Not when the reason is that the money stays in the hands of the stock speculators longer, and arrives at them sooner and more often. And given the 2008 crash, they are in fact entirely able to fuck themselves over this way, so you need a plan for after a crash takes down the financial system that would currently be expected to pay for the retraining.

Sure, if we sustain an active and deliberate sabotage of the education system and deliberately make it impossible to train people for useful jobs, then if you combine that with AI several generations more advanced than what we have now, then perpetual high levels of unemployment might be in the works.
I'm not saying it's fucked forever because people cannot learn to deal with it. I'm saying that because there's a massive difference in requirements, and because modern economies are absolutely littered with needs for enormous continuing cash flows, the idea of a transition period that keeps all the plates spinning is nonsensical. And should even one of the very many parts fail due to the employment gap, that failure will cascade into widespread economic problems.
 
No, you are still failing to understand.
No, you are just still living in fantasy.
1. Swiss banks do not just 'put money in a hole in the ground.' They invest like every other bank does. The fact that you think they don't just reveals your own ignorance.
Yes, I am aware of that. But these investments are done by lending money for projects. Which, again, if AI takes over projects, money is going to the machines, not the people. Which, fundamentally, is no different from it sitting in the bank as far as the general populace is concerned.
2. AI are not people. AI are things. They are things that are owned by people. All wealth created by/through the use of AI is owned by a person. It does not matter how capable AI is, it is still, in the end, just a tool. Even if we did some day get to a level of recursive re-investment in AI to make more better AI and more better tools for AI, this is as far as economics is concerned, the same as a particularly skilled engineer having a lathe, 3D printer, and CNC machine in his garage, and using them to produce better tools for himself to use, including a better lathe, 3D printer, and CNC machine. They're still just tools.
AI are tools, yes. Tools that are and will be used to replace people in various roles, whether they are truly capable of it or not.

So what you will end up with is 99% of the people being without work and having to live on government handouts, while 1% basically enjoy the machine-created utopia.
No, it isn't; all tools and automation have replaced human labor, and AI is no different.
Yes, it is. And I have already explained how.

When agriculture constricted, workforce went into industry.
When industry constricted, workforce went into the services and "intellectual" sector.
But when that constricts, what is left?
 
Which, again, if AI takes over projects, money is going to the machines, not the people.
Well, the people who own the machines, but the issue regarding the general population being cut off from a rapidly-increasing share of the economy remains.

whether they are truly capable of it or not.
And that isn't even necessary for it to collapse industries! Just losing the money from the highly paid specialists in a short enough timespan can easily lead to bankruptcies of low-margin middle-man businesses, which can readily cascade into general economic problems. Things are interdependent in ways not seen since the Bronze Age Collapse, so basically any structural change is a major systemic risk.

But when that constricts, what is left?
There actually is a remarkably robust "attention economy" that could support a surprisingly large portion of the population, and there's still a lot of high-variability areas that current AI methodologies are absolutely horrible for. The issue is that even if there's enough jobs to avoid a permanent unemployment crisis, it only takes a year or two for a lot of shit to break in ways that could take decades to sort out.
 
There actually is a remarkably robust "attention economy" that could support a surprisingly large portion of the population,
No, it won't.

It can support the people who can be (performatively) extroverted and charismatic to draw a following, or pump out enough content to ensure they get noticed, but that excludes a whole lot of people who are more comfortable working at a slower pace in the background.

It also doesn't help that copyright makes getting attention and money through the use of older IPs stupid hard. You need avenues for people to prove their competence/get training without A) getting sued into oblivion for using old as dirt stuff*, B) a stupid hard process of greenlighting projects, or C) making high risk monetary investments without guaranteed sales/jobs.

*Thanks Supreme Court of the US for rescinding the statute of limitations on copyright infringement!
 
No.

UBI is what would, in fact, create the economic catastrophe that most people who are paranoid and hysterical about AI are trying to avoid.

AI is a tool that increases productivity (when used properly), meaning more value is added to the economy per man-hour of labor, and thus increases human prosperity.

UBI is paying people to do nothing, and thus turning everyone into dead-end absorbers of wealth. If you pay people to do nothing, you are taking them out of the pool of contribution to society's prosperity, and worse, welfare dependents as an overall class always end up becoming malcontents and vote farms for demagogues, meaning UBI would also degrade society morally, culturally, and politically.

Honestly, people using AI as a justification for UBI is the single biggest threat I see from AI right now.


You are failing to understand a key aspect of how economics works.

When a game development company lays off a third, or even say two thirds, of its employees because they can get the same amount of work done with that many fewer people by using AI, that saves them expense. The value that the company adds to the economy is not 'how many people it employs,' but 'the game(s) it produces.' If it costs them 60 million to make a game instead of 180 million, but it brings in the same amount of revenue, let's say 500 million, the other 120 million in would-be costs does not just 'disappear' from the economy.

Instead the company does some combination of the following: pays dividends to shareholders, who spend or invest the money, hires on some more people so that it can make more projects at once, or invests in assets like office buildings, servers, advertising, etc.

Whichever way the money is spent, it still ends up spent, and continues to circulate through the economy. The fact that it's cheaper to make something does not mean jobs 'disappear.'

How do we know this is true?

Through the recorded effects of every other technological advancement in tools. Improved farming and steam engines didn't result in massive permanent unemployment, it just changed how people worked, and made whole new swathes of goods available to people.

Yes, this can really suck for people who've spent years or decades developing a skill that's now of little to no economic utility; there are specific groups that these transitions are hard on. But money doesn't just 'disappear' from the economy because a better tool makes production cheaper. It is, in fact, the other way around: a better tool making it cheaper to produce something adds value to the economy.


Note, I am purely talking about the fears of 'permanent unemployment' and the like. Not the many other issues that can come with changes to economic structure and relative balances of social power. There absolutely can be very serious problems with such things, but not because a tool makes humans permanently unemployable.
So, you follow the Labor Theory of Value and feel that it's a problem if people don't labor for their income?
 
Well, the people who own the machines, but the issue regarding the general population being cut off from a rapidly-increasing share of the economy remains.
Yep, and that is the issue.
There actually is a remarkably robust "attention economy" that could support a surprisingly large portion of the population, and there's still a lot of high-variability areas that current AI methodologies are absolutely horrible for. The issue is that even if there's enough jobs to avoid a permanent unemployment crisis, it only takes a year or two for a lot of shit to break in ways that could take decades to sort out.
Agreed, although even the "attention economy" can actually be replaced by the AI.
 
So, you follow the Labor Theory of Value and feel that it's a problem if people don't labor for their income?
...No?

Where on earth are you pulling that I follow the labor theory of value?

From a purely economic standpoint, each product or service is functionally worth what its purchaser is willing to pay for it. When we consider non-economic factors, there are certainly reasons beyond just cost of production that a seller might not be willing to sell for what a purchaser is willing to buy for, but when that's the case, you can't make a living producing that product.

And of course there's problems when people don't work for their income. There's basically only a handful of circumstances where people have money they didn't work for.

1. Theft. This is bad.
2. Lottery/luck. This is less bad, but big lottery winners almost always blow the money in a relatively short period of time, and tons of people waste their money trying to be the 'lucky winner' too.
3. Inheritance. This is the least bad, but the 'idle rich' aren't exactly famous for their contributions to society. There are exceptions of course, but while people inheriting enough money to, and then choosing to, live idly off of the labor of their parents and ancestors isn't socially constructive, it's certainly better than the government seizing it via death tax.

Most people don't inherit enough money to live an idle lifestyle off of, of course; my grandmother died a few years ago, and as per her will, her children divided her estate, including the value of her house after it was sold. All of her children were fairly old themselves by that point, one of them retired, and it hardly gave them what they needed to live off of without needing to work, but my family on both sides are long-running middle class, so that's hardly surprising.


All the rest of the arguments on this thread just loop back to the same ones as before.

"This time it will be different!"

"No, it won't. There's a long history of revolutionary technological changes having similar effects on economics on the larger scale, as evidenced from the 1800's on, with examples of X, Y, and Z."

"No, AI is so radical, so revolutionary, it will be different!"

"AI is no more revolutionary than the steam engine, internal combustion engine,, radio, or printed circuit boards, as far as economics is concerned."

"No, it totally is, see this hype article about how incredible it is?"

"Hype is just hype. AI has distinct limitations determined by the basic hardware and logic structures it operates on."

"You're ignorant."


This seems to be the repeated feedback loop of these arguments, and frankly, I'm getting sick of rehashing it to people who just do not care to listen. I'll give at least some credibility to people who are more worried about its negative effects in concert with other things that are significant problems in themselves. Those fears have at least some grounding, but people thinking that AI will result in things like 99% unemployment are just being flatly hysterical.

I'll leave the thread for now with this research paper on the limitations of AI, and its conclusion:


"In this paper, we systematically examine frontier Large Reasoning Models (LRMs) through the lens
of problem complexity using controllable puzzle environments. Our findings reveal fundamental
limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to
develop generalizable reasoning capabilities beyond certain complexity thresholds. We identified
three distinct reasoning regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at
moderate complexity, and both collapse at high complexity. Particularly concerning is the counterin-
tuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent
compute scaling limit in LRMs. Our detailed analysis of reasoning traces further exposed complexity-
dependent reasoning patterns, from inefficient "overthinking" on simpler problems to complete failure
on complex ones. These insights challenge prevailing assumptions about LRM capabilities and
suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.
Finally, we presented some surprising results on LRMs that lead to several open questions for future
work. Most notably, we observed their limitations in performing exact computation; for example,
when we provided the solution algorithm for the Tower of Hanoi to the models, their performance
on this puzzle did not improve. Moreover, investigating the first failure move of the models revealed
surprising behaviors. For instance, they could perform up to 100 correct moves in the Tower of
Hanoi but fail to provide more than 5 correct moves in the River Crossing puzzle. We believe our
results can pave the way for future investigations into the reasoning capabilities of these systems."
 
No.

UBI is what would, in fact, create the economic catastrophe that most people who are paranoid and hysterical about AI are trying to avoid.

AI is a tool that increases productivity (when used properly), meaning more value is added to the economy per man-hour of labor, and thus increases human prosperity.

UBI is paying people to do nothing, and thus turning everyone into dead-end absorbers of wealth. If you pay people to do nothing, you are taking them out of the pool of contribution to society's prosperity, and worse, welfare dependents as an overall class always end up becoming malcontents and vote farms for demagogues, meaning UBI would also degrade society morally, culturally, and politically.

Honestly, people using AI as a justification for UBI is the single biggest threat I see from AI right now.
Disagreement: UBI isn't bad because it'll lead to abstract concepts of "degradation" and "vote-buying," but because it's unenforceable. Without the threat of labor strikes or revolution, the former working class of UBI consumers have no leverage if/when the ruling oligarchy inevitably decides to cut the UBI, assuming that the surveillance state and violence, and consequently the monopoly of force, have already been automated.

It's not a stable equilibrium.
But when that constricts, what is left?
Eneasz Brodski proposed that with advances in biotechnology, economically redundant working-class men can become the transgender gold-digging trophy wives of the robotics company executives and I saved his argument as proof of my theory that there is no depth to which "bootstraps" won't sink.

But more seriously, the plan is pretty obviously for the economically redundant to die. That the oligarchs will scurry off to their bunkers and wait for everyone their machines rendered unemployed and unemployable to starve to death, then come out and enjoy post-scarcity utopia. Or take Jay Gould's advice and start a World War to conscript everyone else into a meatgrinder.
So, you follow the Labor Theory of Value and feel that it's a problem if people don't labor for their income?
AI doesn't threaten capitalism as a system, it threatens your ability to use your labor to earn capital and participate in said system. Capitalism would be fine; it'd just be everyone who wasn't already a multimillionaire robotics company executive who'd be screwed.
Best historical point of comparison would probably be the Highland Clearances genocide. Sheep ranching was a more profitable use of land than the output of families of peasant farmers, so the peasants lost their land and starved.
Charles Trevelyan said:
Thomas Malthus said:
 
Again, just taking 'humans will be totally outcompeted by AI economically' as a given.

Complete hogwash. A tool is a tool; five people with good tools might be able to outperform 50 people with vastly inferior tools, but a tool is still a tool.

Might as well say that 'given that humans have been totally outcompeted by tractors,' and write the human race off a hundred years ago.
 
Again, just taking 'humans will be totally outcompeted by AI economically' as a given.
Well, when you consider most corporations are shareholder value min-max machines that routinely engage in firing to bolster investor confidence in the leadership, why wouldn't you replace humans with AI when at all possible?

Especially when factoring in all the additional regulatory compliance burdens each individual human employee adds?
 
Well, when you consider most corporations are shareholder value min-max machines that routinely engage in firing to bolster investor confidence in the leadership, why wouldn't you replace humans with AI when at all possible?

Especially when factoring in all the additional regulatory compliance burdens each individual human employee adds?
The same reason those companies shouldn't be sacking all their senior, experienced employees in exchange for much younger employees who can't command as high a wage.

Experience matters. Institutional wisdom matters. Performance drops when you lose these things.

The problem, though, is that the Professional Managerial Class is institutionally incompetent, and does not understand the value of these things, nor want to understand said value. We're already seeing another gradual generational turnover as some older corporations become decreasingly competent and effective at their jobs, because they are run by MBAs who are good at speaking corporate slang and giving the right in-group signals rather than good at leading, and who care about the latest corporate trend, like DEI, far more than about actually making a good product or retaining a competent worker base.

Increasingly, the majority of management has no idea how the basic functions of the businesses they manage work. They do not understand what makes a good employee or a bad employee, just what are the 'correct' social signals for such. They also have absolutely no technical competency with understanding what AI can or cannot do, and what it is or is not good for.

These people have, and will continue to, grind their own companies down further and further in search of 'next quarter' profit reports, rather than long-term investment in the viability of their company's workforce, or things like consumer trust.


Maliciously incompetent humans I see as a far larger threat than AI. After all, they are the cause of every modern famine, BS like the covid lockdowns, and an immense list of other economic and human catastrophes.
 
...No?

Where on earth are you pulling that I follow the labor theory of value?

From a purely economic standpoint, each product or service is functionally worth what its purchaser is willing to pay for it. When we consider non-economic factors, there are certainly reasons beyond just cost of production that a seller might not be willing to sell for what a purchaser is willing to buy for, but when that's the case, you can't make a living producing that product.

And of course there's problems when people don't work for their income. There's basically only a handful of circumstances where people have money they didn't work for.

1. Theft. This is bad.
2. Lottery/luck. This is less bad, but big lottery winners almost always blow the money in a relatively short period of time, and tons of people waste their money trying to be the 'lucky winner' too.
3. Inheritance. This is the least bad, but the 'idle rich' aren't exactly famous for their contributions to society. There are exceptions of course, but while people inheriting enough money to, and then choosing to, live idly off of the labor of their parents and ancestors isn't socially constructive, it's certainly better than the government seizing it via death tax.
I'm trying to figure out how you think and what framework you're using. I can't really see where you're coming from and your position doesn't seem coherent. This could just be me not understanding or reading too much into your word choices.

But honestly it's hard not to extrapolate that you're endorsing Labor Theory. You feel it's fine for over 90% of jobs to be lost and all the means of production to be in the hands of techno-oligarchs, because they would then use all the money to create what amounts to a command economy. However because they might not labor, not an assumption you have proven, you feel that it would be wrong for the common folk to have such resources. The fact that the techno-oligarchs you're endorsing taking control of the economy are mostly strongly left-leaning if not openly socialist suggests you agree with their philosophies, otherwise I would expect you to be hesitant about handing them overwhelming wealth and control of the entire economy. It is completely inconsistent for you to hold the position that it's fine for all the money to be concentrated among oligarchs as long as money is being spent, but also not fine for poor people to have it, unless you endorse the Labor Theory of Value.

My impression is that you're a die-hard Marxist who read some of Thomas Sowell's writings and is trying to incorporate them into a Marxist framework. As an example, you left charity out of your list of ways people obtain money, which is typical of far leftists, especially Marxists, who prefer government assistance and consider private charity ineffective. On the other hand I presume, possibly in error, that taxes are not on the list because you call taxes theft, a Libertarian position. You classify goods and services as products and services; this word choice is natural for a person following the Labor Theory of Value, since a natural good that didn't require labor to create doesn't fit into that framework. The idea that it's fine if 93% of jobs are lost and all the money goes to a few techno-oligarchs, because they will then create a command economy and employ people's labor in some unspecified way, is also very Marxist. Or, this could be Horseshoe theory in action and you're leaning so far right you don't realize you've looped around to endorsing socialist positions from the other side.

I haven't managed to wrap my head around the base assumptions and framework you're using and am trying to get clarification.

I was going to say I'm sorry you're leaving, but it seems like you didn't mean it.
 
Experience matters. Institutional wisdom matters. Performance drops when you lose these things.
The problem is that a large enough part of the economy is knowledge-work that the current models would improve performance on, such that widespread deployment will cause some manner of harsh economic crash from the scale of the sudden break in currency flows. It's not something that takes effect gradually, the way iterative improvement and up-front expense introduced tractors to one farm at a time; it's already technologically plausible to put millions of middle-class people out of work overnight.
 
I haven't managed to wrap my head around the base assumptions and framework you're using and am trying to get clarification.

I was going to say I'm sorry you're leaving, but it seems like you didn't mean it.
It's rare to have an actual new poster on this forum, and it's clear you're not familiar with any of my ideology or positions because of that. That in and of itself is fine, but you're also putting words in my mouth, though at least not being rude about it, so I'll cut you some slack.

1. I am not a marxist of any stripe. I'm about as far from a marxist as is possible without going into lolbertarian territory.

2. I do not believe that AI is going to result in something like 93% unemployment. I think current paradigms of AI will, at most, replace something like 30% of jobs, and I think 10% is much more likely. That's the sort of shift an economy could certainly experience growing pains from, but not the catastrophic oligarchification so many people are predicting.

3. You're putting words in my mouth by assuming I believe these catastrophized predictions of what AI will do, and that I'm okay with that catastrophized outcome. I'm not. I don't think it's going to happen, and based on actually having a pretty good idea of how the base methodology of the AI works, and of how history has worked with new tools in the past, I'm very confident that my prediction of what AI will do is more accurate.

4. I did not include charity in my list of 'ways people have unearned money' because I was thinking in terms of 'replacement for earning a living,' and charity is not intentionally given in quantities sufficient to 'make a living' off of. Some grifters actually make a pretty big income this way, but that's not ethical, and if the people they're getting money from knew it, they'd not give them so much, or possibly anything at all.

5. I'm as prone to getting sucked back into arguments as any chronic debate forum participant; somebody engaging in at least a partially new argument or thread of discussion is certainly something that can do that. And I have to say, I think this is one of the first times anyone has ever accused me of being a marxist, of all things.


Also, another tidbit on how AI is vastly less capable than catastrophizers think:

 
2. I do not believe that AI is going to result in something like 93% unemployment. I think current paradigms of AI will, at most, replace something like 30% of jobs, and I think 10% is much more likely. That's the sort of shift an economy could certainly experience growing pains from, but not the catastrophic oligarchification so many people are predicting.
Then you are blending two different arguments together.

If AI replaces 10% of jobs nothing will happen and the entire discussion is completely moot.

You are replying to people who are saying "how does society deal with 90%+ of jobs being replaced with AI" and your response is to act like they are saying "here is how we should respond to 10% of jobs being replaced by AI" because you believe they are wrong about the 90%+ figure.

But you should not do that. You should make two separate arguments.

Argument 1: You are wrong about 90%+ of jobs being replaced. "I" (actually you) think it will be 10%.

Argument 2: (optional. you don't have to get involved). here is what I think will happen / should happen if AI replaces 90%+ of jobs
 
Then you are blending two different arguments together.

If AI replaces 10% of jobs nothing will happen and the entire discussion is completely moot.

You are replying to people who are saying "how does society deal with 90%+ of jobs being replaced with AI" and your response is to act like they are saying "here is how we should respond to 10% of jobs being replaced by AI" because you believe they are wrong about the 90%+ figure.

But you should not do that. You should make two separate arguments.

Argument 1: You are wrong about 90%+ of jobs being replaced. "I" (actually you) think it will be 10%.

Argument 2: (optional. you don't have to get involved). here is what I think will happen / should happen if AI replaces 90%+ of jobs
...I have made both of these arguments. In this latest series of debate I was primarily focusing on why, even if AI pushed 90% of people out of current work, the way economics works means more jobs would be created for them, so unemployment would not be permanent. That argument was framed in an 'if' format, and last page, on Wednesday, I specifically stated that I thought 10% was a far more realistic number.

This also is not the first time I have addressed these kinds of subjects on this thread. Even if someone has not read those older posts, though, it's literally on the prior page.
 
...I have made both of these arguments. In this latest series of debate I was primarily focusing on why, even if AI pushed 90% of people out of current work, the way economics works means more jobs would be created for them
why would those new jobs be done by humans instead of more AI?
honestly, at some point the only job humans would have is "AI supervisor".

this has the potential to be really good, or really bad.
depending on how society handles it.

The thing is, I expect different countries to handle it in different ways. some of which would be really stupid.
 
