AI/Automation Megathread

Again, just taking 'humans will be totally outcompeted by AI economically' as a given.

Complete hogwash. A tool is a tool; five people with good tools might be able to outperform 50 people with vastly inferior tools, but a tool is still a tool.

Might as well have said 'humans have been totally outcompeted by tractors' and written the human race off a hundred years ago.
But farmers DID get outcompeted by tractors. It is just that industry required manual labor instead.

Again:
- when agriculture got more efficient, workforce shifted to industry
- when industry got more efficient, workforce shifted to intellectual work
- when intellectual work gets "more efficient", workforce will shift... where? To proletariat?

That is the issue. AI, once it develops past a certain point, leaves no room for humans to run to.

As for the actual capability of AI... yes, current AI is a hallucinogenic mess. But even that is not stopping the companies from pushing AI. What will happen when AI improves?

AI is not exactly new technology in and of itself, but adaptive AI is. We have yet to see where it will go. Sure, it is possible that an AI capable of replacing humans in certain roles will have to undergo the same 20+ years of learning humans do... but then again, it may not.

Here you use "but AI ran a shop badly" as a "gotcha", while ignoring the fact that it... ran the bloody shop. Just a year or two ago, even proposing such an idea would have been completely ridiculous.

The issue with Claude was not the concept, but the implementation. If you hire a teenager to run a shop, that is still a person with 18 years of experience dealing with other humans and their idiosyncrasies. Claude was, experience-wise, literally a baby. It is like giving Congress the task of running a for-profit company.

But there are ways around it.
 
One of the most impressive things I have seen AI do is program.

I have literally seen it fix bugs, and when I reviewed the code, it was good.
It won't always be good; it still needs human supervision.
But you need fewer and fewer humans every day.

AI also used to be completely unable to draw teeth and hands; now it can.
It is getting better really fast.
 
Let’s put it this way:
Given how terrible the job market is on top of everything else, do we really want to risk adding AI on top of all the things that could/can go wrong?
 
But farmers DID get outcompeted by tractors. It is just that industry required manual labor instead.

Again:
- when agriculture got more efficient, workforce shifted to industry
- when industry got more efficient, workforce shifted to intellectual work
- when intellectual work gets "more efficient", workforce will shift... where? To proletariat?

That is the issue. AI, once it develops past a certain point, leaves no room for humans to run to.

As for the actual capability of AI... yes, current AI is a hallucinogenic mess. But even that is not stopping the companies from pushing AI. What will happen when AI improves?

AI is not exactly new technology in and of itself, but adaptive AI is. We have yet to see where it will go. Sure, it is possible that an AI capable of replacing humans in certain roles will have to undergo the same 20+ years of learning humans do... but then again, it may not.

Here you use "but AI ran a shop badly" as a "gotcha", while ignoring the fact that it... ran the bloody shop. Just a year or two ago, even proposing such an idea would have been completely ridiculous.

The issue with Claude was not the concept, but the implementation. If you hire a teenager to run a shop, that is still a person with 18 years of experience dealing with other humans and their idiosyncrasies. Claude was, experience-wise, literally a baby. It is like giving Congress the task of running a for-profit company.

But there are ways around it.
And here we are, just making the same circle again. I went back and looked at the start of the thread, and literally my first argument presented here still has not been answered, no matter how many times I present it.

AI is not magic. It's a mathematical process. It does not have unlimited growth, it is not capable of learning in the same way as humans. No matter how much current-architecture AI is improved, it will never be able to do many things, because that is literally baked into what it is.

You can keep saying 'but what will it be in the future?' But this is basically a paean to science fiction, with no actual substance or meaning. Until you can actually parse reality from fictional dreams, you're not going to be able to engage with this issue in a constructive way.

The 'AI runs a shop badly' point isn't just about 'it does it badly,' but why it does it badly. The AI doesn't just make mistakes; it fundamentally fails to understand what it is supposed to do, and starts posting hallucinatory, delusional messages. The reason is that LLM AI has no understanding of value or meaning. Literally all it does, and can do, is recognize patterns, then regurgitate things it sees in those patterns back to you. It has no idea what any of those patterns mean.
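To make the "pattern in, pattern out" point concrete, here is a toy sketch. This is a bigram model, enormously simpler than a real LLM, and all the names and the tiny corpus are invented for the example. The point it illustrates: the program reproduces statistically plausible word sequences purely from observed co-occurrence counts, with no notion of what any of the words refer to.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which: pure pattern frequency, no meaning."""
    pairs = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        pairs[a].append(b)
    return pairs

def generate(pairs, start, length=6, seed=0):
    """Walk the observed patterns; the 'model' knows only co-occurrence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = pairs.get(out[-1])
        if not followers:
            break  # never saw anything follow this word
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word it emits is a word it has seen follow the previous one; it will happily produce fluent-looking nonsense, because fluency is the only thing it is tracking.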


Further, you're wrong about the idea of AI running a shop being bloody ridiculous two years ago. Someone could have easily written a dedicated program to run a shop twenty years ago, and it would have done it better. Rather than trying to use LLM-based 'reasoning' as a one-size-fits-all solution, a coder would have defined a set number of things the program could and could not do, written the code to enable it to interact with vendors, set prices, make and receive payments, etc., and then set basic parameters so that it was not selling products at a loss.

It would not be 'full-service automation'; you'd need someone to check in on it every once in a while and code in new solutions to problems not initially thought of, but it absolutely could have been done. The basic coding capability to do such things existed in the 90s, but the internet infrastructure needed would not have.
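A minimal sketch of what that kind of dedicated program looks like (everything here is hypothetical: the class, the method names, and the 10% minimum-margin rule are all invented for illustration). The "don't sell at a loss" parameter is just a hardcoded check, not something the program reasons about:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    unit_cost: float  # what we pay the vendor
    stock: int

class RuleBasedShop:
    """A fixed menu of allowed actions with hardcoded guardrails."""

    def __init__(self, min_margin=0.10):
        self.min_margin = min_margin  # never price below cost + 10%
        self.inventory = {}
        self.prices = {}

    def restock(self, product):
        self.inventory[product.name] = product
        # Price is always derived from cost; the program cannot "decide" otherwise.
        self.prices[product.name] = round(product.unit_cost * (1 + self.min_margin), 2)

    def quote(self, name):
        return self.prices[name]

    def sell(self, name, offer):
        p = self.inventory[name]
        floor = p.unit_cost * (1 + self.min_margin)
        if offer < floor or p.stock <= 0:
            return False  # refuse loss-making or impossible sales
        p.stock -= 1
        return True

shop = RuleBasedShop()
shop.restock(Product("cola", unit_cost=1.00, stock=10))
print(shop.quote("cola"))       # 1.1
print(shop.sell("cola", 0.50))  # False: below the cost floor
print(shop.sell("cola", 1.50))  # True
```

Such a program cannot hallucinate a new pricing policy, because pricing is not something it reasons about; it only does what its fixed rules allow, which is exactly the trade-off being described.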

To make this clearer to you: not only has the technology to do something like this existed since long before the attention-grabbing modern LLM AI, the other models for doing it were better. They just don't have the hype that people have built up around modern AI. You didn't know it could be done, because it didn't have all the hype.


The fact that all these people presenting hysterical arguments about AI permanently unemploying 99% of people don't know basic facts about what technology was already capable of is one of the reasons I don't take such arguments seriously.

Let’s put it this way:
Given how terrible the job market is on top of everything else, do we really want to risk adding AI on top of all the things that could/can go wrong?
Unfortunately for this line of thought, AI continuing to be developed is not something we have much choice on.

In the first place, there is no wide-spread move against AI. You wouldn't be able to stop this through a political movement any time in the near future, and as AI continues to fail to live up to apocalyptic predictions, that trend will continue.

In the second place, even if you did somehow have a large-scale political movement magically appear out of nowhere, let's say 75% of Americans get hardcore behind stopping AI development, that still wouldn't meaningfully be able to stop it within the US. It's something that anyone moderately rich with a few server clusters can continue development of, though it'd take someone seriously rich to continue it at the same pace it is currently moving at. Trying to enforce a ban would be absurdly difficult.

In the third place, that's just trying to stop it in the US. You'd need another magical appearance of an anti-AI movement in every western country to stop it in all of them, but then it'd just continue in dictatorships, who would be happy to keep working on such things to try to gain an advantage over the West.


In short, stopping such things isn't going to happen. It's a good thing all the hysterical predictions are wrong, because if they were correct, we'd be fairly screwed, since this is a train that isn't going to be stopped by anything short of a catastrophe that crashes the internet semi-permanently.
 
AI is not magic. It's a mathematical process. It does not have unlimited growth, it is not capable of learning in the same way as humans. No matter how much current-architecture AI is improved, it will never be able to do many things, because that is literally baked into what it is.
The human brain and human thought are also mathematical processes. The only things AI cannot entirely simulate are art and emotions, because it doesn't have hormones.
You can keep saying 'but what will it be in the future?' But this is basically a paean to science fiction, with no actual substance or meaning. Until you can actually parse reality from fictional dreams, you're not going to be able to engage with this issue in a constructive way.

The 'AI runs a shop badly' point isn't just about 'it does it badly,' but why it does it badly. The AI doesn't just make mistakes; it fundamentally fails to understand what it is supposed to do, and starts posting hallucinatory, delusional messages. The reason is that LLM AI has no understanding of value or meaning. Literally all it does, and can do, is recognize patterns, then regurgitate things it sees in those patterns back to you. It has no idea what any of those patterns mean.
Unfortunately, I am not old enough for 20 years from now to be science fiction to me. And while you are correct that current AI is not capable of doing a lot of things, perception of reality is often more important than reality itself.
Further, you're wrong about the idea of AI running a shop being bloody ridiculous two years ago. Someone could have easily written a dedicated program to run a shop twenty years ago, and it would have done it better. Rather than trying to use LLM-based 'reasoning' as a one-size-fits-all solution, a coder would have defined a set number of things the program could and could not do, written the code to enable it to interact with vendors, set prices, make and receive payments, etc., and then set basic parameters so that it was not selling products at a loss.

It would not be 'full-service automation'; you'd need someone to check in on it every once in a while and code in new solutions to problems not initially thought of, but it absolutely could have been done. The basic coding capability to do such things existed in the 90s, but the internet infrastructure needed would not have.

To make this clearer to you: not only has the technology to do something like this existed since long before the attention-grabbing modern LLM AI, the other models for doing it were better. They just don't have the hype that people have built up around modern AI. You didn't know it could be done, because it didn't have all the hype.
Yes, I am aware of that. Artificial intelligence capable of some degree of learning had existed for decades already, and AI as a concept for far longer. But all of that was basically "hardcoded" - they could do exactly what they had been programmed for and no more.
 
It's rare to have an actual new poster on this forum, and it's clear you're not familiar with any of my ideology or positions because of that. That in and of itself is fine, but you're also putting words in my mouth, though at least not being rude about it, so I'll cut you some slack.

1. I am not a marxist of any stripe. I'm about as far from a marxist as is possible without going into lolbertarian territory.

2. I do not believe that AI is going to result in something like 93% unemployment. Current paradigms of AI I think will, at most, replace something like 30% of jobs, and I think 10% is much more likely. The sort of thing an economy could certainly experience growing pains from shifting around, but not the catastrophic oligarchification so many people are predicting.

3. Putting words in my mouth here means assuming I believe these catastrophized predictions of what AI will do, and that I am okay with that catastrophized outcome. I'm not. I don't think it's going to happen, and based on having a pretty good idea of how the base methodology of the AI works, and of how history has played out with new tools in the past, I'm very confident that my prediction of what AI will do is more accurate.

4. I did not include charity in my list of 'ways people have unearned money' because I was thinking in terms of 'replacement for earning a living,' and charity is not intentionally given in quantities sufficient to 'make a living' off of. Some grifters actually make pretty big income this way, but that's not ethical, and if the people they're getting money from knew it, they'd not give them so much, or possibly anything at all.

5. I'm as prone to getting sucked back into arguments as any chronic debate forum participant; somebody engaging in at least a partially new argument or thread of discussion is certainly something that can do that. And I have to say, I think this is one of the first times anyone has ever accused me of being a marxist, of all things.


Also, another tidbit on how AI is vastly less capable than catastrophizers think:


Hmm, well I don't intend to put words in your mouth but let me see if I can get a better understanding. I feel like some of your positions are contradictory.

You feel that UBI would be disastrous because everybody will stop working. This is an extraordinary claim that I wouldn't mind seeing your proof of, as it flies in the face of both common sense and the observable behavior of humans worldwide. Are you taking the result from a UBI experiment or similar test case?

Why wouldn't the people controlling the AI also stop working once they have enough money to live on? Are you perhaps leaning on Lay's Law? Or do you believe there is an innate superiority to men like Michael Bloomberg and Bill Gates, who continue to work long after they have enough income to live off?
 
Hmm, well I don't intend to put words in your mouth but let me see if I can get a better understanding. I feel like some of your positions are contradictory.

You feel that UBI would be disastrous because everybody will stop working. This is an extraordinary claim that I wouldn't mind seeing your proof of, as it flies in the face of both common sense and the observable behavior of humans worldwide. Are you taking the result from a UBI experiment or similar test case?

Why wouldn't the people controlling the AI also stop working once they have enough money to live on? Are you perhaps leaning on Lay's Law? Or do you believe there is an innate superiority to men like Michael Bloomberg and Bill Gates, who continue to work long after they have enough income to live off?
This gets into a lot of things influenced by multiple factors. I'll try to break it down into individual factors to keep it easy to comprehend.

1. If you pay people to do something, they're more likely to do that thing. If you don't pay them to do something, they're less likely to do that. This is true when the something is 'not working.' So, if you pay people not to work, they're a lot less likely to work.

2. Different people have different priorities. Most people have what is called a 'high time preference,' or worded more directly, they favor short-term satisfaction over long-term benefit. They'd rather have five dollars now than ten dollars tomorrow, if given the choice. Most people would choose cheap subsidized housing, food stamps, etc., with cheap entertainment, rather than working hard for a meaningfully higher standard of living. It's also easier to bitch about how unfair it is that some people have more stuff than them, and vote for politicians who promise them more free stuff, than it is to actually work harder and earn a better standard of living.

3. Some people value a sense of accomplishment more than easy living. Some people value being high status more than easy living. Some people want both of these things more. Some people are obsessive about their particular field of work/study/interest, and if they have the opportunity will push its boundaries even for shit pay, let alone for increasingly good pay. These sorts of personalities are much less common than those who'd prefer to laze about in an easy lifestyle, but they are the sort who work 80-hour weeks at law firms, obsess over trying to make a scientific or engineering breakthrough, etc. They're also the sort of people who will keep working 80-hour weeks when they've made enough money to live comfortably for the rest of their lives, and then some.

4. Some people desire power, specifically power over others, more than literally anything else. These are the ones who you really have to watch out for, and take the world all kinds of destructive places. The sort who will happily implement a UBI if it buys them the votes to perpetually stay in power, even as they do all kinds of horrific and destructive things with that power.


You speak of 'observable behavior of humans worldwide' and 'common sense.' Observable behavior of humans is that creating a welfare state increases crime rates and government dependency, especially if there are few or no limits on what conditions one is permitted onto the dole.

In the 1990s, after gaining control of the legislature for the first time in decades, the Republicans forced through a welfare reform that required that people on welfare actually be trying to get jobs, rather than staying on it indefinitely. I forget the exact figures, but between the time the bill was passed and when it actually went into effect, something like a third of the people on long-term welfare went out and got jobs.

They had always been able; they had just lacked the impetus to do so.


It isn't in middle class communities with high employment rates that you see high violent crime rates. It isn't in wealthy neighborhoods with high employment that you see high crime rates. It's in welfare slums, where large numbers of people are unemployed and living off the dole. Being welfare dependents isn't even the primary factor; single parenthood is the most immediately linked one, but that also ties into the distortive effect that welfare programs have on marriage and divorce rates.


Bottom line, 'common sense' will tell you that if you pay people to do nothing, you'll get a lot more people doing nothing. History shows us that if you normalize 'the state owes me a comfortable living without me lifting a finger,' your national politics will be dominated by the political conflict between tax-payers and tax-absorbers thereafter. This already happens to a large degree in almost all western countries.


Further, from an economic perspective, welfare/UBI/any similar thing with a different name is the ultimate corroder. You are paying people not to contribute. You are punishing people for contributing by taking their stuff and giving it to people who are not contributing. This is one of the most effective ways to crush an economy, and is a key part of why communism always fails.

As has been my argument since the start of this thread, implementing UBI would create exactly the kind of catastrophe that people hysterical about AI claim to be trying to avert.
 
As has been my argument since the start of this thread, implementing UBI would create exactly the kind of catastrophe that people hysterical about AI claim to be trying to avert.
Good enough AI would make UBI basically unavoidable, however.

Either that, or let 99% of people starve to death.
 
Good enough AI would make UBI basically unavoidable, however.

Either that, or let 99% of people starve to death.
No, those are not the only two options.
You can, for example, have unlimited govt employment, primarily in the military or research.
You can finally get around to building proper space infrastructure and expanding into space, building artificial spinning habitats.
The main job people will do is supervising robots.
 
You can, for example, have unlimited govt employment, primarily in the military or research.
You do not want to do this, at least not without massive reforms.

What'll happen is that people who're in the system will make position descriptions that are biased towards their friends/families, so unqualified people will get in, and stay in. The federal government system makes it stupid hard to fire people once they're government employees, and the federal contractor system seems like its own massive grift.

That whole pipeline needs to be purged and rebuilt before you can even consider something like that.
 
You do not want to do this, at least not without massive reforms.

What'll happen is that people who're in the system will make position descriptions that are biased towards their friends/families, so unqualified people will get in, and stay in. The federal government system makes it stupid hard to fire people once they're government employees, and the federal contractor system seems like its own massive grift.

That whole pipeline needs to be purged and rebuilt before you can even consider something like that.
... bruh, did you read what you replied to?
How did you get from
> govt employs 100% of people because robots make everything
to
> govt will only hire relatives of govt

And I even literally explained that the govt hiring would primarily be military.
 
Good enough AI would make UBI basically unavoidable, however.

Either that, or let 99% of people starve to death.
>good enough
The key words. Without a whole new revolution in AI tech, the current kind most likely will not be; diminishing returns have already hit it hard.

No, those are not the only two options.
You can, for example, have unlimited govt employment, primarily in the military or research.
You can finally get around to building proper space infrastructure and expanding into space, building artificial spinning habitats.
The main job people will do is supervising robots.
Most of the people with any use in military or research would not have to worry about getting completely pushed out of the market by AI anytime soon anyway.
 
how did you get from
> govt employs 100% of people because robots make everything
to
> govt will only hire relatives of govt
Because that's what's happening in government hiring now. Entire swathes of their civilian hiring are nothing but nepotism hires.
 
Most of the people with any use in military or research would not have to worry about getting completely pushed out of the market by AI anytime soon anyway.
You have it backwards.
The military can always accept more people, and can accept literally everyone, so long as it has a sufficient budget of resources allocated to it.

If you manage to get a futuristic-style AI that replaces almost all jobs,
then hiring 99% of the population to be soldiers and 1% to be researchers makes sense.
Because that's what's happening in government hiring now. Entire swathes of their civilian hiring are nothing but nepotism hires.
> literally uses the term civilian hiring, to distinguish from military recruitment
> pretends military recruitment is the same as said civilian hiring

This is bad faith argumentation.
>good enough
The key words. Without a whole new revolution in AI tech, the current kind most likely will not be; diminishing returns have already hit it hard.
The argument was explicitly about a scenario where 90% to 99% of jobs are replaced by AI. This is explicitly talking about NON-reasoning AI.
You are conflating it with future tech where 100% of jobs are replaced by self-reasoning AI. We all know that LLMs are not actual people.
 
You have it backwards.
The military can always accept more people, and can accept literally everyone, so long as it has a sufficient budget of resources allocated to it.
Then it's no longer a military, but a glorified welfare program with guns.
If you manage to get a futuristic-style AI that replaces almost all jobs,
then hiring 99% of the population to be soldiers and 1% to be researchers makes sense.
No, it does not, unless you like losing wars and wasting money trying to train people on military equipment they are not suited to understand. In that unlikely scenario, you may as well call it a public service corps, a civil defense program, or something similar.
 
Maybe, and here's a novel idea, we should make it so that people's whole identity and right to life isn't defined by something as arbitrary as "having a career". For a lot of people in modern society, having a full-time job is like being paid to be in fucking prison. We should be intentionally plowing into whatever technologies we possibly can to make working itself obsolete.

"But, what will people do?"

Sit back and enjoy machine-generated abundance. What else will they do? We will have officially won the game.

Motherfucker, I am tired of this Protestant work ethic bullshit where people look at the means to achieve post-scarcity and then suddenly feel paralyzed with fear at the idea that someone, somewhere, might be relieved from toil by it, like it's a moral necessity for people to work.

My father has no work-life balance and often works several days of overtime during his off weeks, well into his goddamn sixties, after his employer fucking poisoned him with a mandatory COVID-19 vaccination that gave him fucking diabetes. Work, for him, is a kind of inescapable mania. A slow suicide, where I am forced to watch a loved one disintegrate before my very fucking eyes.

Work is not inherently noble. Unless your job is also your hobby, every minute spent laboring is a minute not spent with family, or vacationing, or studying, or doing what you actually want to do with your life. If AI can liberate us from this, then we must pursue it with the utmost fervor.
 
Maybe, and here's a novel idea, we should make it so that people's whole identity and right to life isn't defined by something as arbitrary as "having a career". For a lot of people in modern society, having a full-time job is like being paid to be in fucking prison. We should be intentionally plowing into whatever technologies we possibly can to make working itself obsolete.

"But, what will people do?"

Sit back and enjoy machine-generated abundance. What else will they do? We will have officially won the game.

Motherfucker, I am tired of this Protestant work ethic bullshit where people look at the means to achieve post-scarcity and then suddenly feel paralyzed with fear at the idea that someone, somewhere, might be relieved from toil by it, like it's a moral necessity for people to work.
>achieve post scarcity

Sorry, not in this century for sure, possibly not this millennium. This is reality, nothing to do with any work ethic bullshit.

My father has no work-life balance and often works several days of overtime during his off weeks, well into his goddamn sixties, after his employer fucking poisoned him with a mandatory COVID-19 vaccination that gave him fucking diabetes. Work, for him, is a kind of inescapable mania. A slow suicide, where I am forced to watch a loved one disintegrate before my very fucking eyes.

Work is not inherently noble. Unless your job is also your hobby, every minute spent laboring is a minute not spent with family, or vacationing, or studying, or doing what you actually want to do with your life. If AI can liberate us from this, then we must pursue it with the utmost fervor.
Even in the unlikely scenario that in a few decades we get a benevolent Skynet, that still would not count as post-scarcity (more like low scarcity): resources and real estate would still be limited, and so would all goods and services not provided by machines, or made from any sort of rare materials. That would revolutionize the pricing of many things. For example, a few hours of Fiverr-equivalent work (though limited to stuff AI can't do) would get you a month's groceries, but on the other hand, if you wanted a few hours of a human doctor's or chef's time, that would cost you a relatively large amount of money.
 
Maybe, and here's a novel idea, we should make it so that people's whole identity and right to life isn't defined by something as arbitrary as "having a career". For a lot of people in modern society, having a full-time job is like being paid to be in fucking prison. We should be intentionally plowing into whatever technologies we possibly can to make working itself obsolete.

"But, what will people do?"

Sit back and enjoy machine-generated abundance. What else will they do? We will have officially won the game.

Motherfucker, I am tired of this Protestant work ethic bullshit where people look at the means to achieve post-scarcity and then suddenly feel paralyzed with fear at the idea that someone, somewhere, might be relieved from toil by it, like it's a moral necessity for people to work.

My father has no work-life balance and often works several days of overtime during his off weeks, well into his goddamn sixties, after his employer fucking poisoned him with a mandatory COVID-19 vaccination that gave him fucking diabetes. Work, for him, is a kind of inescapable mania. A slow suicide, where I am forced to watch a loved one disintegrate before my very fucking eyes.

Work is not inherently noble. Unless your job is also your hobby, every minute spent laboring is a minute not spent with family, or vacationing, or studying, or doing what you actually want to do with your life. If AI can liberate us from this, then we must pursue it with the utmost fervor.
There is a difference between working hard for a living and taking pride in it, and being worked to death like a slave.
Except slaves in the South were better treated. People in the USA in particular work very hard for peanuts.
 
This gets into a lot of things influenced by multiple factors. I'll try to break it down into individual factors to keep it easy to comprehend.

1. If you pay people to do something, they're more likely to do that thing. If you don't pay them to do something, they're less likely to do that. This is true when the something is 'not working.' So, if you pay people not to work, they're a lot less likely to work.

2. Different people have different priorities. Most people have what is called a 'high time preference,' or worded more directly, they favor short-term satisfaction over long-term benefit. They'd rather have five dollars now than ten dollars tomorrow, if given the choice. Most people would choose cheap subsidized housing, food stamps, etc., with cheap entertainment, rather than working hard for a meaningfully higher standard of living. It's also easier to bitch about how unfair it is that some people have more stuff than them, and vote for politicians who promise them more free stuff, than it is to actually work harder and earn a better standard of living.

3. Some people value a sense of accomplishment more than easy living. Some people value being high status more than easy living. Some people want both of these things more. Some people are obsessive about their particular field of work/study/interest, and if they have the opportunity will push its boundaries even for shit pay, let alone for increasingly good pay. These sorts of personalities are much less common than those who'd prefer to laze about in an easy lifestyle, but they are the sort who work 80-hour weeks at law firms, obsess over trying to make a scientific or engineering breakthrough, etc. They're also the sort of people who will keep working 80-hour weeks when they've made enough money to live comfortably for the rest of their lives, and then some.

4. Some people desire power, specifically power over others, more than literally anything else. These are the ones who you really have to watch out for, and take the world all kinds of destructive places. The sort who will happily implement a UBI if it buys them the votes to perpetually stay in power, even as they do all kinds of horrific and destructive things with that power.


You speak of 'observable behavior of humans worldwide' and 'common sense.' Observable behavior of humans is that creating a welfare state increases crime rates and government dependency, especially if there are few or no limits on what conditions one is permitted onto the dole.

In the 1990s, after gaining control of the legislature for the first time in decades, the Republicans forced through a welfare reform that required people on welfare to actually be trying to get jobs, rather than just staying on the rolls indefinitely. I forget the exact figures, but between the time the bill was passed and when it actually went into effect, something like a third of long-term welfare recipients went out and got jobs.

They had always been able to; they had just lacked the impetus to do so.


It isn't in middle class communities with high employment rates that you see high violent crime rates. It isn't in wealthy neighborhoods with high employment that you see high crime rates. It's in welfare slums, where large numbers of people are unemployed and living off the dole. Being welfare dependents isn't even the primary factor; single parenthood is the most immediately linked one, but that also ties into the distortive effect that welfare programs have on marriage and divorce rates.


Bottom line, 'common sense' will tell you that if you pay people to do nothing, you'll get a lot more people doing nothing. History shows us that if you normalize 'the state owes me a comfortable living without me lifting a finger,' your national politics will be dominated by the political conflict between tax-payers and tax-absorbers thereafter. This already happens to a large degree in almost all western countries.


Further, from an economic perspective, welfare/UBI/any similar thing with a different name is the ultimate corroder. You are paying people not to contribute. You are punishing people for contributing by taking their stuff and giving it to people who are not contributing. This is one of the most effective ways to crush an economy, and is a key part of why communism always fails.

As has been my argument since the start of this thread, implementing UBI would create exactly the kind of catastrophe that people hysterical about AI claim to be trying to avert.
Well, my curiosity is sated as to where your thoughts are coming from.

This is going to be a bitter pill to swallow and I'm not sure you're up to facing it. You've been lied to, and you're basing your worldview on untruths. My current suspicion is that you take in an unhealthy amount of clickbait and ragebait and possibly prosperity gospel.

Everything you said was not only false but literally the exact opposite of the truth.


The drop (much less than 30%) in Welfare happened in late 1992/early 1993. The law you claimed caused it was passed four years later, in 1996. The trend of people getting off welfare ended as soon as PRWORA, the welfare reform you're talking about, was passed and welfare increased immediately after. I tend to pay a lot of attention to word choices, and I suspect whoever misled you very deliberately used the phrase "mid nineties" as you did, to obfuscate the real dates and hide that the effect came before what they claimed was the cause.

So you actually just proved the exact opposite of your claim. People will, as anyone who observes reality and has common sense can attest, get off of public assistance and get jobs as soon as possible; you don't need any impetus, it's built into human nature. Further, the law you think got people off welfare at best did nothing and at worst increased it.

We also see a much more impressive drop in welfare during the 80s, again not coinciding with any special legal impetus. During the dreary stagflation days of Jimmy Carter, people needed help. When the economy boomed under Reagan, they found jobs the moment they could and quit welfare. Things soured under the Bush Sr. administration, but the economy picked up immediately when Clinton was elected and people got jobs as soon as they were available, again with no impetus. The dotcom bust sent the economy south again near the end of Clinton's second term, and despite the imagined impetus from PRWORA, people went back on welfare, because jobs weren't available. Just making sure jobs exist will solve the problem.

Your other statements in this thread are similarly untrue. For example, you slandered lottery winners as usually blowing all their winnings. Not only is this untrue; a look at jackpot winner behavior reinforces the truth about work.

As an aside, I asked if you were informed by UBI test cases, but you came back with welfare. That's an apples-to-oranges comparison. Welfare is deliberately designed with "cliffs": get your wages above a threshold and you lose more in benefits than you gained in wages, making a family worse off the more they work. That is not paying people not to work; it's punishing people for asking for a raise or hustling to get ahead. It's difficult to imagine a more malicious design than this "impetus" of yours. I think it exists to make sure corporations like Walmart and Amazon have a steady supply of subsidized workers who are afraid to ask for a raise because they'll lose more than they gain. Walmart in particular profited hugely off of Clinton's presidency, and the laws he pushed, such as welfare reform, turned them into the powerhouse they are today.

UBI has no such cliffs, so any comparison you make between the two is guaranteed to be erroneous. It does not pay people not to work, much less punish them for working. As you made the positive claim, I'm interested to see what you find if you do real research and compare actual UBI test programs; I haven't looked at any myself, so do tell us, please.

But back to lottery winners. A jackpot is far closer to UBI than welfare is, so it makes a better comparison. It is a steady income with no cliffs that does not punish working and hustling. A lotto winner also has all the money they need for a more comfortable life, far more than any UBI proposal offers. Do they stop working?


No. Lotto winners keep working at a rate of 85.5%, which is actually higher than the average. This pattern has been confirmed in many studies across multiple countries. Two-thirds of them even keep working the same job. As for bankruptcy? I think you probably got that off yet another ragebait anecdote site. Lottery winners declare bankruptcy about 1 time in 3, somewhat up from the average of 1 in 5. But the average millionaire in the US declares bankruptcy 3.5 times (this average is pulled up by some of them filing serial bankruptcies; the median is closer to lotto winners). Bankruptcy at the millionaire level and above is not a result of blowing all their cash; it's a common method for moving around assets and evading creditors.

So if millionaire lottery winners keep working, why would UBI recipients stop? Again, this is just common sense and basic observation of human behavior.

I will note, from reading this thread in its entirety and looking at your other posts, that you appear to be a spiritual man. So I will give you one last point, drawn from the Bible itself, with Jesus' own words at John 5:17.

But he [Jesus] answered them: "My Father has kept working until now, and I keep working."

Man is made in the image of God and God keeps continually working despite owning the entire universe. Denying this spiritual truth verges on blasphemy.
 
