AI/Automation Megathread

Agent23

Ни шагу назад!
Thus far it has proven that it can:
Plagiarize.
Make shit up.
Misunderstand simple things.
Demonstrate no common sense.
Stretch out and rephrase a few sentences into multiple pages of academic-sounding gibberish.

So it can replace much of academia and general education, i.e. remove stuff that adds no value to begin with.
 

LordsFire

Internet Wizard
It is improving noticeably, though, which means that, so long as the technology continues to do so, it's possible it will eventually become indistinguishable from the real thing.
What I think it will certainly end up doing is producing a 'floor.' Give the bot a few million cycles, and it'll put out a generic piece of media for you.

If human creators can't make something better than that, they won't be able to make a profit anymore.

Past that, who knows?
 

Typhonis

Well-known member
How long till the creators beg the government for help? "Please protect our jobs from the soulless machines."
 

Bear Ribs

Well-known member
What I think it will certainly end up doing is producing a 'floor.' Give the bot a few million cycles, and it'll put out a generic piece of media for you.

If human creators can't make something better than that, they won't be able to make a profit anymore.

Past that, who knows?
I'd tend to agree with this; it's how other automation systems have gone in the past. Except for the "generic" part, as we're already well past that point. You don't want generic; you want Chuck Norris, or maybe Daisy Ridley's your waifu for some reason?


We've got AI trained to put Chuck Norris' face on whatever body you want, or Daisy Ridley in whatever position you please. That's how they did the Donald Trump arrest fakes. But what if it's not a character who's your waifu but a style? You say you love the 80s-90s early anime aesthetic?


AI's been trained to produce that exact style. Or maybe you're too young for the early anime and want Clone Wars?


Yeah, the whole idea that it produces generic art died a month or two ago. Further, you can plug these processes together to easily create entirely new but entirely consistent results, producing Daisy Ridley in the Clone Wars style or your own comic featuring Chuck Norris in a 90s anime style.

As for past that, the floor is going to keep rising. A few naysayers keep acting like what it can do this week is the limit and AI won't grow from here. Sometimes they insist there are magical tech limits, often insisting AI will never be able to do things it's doing right now*. This just isn't true; in the space of two months we've gone from "Doesn't have enough continuity for static comic books" to "The animation is incredibly janky" to "I can still see some artifacts but it's getting smoother."

As the floor keeps rising, jobs for humans get ever scarcer and wages keep falling.

*Mocking them will never get old for me.
 

Morphic Tide

Well-known member
This just isn't true; in the space of two months we've gone from "Doesn't have enough continuity for static comic books" to "The animation is incredibly janky" to "I can still see some artifacts but it's getting smoother."
That you cannot see these are just rephrasings of the same complaint, that it is shit due to instability, is a pretty clear demonstration that you never understood my position in the first place. It is specifically that, being giant fuzzy blobs of brute-force inferences and conjectures, the current methodology is fundamentally incapable of having specific featural inaccuracies fine-tuned away. As such it requires non-trivial changes in methodology, whether hardware vaguely capable of supporting the straightforward but beyond-expected-curve solutions, or software foundations that properly operate on specific features. Those methodology changes are inherently unpredictable.

Simple as that: By my understanding of the current method, what we are seeing cannot scale to the expected end result. It does not learn what "a hand" is as a discrete feature, it does not learn how perspective operates, it learns an abstract generalization that spits out approximations of these things. Essentially, I am arguing that we possess low-pressure tank-fed reciprocating steam engines, and the task requires high-pressure watertube turbines. It's the vague category of thing, but not actually a good enough kind of that thing.
 

Typhonis

Well-known member
So in other words, we might get some cartoons that aren't CalArts trash soon?
Give it time. Unless some idiot decides they want CalArts-style animation... hmm. Any actor's face on any body, or just putting them in totally?

Welp... goodbye porn industry, we hardly knew ye.
 

Bear Ribs

Well-known member
That you cannot see these are just rephrasings of the same complaint, that it is shit due to instability, is a pretty clear demonstration that you never understood my position in the first place. It is specifically that, being giant fuzzy blobs of brute-force inferences and conjectures, the current methodology is fundamentally incapable of having specific featural inaccuracies fine-tuned away. As such it requires non-trivial changes in methodology, whether hardware vaguely capable of supporting the straightforward but beyond-expected-curve solutions, or software foundations that properly operate on specific features. Those methodology changes are inherently unpredictable.

Simple as that: By my understanding of the current method, what we are seeing cannot scale to the expected end result. It does not learn what "a hand" is as a discrete feature, it does not learn how perspective operates, it learns an abstract generalization that spits out approximations of these things. Essentially, I am arguing that we possess low-pressure tank-fed reciprocating steam engines, and the task requires high-pressure watertube turbines. It's the vague category of thing, but not actually a good enough kind of that thing.
My dude, I do understand your claims. I just also know you are wrong. You literally claimed things we see historically, like commercial steam engines, are fundamentally impossible. You've told us hard limits on AI would prevent it from ever doing things we saw it doing before your posts. You have zero credibility due to having been clowned by AI doing things you keep insisting it's fundamentally impossible for it to do, repeatedly.
 

Morphic Tide

Well-known member
You literally claimed things we see historically, like commercial steam engines, are fundamentally impossible.
The example where a completely different power-generation technology is what made it economically viable? Exactly the argument I'm making? In which we do not know how long it will take, nor is it actually certain to be possible?
"Steam-powered personal vehicles never would have worked" =/= "steam engines would never work".

You've told us hard limits on AI would prevent it from ever doing things we saw it doing before your posts.
But because we physically can't brute-force it by throwing more transistors at it, it is uncertain. This is "fusion is 10 years away!" thinking, the kind that's been wrong for the last 50 years: because visible progress is being made, it's assumed to be close to being sold to the end user. Maybe it's 5 years, maybe it's 10, maybe the technical challenges will keep cropping up for the next 50 years like they have for fusion.
"Hard limits on blunt transistor packing" =/= "hard limits on all AI".

You have zero credibility due to having been clowned by AI doing things you keep insisting it's fundamentally impossible for it to do, repeatedly.
"This way of doing it cannot do thing" =/= "thing cannot be done".

You have consistently argued against statements that I never made, and have consistently denied your incomprehension when directly called out on how you are wrong. The entire time, my point has been "we do not actually know because the current systems are inappropriate".

It does not matter what you throw at 1960s fusion or 1860s steam engines, the former will never work for gridscale power and the latter will never work for personal vehicles because that methodology is inadequate. In the same fashion, 2010s AI methodology will never overthrow media creation.

Perhaps a new methodology will be cracked this decade, perhaps it will be stuck in "Coming Soon" hell for half a century, we do not know because it is contingent on a qualitative change that has not happened, and as such doomsaying about it is premature.
 

Terthna

Professional Lurker
Does AI have to reproduce things with 1:1 accuracy? Or is there a point where it becomes good enough that its flaws simply become eccentricities that differentiate it without impeding its ability to produce something enjoyable?
 

LordsFire

Internet Wizard
I think the really interesting thing that will come is not 'just done by AI' media, but AI drastically reducing the number of people needed to make a large-scale project.

For example, getting good 'skeletal' animation, then having human animators put the 'skin' on the models in ways that don't go deep into uncanny valley. How many man-hours does that save?
 

Bear Ribs

Well-known member
"Steam-powered personal vehicles never would have worked" =/= "steam engines would never work".
Yes, but now you're getting pedantic, because steam-powered vehicles did work and were a huge commercial success. Just as with AI, you were so poorly informed about the technology you were arguing about that you claimed X was impossible when X had already been accomplished.

"Hard limits on blunt transistor packing" =/= "hard limits on all AI".


"This way of doing it cannot do thing" =/= "thing cannot be done".

You have consistently argued against statements that I never made, and have consistently denied your incomprehension when directly called out on how you are wrong. The entire time, my point has been "we do not actually know because the current systems are inappropriate".

It does not matter what you throw at 1960s fusion or 1860s steam engines, the former will never work for gridscale power and the latter will never work for personal vehicles because that methodology is inadequate. In the same fashion, 2010s AI methodology will never overthrow media creation.

Perhaps a new methodology will be cracked this decade, perhaps it will be stuck in "Coming Soon" hell for half a century, we do not know because it is contingent on a qualitative change that has not happened, and as such doomsaying about it is premature.
Yeah, steam cars would never work... oh wait. They were extremely successful, and Jay Leno was once pulled over for driving his steam-powered car down the highway at 76 mph.


You claimed in this thread that a static comic book was beyond AI even after AI movies had already been made. You insisted animation was impossible after we'd already posted animations. And rather than taking the L, you've just moved the goalposts to "Well, I can see artifacts in the movies," as if that's a legitimate criticism when even you yourself have admitted the artifacts have drastically decreased with only a month's advancement.
 

Agent23

Ни шагу назад!
I'd tend to agree with this; it's how other automation systems have gone in the past. Except for the "generic" part, as we're already well past that point. You don't want generic; you want Chuck Norris, or maybe Daisy Ridley's your waifu for some reason?

I just see random photoshopped images of Chuck; in some of them his nose and body look very strange.

Is it just me, or is her head weirdly out of proportion with her body?

In any case, you are seeing the few good results out of what must be hundreds of failures and dozens of days of wasted computation.
Classic survivorship bias (Wikipedia: Survivorship bias).

AI fundamentally has no idea what it is doing; it recognizes patterns where there are none, and has no frame of reference to tell it that a pattern, or part of it, is gibberish.

You are being overly exuberant here IMHO.
 

49ersfootball

Well-known member
A thread for tracking new developments in AI, the latest jobs lost to automation, and where robots and industry are headed.

For an opener, in AI research DeepMind has released AlphaCode, an AI capable of writing code on its own. While that may not sound amazing (code that generates code has existed for a while now), AlphaCode can read a multi-paragraph plain-English description of a problem, parse out the goals from the instructions, and write code that accomplishes each of those goals and solves the problem. This is a huge advance because, fundamentally, AlphaCode could let managers cut out programmers entirely, since it can understand conversational instructions and work out solutions from them.


Across multiple highly competitive programming contests, AlphaCode scored in the top 54%, so about average among highly skilled programmers. It's increasingly looking like "learn to code" is going to go the way of learning to shoe horses as a career option.



Intriguing discussion thread.
 

Bear Ribs

Well-known member
I just see random photoshopped images of Chuck; in some of them his nose and body look very strange.

Is it just me, or is her head weirdly out of proportion with her body?

In any case, you are seeing the few good results out of what must be hundreds of failures and dozens of days of wasted computation.
Classic survivorship bias (Wikipedia: Survivorship bias).

AI fundamentally has no idea what it is doing; it recognizes patterns where there are none, and has no frame of reference to tell it that a pattern, or part of it, is gibberish.

You are being overly exuberant here IMHO.
And as we know, computers are known for being written in stone and not advancing in the slightest. That's why modern gaming rigs have exactly the same performance specs as an Atari 2600.

More seriously, it's not the art that we're worried about. The art is there because it very clearly shows how rapidly AI abilities are growing; it's an example, not the threat itself.

I've gone back through this thread and added threadmarks to a number of things. What people are worried about are the gun-equipped AI dogs, the robots that have already gotten MBAs and passed medical and bar exams, the entirely automated AI restaurants already operating that cook and sell food without a human, etc. It's the proven massive job losses and wage stagnation, coupled with the obvious indications that the super wealthy are closing in on loyal combat robots that can kill any peasants who get uppity and lock down society in a way that's never happened before.
 

Agent23

Ни шагу назад!
And as we know, computers are known for being written in stone and not advancing in the slightest. That's why modern gaming rigs have exactly the same performance specs as an Atari 2600.

More seriously, it's not the art that we're worried about. The art is there because it very clearly shows how rapidly AI abilities are growing; it's an example, not the threat itself.

I've gone back through this thread and added threadmarks to a number of things. What people are worried about are the gun-equipped AI dogs, the robots that have already gotten MBAs and passed medical and bar exams, the entirely automated AI restaurants already operating that cook and sell food without a human, etc. It's the proven massive job losses and wage stagnation, coupled with the obvious indications that the super wealthy are closing in on loyal combat robots that can kill any peasants who get uppity and lock down society in a way that's never happened before.
Actually, hardware specs have been mostly stuck in place for the better part of a decade; there are only so many CPU cycles you can squeeze out of silicon.

Also, do you know what Big O notation is?
What about Amdahl's law?

There are algorithms so time- and space-inefficient that, past a certain point, even 10x the computing capacity will not get you a meaningful increase in speed.
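Amdahl's law makes that diminishing-returns point concrete: if only a fraction p of a workload can be sped up (e.g. parallelized), the overall speedup on n processors is 1 / ((1 - p) + p/n), which is capped at 1/(1 - p) no matter how large n gets. A minimal sketch, with illustrative numbers only:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when a fraction p of the work is
    parallelizable and runs on n processors (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# If 90% of the work parallelizes, the serial 10% dominates:
# the speedup can never exceed 1 / (1 - 0.9) = 10x, so going
# from 100 to 1000 processors barely moves the needle.
for n in (1, 10, 100, 1000):
    print(f"{n:>5} processors -> {amdahl_speedup(0.9, n):.2f}x")
```

The same cap argument is why throwing more hardware at an algorithm with a bad asymptotic complexity (the Big O point above) eventually stops paying off: the part you can't accelerate sets the ceiling.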
 
