Skallagrim
Well-known member
We have a thread here that discusses (and complains about) films, series, games, franchises et cetera being ruined by "woke" nonsense (at the expense of things like characterisation and plot). I would add that besides the overly politicised tendencies in modern media, the overall tendency towards monolithic megacorps has ruined the media landscape. It results in a banal lack of talent, and an inability to create engaging works. Summarising very briefly: the people who make the financial decisions know nothing about creativity or story-telling, and the 'political activist' types they end up hiring to do the job are frankly even worse.
This thread isn't meant for complaints, however. It's meant for possible solutions -- especially pro-active ones. If the media-corps won't make good stuff... make it yourself. Outperform the bastards, and do it on a shoe-string budget. That's becoming ever more feasible, and it's the possibilities in such a direction that I'd like to discuss here. Specifically: advances in photorealistic 3D animation and machine learning, and the potential they offer when it comes to independent original productions and fan films.
3D animation
Making something like a sci-fi web series or any kind of original sci-fi film on a shoe-string budget has long been a total nightmare. Most of the budget has always gone to special effects (digital or traditional), which has left many a creator with precious little money for anything else. So we typically get first-year acting students at best, and the locations are A) the Southern California desert, B) a generic forest backdrop, or C) some warehouse or similar.
In recent years, however, the ability to generate digital backdrops has evolved at a rapid pace. So has the ability to create life-like 3D creatures: while not flawless, even an indie production can add the sort of aliens that Star Trek: TOS could only ever dream of. Human background characters can similarly be computer-generated with sufficient realism. Then there's space combat. You've got whole environment engines that can give you a solar system to play around in -- to your specifications, with full detail. Where 3D models of even the best studio sci-fi productions once looked fake, creating highly detailed and realistic models is now within the reach of any enthusiastic amateur.
With the advent (and continuous refinement) of stuff like Unreal Engine, Maya/Motionbuilder, Character Creator 3 and Blender, it's no wonder that there's a lot of amazing-looking stuff posted online these days. Short films that look as good as any studio production. I think most efforts remain stuck in a 'web-series short' format because making long-form series is still too expensive right now, and because original projects (unlike fan projects) almost invariably need original modelling. You have to hire people for that, or you have to do it yourself -- and that's a lot of effort. When we consider that you can now create 3D visuals on your home computer that are far superior to the best any major studio could offer twenty years ago, that offers perspectives for the future. Imagine how easy and how cheap it'll be to create realistic 3D graphics for a project a decade from now.
For fan films and similar projects, the horizon is closer still. In many ways, we're already there. Just look at what 3D artists like Ansel Hsiao (fractalsponge), EC Henry and Howard Day can create. Crucially, they use each other's models when that's convenient, and they have repeatedly allowed their use in fan film projects. This stuff literally looks better than anything in the official Star Wars films. (Hsiao's Star Destroyer model, for instance, is widely considered to be the gold standard: superior to the official Lucasfilm one.)
Then there's stuff like what Cinematic Captures is putting out. Also all shorts, but note: all made within the last year. We're looking at the start of something here, not the culmination. Some examples: Order 66 (plus the making of), Not Alone (plus the making of), Shadow of the Republic (plus making of #1 and making of #2). Some behind the scenes looks/teasers at future projects here and here. Oh, and then there's the (WIP) remake of an unfinished Clone Wars scene in realistic style.
Machine learning
Of perhaps even greater relevance to the future of fan films is machine learning. Deepfakes are getting better by the day. DeepFaceLab and Faceswap allow for things in fan projects that have barely been explored. Again, there are plenty of examples on YouTube. Derpfakes has lots of joke stuff for a laugh, but when he puts Harrison Ford in Solo, it looks good. His deepfake revision of CGI Leia in Rogue One is pretty spectacular. Shamook likewise has a host of examples. Sticking to Star Wars, he's likewise put Harrison Ford in Solo, and rendered a much-improved version of Leia in Rogue One. Meanwhile, Stryder HD has, among many other things, done his best to improve Luke's appearance in The Mandalorian using deepfake. The Corridor Crew took another approach to that, and used machine learning to create their own version of a young Luke. Taking a detour from Star Wars: Futuring Machine deepfaked the TOS actors into some scenes of 2009 Trek.
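For anyone curious how tools like DeepFaceLab and Faceswap actually pull this off: as I understand it, they train an autoencoder with one shared encoder and a separate decoder per face; at conversion time you encode the stand-in's face but decode with the target's decoder. Here's a toy, dependency-free sketch of that structure -- the real tools use deep convolutional networks trained on thousands of frames, and every function here is just illustrative shorthand, not any tool's actual API:

```python
# Toy illustration of the shared-encoder / twin-decoder idea behind
# face-swap tools such as DeepFaceLab and Faceswap. Real implementations
# use trained neural networks; here each "network" is a trivial function
# so the data flow is easy to follow.

def encode(face_pixels):
    # Shared encoder: compress a face into a compact "latent" description
    # of expression and pose (stand-in for a convolutional encoder).
    return {"expression": sum(face_pixels) % 7, "pose": len(face_pixels)}

def make_decoder(identity):
    # Each identity gets its own decoder, trained only on that person.
    def decode(latent):
        return (f"{identity} face with expression={latent['expression']}, "
                f"pose={latent['pose']}")
    return decode

decoder_a = make_decoder("Stand-in actor")
decoder_b = make_decoder("Target actor")

def swap(face_pixels):
    # The trick: encode footage of the stand-in (A), but decode with the
    # target's decoder (B). The latent keeps the expression and pose; the
    # decoder supplies the target's appearance.
    return decoder_b(encode(face_pixels))

print(swap([10, 20, 30]))
# -> Target actor face with expression=4, pose=3
```

The point of the shared encoder is exactly why this matters for fan films: the performance (expression, pose) comes from whoever you can afford to film, and the appearance comes from the training data.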
Other ventures show how rapidly machine learning has advanced. There are projects like Deep Nostalgia, which derives animations from a single still photo. Or Time Travel Rephotography, which isn't just used to colourise black-and-white photos, but can also take photos from one person at different ages, and then "reconstruct" what that person's face would have looked like at an age in-between. (On YouTube, Deepcaked also experiments with de-aging via machine learning. Examples here, here and here.)
Deepfakes are typically used for faces, but neural networks can create whole-body fakes, too. Check this out. And that's from 2018. It's gotten more advanced since then.
Machine learning isn't limited to the purely visual, either. Voicefakes are equally possible. It's not perfect yet, but with a sufficiently large data-set and multiple refinements of the results to get rid of little hitches, you can create highly convincing imitations of someone's voice. There are experimental projects that can create a pretty good result based on just one sentence in a person's voice. Examples out and about on the internet are typically created for memes, so not perfect. But they're still indicative of the potential. Here's Jordan Peterson reading a copypasta. And then we have Kylo Explains Star Wars and Kylo Explains The Empire Strikes Back.
Sight meets sound again when we train deepfakes to sync lip movement to any recorded text we choose. Yes, that's also possible.
Conclusions and expectations
A lot of the above is still very new. It's still being refined. But the potential is there. Imagine the state of things a decade from now. Imagine the inevitable increase in accessibility and decrease in costs. And imagine all the things I've referenced being combined; jointly applied to a project. Suppose you're unhappy with Star Wars under Disney. Suppose you'd have preferred films based on the old Expanded Universe. That wasn't going to happen anyway, because the original actors were significantly too old by 2015. But that's quickly becoming a non-factor. Right now, if you want to see the EU on screen, you only get a fan-made 3D animation of Heir to the Empire and a 2D animation of Dark Empire. While admirable, advances in technology can allow for so much more.
Imagine it, ten years from now or so. I think it'll be possible for a dedicated group of people to create a high-detail, realistic 3D animated series ultimately covering the entire EU. You'd only need a few actors (for motion capture) and voice actors (to record all the lines). Everything else could be done with 3D animation and machine learning.
You could create a highly detailed and accurate-to-life 3D model of (say) Mark Hamill. Have that model perform the motion-captured movement of your actor. Train a neural network based on Hamill's body movements, and apply that to the movement of the 3D model, so that it genuinely performs the chosen movements just like Hamill would. Craft any digital environment, or film the actor in a physical environment of your liking, and edit the model into the scene. (If need be, digitally add elements to the real environment. This is no longer difficult.) Train a highly complex deepfake of Mark Hamill's face, and put that on the 3D model that is already made to resemble him. Have a voice actor record his lines, with most attention going to correctly imitating his cadence. Train a neural network to render any sentence in Hamill's voice, and apply that to the recorded dialogue. The face, the voice and the body movements can all be 'aged up' and 'aged down' as needed. So we can have a 24-year-old Luke if we're creating a film version of Truce at Bakura, a 29-year-old Luke for the Thrawn trilogy, and a 39-year-old Luke for the Hand of Thrawn duology, et cetera.
As far as the space battles go: 3D models superior to the official ones already exist. They'll only become more widely available as time passes.
So, at that point, what's stopping a group of committed fans from just literally putting the entire EU on the screen? Once the technologies discussed here are mature -- which they should be, a decade from now -- it's safe to assume that a feature-length film can be created in this way for a sum counted in the thousands of dollars. Perhaps tens of thousands. A few years back, that kind of money got you a 20-minute fan film set in a generic forest! And costs will continue to drop over time. (Keep in mind: fan projects share assets even now. Once a top-grade '3D Luke model' or well-trained 'Luke deepfake' are ready, they'll be shared. To be used and re-used in dozens of fan projects. Down the line, most of the hard work will have been done -- gradually, step-by-step -- by the pioneers. The masses will follow.)
When we reach that stage... who the fuck needs Disney? Who needs Hollywood at all? In the end, we'll do it ourselves. And we'll do it better.