On ML in Game Dev
Machine learning should not replace human visual artists (or writers and voice actors); instead, it can be a tool in their work.
In the video, my specific example was using Stable Diffusion to build game textures.
For example, in Fallout: New Vegas a handful of world objects have a “clean” and a “dirty” variation, while nearly everything else has only a single texture*.
It takes time and money to make a new texture. From AAA to indie, studios lack the budget and time to hand-make 90 subtle variations of ignorable world objects. By integrating diffusion techniques into the texture workflow, an artist can train a model on a small batch of their own stylized textures and use it to produce a ton more. Something like Stable Diffusion becomes a tool for artists to multiply their work rather than a replacement for them - though I am sure some studios will try exactly that and fail. The core idea is not to generate novel textures out of thin air, but to generate many slightly varied textures quickly.
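To make that concrete, here is a minimal sketch of the variation-multiplier idea using the Hugging Face diffusers img2img pipeline. It assumes you have already fine-tuned a checkpoint (e.g. with DreamBooth or LoRA) on a small batch of your own stylized textures; `./my-texture-model` and `chip_bag_diffuse.png` are hypothetical names, not real assets.

```python
# Sketch: multiply one hand-made texture into many subtle variations.
# Assumes the diffusers library and a locally fine-tuned checkpoint.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "./my-texture-model",  # hypothetical fine-tuned checkpoint path
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("chip_bag_diffuse.png").convert("RGB").resize((512, 512))

# Low strength keeps the artist's original composition intact; the model
# only perturbs surface detail, giving "the same texture, slightly varied".
variations = [
    pipe(
        prompt="crumpled foil chip bag, game texture",
        image=base,
        strength=0.3,          # how far to drift from the source texture
        guidance_scale=7.5,
        num_inference_steps=25,
    ).images[0]
    for _ in range(90)
]

for i, tex in enumerate(variations):
    tex.save(f"chip_bag_var_{i:02d}.png")
```

The key design choice is the low `strength` value: the artist's texture is the starting point of the denoising process, so the output is a variation of their work rather than something generated from scratch.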
“But what about file size?” ask the conscientious dev and the gamer worried about storage space and bandwidth caps.
Well, depending on how efficient diffusion gets (or how much better consumer Nvidia cards get), textures could be rendered nearly on the fly. Even on my system, lower-res images render almost instantaneously with the right settings.
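As a rough illustration of “nearly on the fly”: distilled single-step models already exist, such as Stability AI's sd-turbo, which generates in one denoising step. This is a sketch under that assumption, not a benchmark, and the prompt is made up.

```python
# Sketch: near-real-time generation with a distilled one-step model.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

tex = pipe(
    "rusty metal barrel, game texture",
    num_inference_steps=1,   # turbo models are distilled for single-step use
    guidance_scale=0.0,      # and are trained without classifier-free guidance
    height=256, width=256,   # low-res is plenty for ignorable world objects
).images[0]
```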
This means each unique car driving by in GTA VI could have slightly different scratches, each time the player walks into a store the wrinkling of the chip bags could look a little different, and while walking around a desert the sand could change over time. And by saving a couple of bytes' worth of generation metadata for a short time, if a character walks out of the store and right back in, the textures can stay exactly the same. A world can feel like it is subtly but genuinely changing over time - while still needing a human artist to draw a few good chip bags in the first place to train the model.
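Here is one way that “couple of bytes of metadata” could work: cache a random seed per object and feed it back into the generator, so re-entering the store reproduces exactly the same textures. The cache, object IDs, and prompt are all illustrative, and seed determinism generally holds on the same hardware and drivers, not necessarily across machines.

```python
# Sketch: per-object seed cache so revisited areas regenerate identically.
import random
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

seed_cache: dict[str, int] = {}  # object_id -> seed, a few bytes each

def texture_for(object_id: str, prompt: str):
    # Reuse the cached seed if present; otherwise roll and remember one.
    seed = seed_cache.setdefault(object_id, random.getrandbits(32))
    gen = torch.Generator(device="cuda").manual_seed(seed)
    # Same seed + same model -> the same texture when the player returns.
    return pipe(
        prompt,
        generator=gen,
        num_inference_steps=1,
        guidance_scale=0.0,
        height=256, width=256,
    ).images[0]

# Walking out of the store and right back in: identical chip bags.
a = texture_for("store_03/chip_bag_07", "crumpled foil chip bag, game texture")
b = texture_for("store_03/chip_bag_07", "crumpled foil chip bag, game texture")
```

Evicting stale entries from the cache after the player leaves the area would keep the memory cost to those few bytes per recently seen object.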
Oh, and it can be a setting - a fun next-gen Nvidia GPU feature. Older systems, or systems without CUDA (did they ever get CUDA emulation to work?), can just use the standard textures - again necessitating good human artists.
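The fallback could be as simple as gating generation on CUDA availability plus an opt-in setting. In this sketch, `texture_for` is from the seed-cache example above, and `load_baked_texture` is a hypothetical loader for the shipped, artist-made assets.

```python
# Sketch: generated textures as an opt-in feature with a baked fallback.
import torch

def load_texture(object_id: str, prompt: str, settings: dict):
    # Only generate if a CUDA GPU exists and the player turned the setting on.
    if torch.cuda.is_available() and settings.get("diffusion_textures", False):
        return texture_for(object_id, prompt)  # from the seed-cache sketch
    # Everyone else gets the standard artist-made texture.
    return load_baked_texture(object_id)       # hypothetical asset loader
```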
This idea really intrigues me, but my remote render PC suffered a drive failure right as I was starting to test it, which is why my next video will be an entire deep dive on the topic once I get back to the state where my GPU lives.
*I know it is an old game, but it is the one I am familiar with, so I am using it as my example.
Note to self - static logo as fixed point for img2img with semi-transparency for blending