
AI-Generated Game Textures and Concept Art




Midjourney is an AI art generator you interact with on Discord to make content for your game engine. To use it, first join the Midjourney Discord server and enter one of the "newbie" channels. To generate a new image, just type "/imagine" followed by the keywords you want to use. The more descriptive you are, the better. After a few moments, four different images are shown. You can upscale or create new variations of any of the images the algorithm creates.
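For example, a full prompt might look like this (the keywords here are just an illustration, not from the images below):

```
/imagine prompt: ancient stone dungeon wall, mossy bricks, game texture, dramatic lighting
```

The bot replies in the channel with a 2x2 grid of candidates, along with buttons to upscale (U1-U4) or generate variations (V1-V4) of each one.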


And then the magic begins:


Here are some of the images I "created" in a few minutes using the tool:


I'm really surprised by the results. I didn't think it was possible for AI to demonstrate this level of spatial reasoning. You can clearly see that it has some kind of understanding of 3D perspective and lighting. Small errors, like the misspelling of "Quake" as "Quke", only make it creepier, because they suggest the AI has some deeper level of understanding and isn't just copying and pasting parts of images.

What do you think about AI-generated artwork? Do you have any of your own images you would like to show off? Let me know in the comments below.



Recommended Comments

I've been using DALL-E. Similar concept: type in a description of what you want and it tries to generate that picture. You can also specify what style you want: oil painting, 3D graphic, pixel art, etc. The misspelling you are getting is deliberate. It's there to stop people from creating offensive text, or asking for a famous person to hold a sign saying "I am stupid" or worse.

Below is DALL-E's attempt at an "UltraEngine" title graphic.





The power of these programs is amazing. 

It really will change the way we create art.

Here is a rabbit walking sprite sheet from DALL-E.


Normally this would take ages to create in some paint program; now you just type a one-line sentence.


More startlingly good images. If you could just train it on a set of game textures, the output would be much better:



Midjourney seems to make better artistic images than DALL-E.

I think it's just trained on billions of images, has some kind of image recognition and classification, and is copying and deforming images in a clever way. Maybe I'm wrong.

It's probably also using input from the people generating the images to learn what a good image means.


Anyway, this is good news for indies by the look of it :)

I plan to use Midjourney for a 2D game experiment.


old man sitting on a mountain near the sea with a maelstrom swirling in front of him , providence , cthulhu monster , H P Lovecraft world , the color from the sky




I've been playing with these for a few months now ... these image generators will put a lot of people out of a job. Including me, to some extent.

Anyway: DALL-E and Midjourney gave me mixed results, and I got threatened with a ban for trying to generate some gory textures. Stable Diffusion is the new kid in town; it can be run locally, on Colab, or on their website, and the code is open source, so you can just bypass the NSFW filter and get all the blood and guts you want. Also boobs, if you are okay with them occasionally having three nipples, or none. Texture generation has been disappointing so far, though.

But then there's Centipede Diffusion, a predecessor of Stable Diffusion. It runs on Colab, quickly creates a bunch of small images, uses those as input images for the diffusion, and finally does upscaling. Here you can actually get very good textures - you'll have to make them tileable afterwards, and the process is very slow, but since it runs on one of Google's GPUs rather than yours, that's tolerable. It's also much easier to control the style. If you really want a specific quality in your image, it helps to know the painterly qualities of individual artists, since you can ask for those. Here's some meat and fat and skin, with the painterly qualities of a bunch of 19th and 16th century painters.


[image removed]
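Making a generated texture tileable, as mentioned above, can be automated with the classic offset-and-blend trick: roll the image by half its size (which moves the wrap-around seams to the center) and cross-fade it with the original using a window that is zero at the borders. A minimal grayscale sketch in numpy - the function name is mine, not from any of these tools:

```python
import numpy as np

def make_tileable(img):
    """Offset-and-blend. The rolled copy is seamless across tile borders
    (its edge columns/rows were adjacent pixels in the original), while the
    original is seamless at the center, so cross-fade between them with a
    2-D Hann window that is 0 at the borders and ~1 at the center."""
    h, w = img.shape
    rolled = np.roll(img, (h // 2, w // 2), axis=(0, 1))
    weight = np.outer(np.hanning(h), np.hanning(w))  # 0 at edges, ~1 mid
    return weight * img + (1.0 - weight) * rolled

# Demo: a horizontal gradient tiles badly (left edge 0.0 meets right edge 1.0)
x = np.linspace(0.0, 1.0, 128)
img = np.tile(x, (128, 1))
out = make_tileable(img)
seam_before = np.abs(img[:, 0] - img[:, -1]).max()
seam_after = np.abs(out[:, 0] - out[:, -1]).max()
# the wrap-around discontinuity shrinks from 1.0 to roughly one pixel's worth
```

This only fixes the seams; for textures with strong features you would still retouch the blended center region by hand.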


This is all extremely entertaining, but I've not yet seen a single image actually useful for producing anything, to be honest.


I also tried training playform.io with all the Gothic textures from Quake 3 Arena. The results are interesting, and clearly have a similar feel to the source material. However, I would not call these images useful for anything:



I haven't tried Midjourney yet, but it really interests me. I've seen some really cool concept images made with it. However, I've been playing with DALL-E recently for some basic concept images.

Here are some tests for horror, Lovecraftian, realistic images (I can't remember the actual keywords I inputted):



However, just like @chsch, because I wanted some horror-related images I was threatened with a ban from using DALL-E, even though I didn't ask for gory stuff.


Generative adversarial networks (GANs for short) are one deep learning approach to building this kind of generative model. A GAN is usually made up of two sub-models: a discriminator and a generator, which plays the role of a counterfeiter. The generator produces fake images, and the discriminator must then classify those images as either real or fake. The ultimate goal is for training to converge to a solution where the generator gets so good at producing fake examples that the discriminator can no longer tell them apart from real ones.
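The adversarial loop described above can be sketched in a few dozen lines. This is an illustrative toy, not any production GAN: the "real" data is a 1-D Gaussian, the generator is a linear map of noise, the discriminator is a logistic regression, and the gradients are derived by hand rather than by a deep learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x_fake = wg * z + bg  (should learn the target mean/std)
wg, bg = 1.0, 0.0
# Discriminator: D(x) = sigmoid(wd * x + bd)  (probability x is real)
wd, bd = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(2000):
    real = rng.normal(4.0, 1.25, batch)   # samples from the target distribution
    z = rng.normal(0.0, 1.0, batch)       # noise fed to the generator
    fake = wg * z + bg

    # Discriminator step: ascend  mean log D(real) + mean log(1 - D(fake))
    p_real = sigmoid(wd * real + bd)
    p_fake = sigmoid(wd * fake + bd)
    wd += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    bd += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend  mean log D(fake)  (non-saturating loss)
    p_fake = sigmoid(wd * fake + bd)
    upstream = (1 - p_fake) * wd          # d log D(fake) / d fake
    wg += lr * np.mean(upstream * z)
    bg += lr * np.mean(upstream)

samples = wg * rng.normal(0.0, 1.0, 1000) + bg
# after training, samples.mean() should sit near the target mean of 4.0
```

The same two-step dance scales up to convolutional generators and image data; only the models and the gradient bookkeeping get bigger.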

My master's thesis (which I finally finished a few days ago) explored some pretty interesting areas of supervised learning in the computer vision field. Most datasets are composed of real-world images, and creating them is often expensive and time-consuming: it requires not only camera hardware but also a heavy investment of time to annotate the images (supervised learning for object detection involves labelling images, i.e. drawing bounding boxes around the objects in each image to give the model labelled data to learn from).

Automating the entire process of data collection is the real prize. Using a game engine to render the images, and programming the logic for automatic annotation of objects in a scene, provides an end-to-end approach to data collection. The possibility of simulating abstract imagery within a game engine opens the door to high-quality data collected efficiently for building large-scale datasets. Synthetic data could improve these kinds of models, since we can simulate almost anything we like within a game engine, and feeding the resulting correctly formatted image data into the model could really push the results these types of models can achieve.
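The automatic-annotation idea boils down to something simple: if the engine can render an object's silhouette into a mask (e.g. an object-ID pass), the bounding-box label falls out for free. A minimal numpy sketch, where the mask is a synthetic stand-in for an engine render:

```python
import numpy as np

def bbox_from_mask(mask):
    """Return (x_min, y_min, x_max, y_max) of the nonzero region of a
    binary mask, or None if the mask is empty. In a synthetic-data
    pipeline the mask would come from the engine's object-ID render pass."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Stand-in for a rendered silhouette: an object spanning y=5..24, x=10..39
mask = np.zeros((64, 64), dtype=bool)
mask[5:25, 10:40] = True
box = bbox_from_mask(mask)  # -> (10, 5, 39, 24)
```

Rendering thousands of randomized scenes and running this per object yields a fully labelled detection dataset with zero manual annotation.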

Check out differentiable rendering - that is crazy stuff, and the generalization power is improving over the years.


On 9/9/2022 at 6:06 PM, Josh said:

@chsch Can you train centipede diffusion with your own images?

Technically, yes - but realistically, no. There was a person who offered custom-trained models, though, which resulted in a few very weird image generators. Ukiyo-e Diffusion was relatively "useful", but she also released a thing called "Textile Diffusion" which would render everything as if stitched into fabric. She was doing this for a few hundred bucks if you let her release the model publicly, and a few hundred more if you wanted to keep the model to yourself. She doesn't do that anymore, though - she got hired by the Stable Diffusion people.

Stable Diffusion, as I have found out by now, can do tiled textures, and it can do image-to-image. Meaning you can use it as a filter that creates variations of a given image or texture. Say you have one rock texture from Texture Haven - you can make variations of it, photorealistic or stylized.

Stable Diffusion also offers a thing called textual inversion, where you can train it on a handful of images to "learn" a concept and assign a variable name to that concept, so you can then ask it to make images of 'variable name' in the style of something else, or to create whatever you like in the style of 'variable name'.

It's not quite training a whole model, and I haven't tried out how well it works - but right now, the only thing keeping this from being absolutely perfect for the job is that generating images larger than 768x768 takes more VRAM than Google Colab offers me.

On 9/9/2022 at 9:22 PM, Josh said:

I also tried training playform.io with all the Gothic textures from Quake 3 Arena. The results are interesting, and clearly have a similar feel to the source material. However, I would not call these images useful for anything:



I do have a fine art education, and honestly, as far as capital-A Art is concerned, these are great.
