Blog Comments posted by chsch
Tiling is a feature of SD, though, and has been a mere checkbox away for a while. PBR is likely going to be messy, but I was quite impressed by a model trained to create depth maps from pictures of trees. In the meantime: SD and Materialize are all you need.
Also, there's a Blender plugin to create tiling textures from within Blender.
-
On 9/9/2022 at 9:22 PM, Josh said:
I also tried training playform.io with all the Gothic textures from Quake 3 Arena. The results are interesting, and clearly have a similar feel to the source material. However, I would not call these images useful for anything:
I do have a fine art education, and honestly, as far as capital-A Art is concerned, these are great.
-
On 9/9/2022 at 6:06 PM, Josh said:
@chsch Can you train centipede diffusion with your own images?
Technically, yes; realistically, no. There was a person who offered custom-trained models, though, which resulted in a few very weird image generators. Ukiyo-e Diffusion was relatively "useful", but she also released a thing called "Textile Diffusion" which would render everything as stitched into fabric. She was doing this for a few hundred bucks if you let her release the model publicly, and a few hundred more if you wanted to keep the model to yourself. She doesn't do that anymore, though; she got hired by Stable Diffusion.
Stable Diffusion, as I found out by now, can do tiled textures, and it can do image-to-image. Meaning, you can use it as a filter that creates variations of a given image, or texture. Say you have one texture of rocks from texturehaven: you can make variations of that, photorealistic or stylized.
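The image-to-image idea above boils down to: noise the source image to a chosen "strength", then let the model denoise the result toward the prompt. Here is a toy NumPy sketch of just the strength knob; the function name is made up for illustration, and the actual denoising model is omitted entirely:

```python
import numpy as np

def noise_to_strength(img, strength, seed=0):
    """Toy illustration of img2img's 'strength' parameter.

    strength=0.0 returns the source image unchanged; strength=1.0 is
    pure noise. A real pipeline would hand the noised image to the
    diffusion model to denoise toward the text prompt.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(img.shape)
    # blend source and noise, then clamp back to valid pixel range
    return np.clip((1.0 - strength) * img + strength * noise, 0.0, 1.0)

# a flat grey "texture": low strength stays close to the source,
# high strength throws most of it away
tex = np.full((64, 64, 3), 0.5)
subtle = noise_to_strength(tex, 0.2)
drastic = noise_to_strength(tex, 0.9)
```

This is why low strength gives you a close variation of your rock texture while high strength mostly just keeps the composition.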
Stable Diffusion also offers a thing called textual inversion, where you can train it on a handful of images to "learn" a concept and assign a variable name to that concept, so you can then ask it to make images of 'variable name' in the style of something else, or to create whatever in the style of 'variable name'.
It's not quite training a whole model, and I haven't tried out how well it works. But right now, the only thing that makes this not absolutely perfect for the job is that generating images larger than 768x768 takes more VRAM than Google Colab offers me.
-
I've been playing with these for a few months now ... these image generators will put a lot of people out of a job. Including me, to some extent.
Anyway: DALL-E and Midjourney gave me mixed results, and I got threatened with a ban for trying to get some gory textures. Stable Diffusion is the new kid in town. It can be run locally, on Colab, or on their website, and the code is open source, so you can just bypass the NSFW filter and get all the blood and guts you want. Also boobs, if you are okay with them occasionally having three nipples, or no nipples. Texture generation has been disappointing so far, though.
But then there's Centipede Diffusion, a predecessor of Stable Diffusion. It runs on Colab, quickly creates a bunch of small images, then uses those as input images for the diffusion, and finally does upscaling. Here, you can actually get very good textures. You'll have to make them tileable afterwards, and the process is very slow, but since it doesn't run on your GPU but on one of Google's, that hardly matters. It's also much easier to control the style: if you really want a specific quality in your image, you can ask for that, though it helps to know the specific painterly qualities of individual artists. Here's some meat and fat and skin, with the painterly qualities of a bunch of 19th and 16th century painters.
[image removed]
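On the "make them tileable afterwards" step: the usual manual trick is to offset the image by half its size (so the hard seams move to the middle, and the new borders wrap seamlessly by construction), then hide the relocated seam. A rough NumPy sketch of that trick using a cross-fade; this is a hypothetical helper, not code from any of the tools mentioned:

```python
import numpy as np

def make_tileable(img):
    """Return a seamlessly tiling version of img (H x W x C floats).

    Blends the image with a half-size-offset copy of itself: at the
    borders the offset copy dominates (its borders wrap seamlessly),
    in the centre the original dominates, hiding the offset copy's
    relocated seam.
    """
    h, w = img.shape[:2]
    shifted = np.roll(img, (h // 2, w // 2), axis=(0, 1))
    # triangular weights: 0 at the borders, ~1 in the centre
    wy = 1.0 - np.abs(np.linspace(-1.0, 1.0, h))
    wx = 1.0 - np.abs(np.linspace(-1.0, 1.0, w))
    alpha = np.minimum.outer(wy, wx)[..., None]  # H x W x 1
    return alpha * img + (1.0 - alpha) * shifted

# a horizontal gradient tiles terribly on its own; after the blend,
# the left/right and top/bottom edges line up
grad = np.tile(np.linspace(0.0, 1.0, 64)[None, :, None], (64, 1, 3))
tiled = make_tileable(grad)
```

The cross-fade does soften detail near the borders, which is why tools like Materialize (or a quick clone-stamp pass) usually give nicer results than this naive blend.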
AI-Generated Game Textures and Concept Art
in Development Blog
A blog by Josh in General
Posted
It's there in the AUTOMATIC1111 webui, together with a LOT of other options.