Blog Comments posted by chsch

  1. On 9/9/2022 at 6:06 PM, Josh said:

    @chsch Can you train centipede diffusion with your own images?

    Technically yes, but realistically no. There was a person who offered custom-trained models, though, which resulted in a few very weird image generators. Ukiyo-e diffusion was relatively "useful", but she also released a thing called "textile diffusion", which rendered everything as if stitched into fabric. She was doing this for a few hundred bucks if you let her release the model publicly, and a few hundred more if you wanted to keep the model to yourself. She doesn't do that anymore, though; she got hired by Stable Diffusion.

    Stable Diffusion, as I've since found out, can do tiled textures, and it can do image-to-image. Meaning, you can use it as a filter that creates variations of a given image or texture. Say you have one rock texture from texturehaven: you can make variations of it, photorealistic or stylized.

    Stable Diffusion also offers a thing called textual inversion, where you can train it on a handful of images to "learn" a concept and assign a variable name to that concept, so you can then ask it to make images of 'variable name' in the style of something else, or to create whatever you want in the style of 'variable name'.

    It's not quite training a whole model, and I haven't tried out how well it works - but right now, the only thing keeping this from being absolutely perfect for the job is that generating images larger than 768x768 takes more VRAM than Google Colab gives me.
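    To make the image-to-image "filter" behaviour above concrete: under the hood, the init image is noised part-way along the diffusion schedule, and only the remaining steps are denoised, so a strength parameter controls how much of the original survives. The sketch below is a toy NumPy illustration of that idea (the function name and the cosine schedule are my own, not Stable Diffusion's actual code):

```python
import numpy as np

def img2img_start(init, strength, num_steps=50, seed=0):
    """Toy sketch of the image-to-image trick: instead of starting
    from pure noise, noise the init image part-way along the schedule
    and denoise only the remaining steps. strength=0 keeps the init
    as-is; strength=1 is equivalent to generating from scratch."""
    rng = np.random.default_rng(seed)
    t = int(round(num_steps * strength))  # steps of noise to add back
    # Illustrative cosine schedule: alpha_bar is 1 at t=0 (no noise)
    # and ~0 at t=num_steps (pure noise).
    alpha_bar = np.cos(0.5 * np.pi * t / num_steps) ** 2
    noise = rng.standard_normal(np.shape(init))
    noisy = np.sqrt(alpha_bar) * init + np.sqrt(1.0 - alpha_bar) * noise
    return noisy, t  # a denoiser would now run t steps on `noisy`
```

    With a middling strength you keep the layout of the source texture but let the model restyle it, which is what makes it usable as a variation filter.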
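    And a toy sketch of the textual-inversion idea above: the generator stays frozen, and the only thing optimized is one new embedding vector that reproduces the example images. The linear "generator" below is purely illustrative; the real method backpropagates through a frozen Stable Diffusion model to learn a new token embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((16, 4))  # frozen "generator": embedding -> image
target = rng.standard_normal(16)  # the concept we want to capture

v = np.zeros(4)                   # the ONE thing trained: a new embedding
lr = 0.01
for _ in range(500):
    err = G @ v - target          # how far G(v) is from the example
    v -= lr * 2.0 * G.T @ err     # gradient step on ||G v - target||^2

# v now acts like a new "word": feeding it to the frozen G reproduces
# the concept as well as a 4-dim embedding can.
loss = float(np.sum((G @ v - target) ** 2))
```

    The point of the exercise: nothing in G changed, so the learned 'word' composes with everything the frozen model already knows.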

  2. I've been playing with these for a few months now ... these image generators will put a lot of people out of a job. Including me, to some extent.

    Anyway: DALL-E and Midjourney gave me mixed results, and I was threatened with a ban for trying to get some gory textures. Stable Diffusion is the new kid in town; it can be run locally, on Colab, or on their website, and the code is open source, so you can just bypass the NSFW filter and get all the blood and guts you want. Also boobs, if you are okay with them occasionally having three nipples, or none. Texture generation has also been disappointing so far.

    But then there's Centipede Diffusion, a predecessor of Stable Diffusion. It runs on Colab, quickly creates a bunch of small images, uses those as input images for the diffusion, and finally does upscaling. Here you can actually get very good textures - you'll have to make them tileable afterwards, and the process is very slow, but since it doesn't run on your GPU but on one of Google's, that's bearable. It's also much easier to control the style: if you really want a specific quality in your image, you can ask for it - it helps to know the specific painterly qualities of individual artists, though. Here's some meat and fat and skin, with the painterly qualities of a bunch of 19th- and 16th-century painters.

     

    [image removed]
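    The "make them tileable afterwards" step can be done programmatically: cross-fade the texture with a half-shifted copy of itself, so the wrap-around edges land on what used to be the continuous middle. A minimal NumPy sketch (the triangular blend window is one of several possible choices, not the only way to do this):

```python
import numpy as np

def crossfade_axis(tex, axis):
    """Blend `tex` with a copy of itself rolled by half along `axis`,
    weighted so the original dominates in the middle and the rolled
    copy dominates at the edges. The edges of the result then come
    from the (continuous) middle of the original, so it wraps cleanly."""
    n = tex.shape[axis]
    x = np.arange(n)
    w = 1.0 - np.abs(2.0 * x / n - 1.0)  # triangular: 0 at edges, 1 mid
    shape = [1] * tex.ndim
    shape[axis] = n
    w = w.reshape(shape)
    shifted = np.roll(tex, n // 2, axis=axis)
    return w * tex + (1.0 - w) * shifted

def make_tileable(tex):
    """Apply the cross-fade along both image axes."""
    tex = np.asarray(tex, dtype=np.float64)
    return crossfade_axis(crossfade_axis(tex, axis=1), axis=0)
```

    The blend does soften detail near the edges, so for a hero texture a manual clone-stamp pass over the rolled seams still looks better, but for quick variation passes this is usually enough.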
