Everything posted by chsch

  1. It's there in the AUTOMATIC1111 webui, together with a lot of other options.
  2. Tiling is a feature of SD, though, and has been a mere checkbox away for a while (see the tiling sketch after this list). PBR is likely going to be messy, but I was quite impressed by a model trained to create depth maps from pictures of trees... In the meantime, SD and Materialize are all you need. Also, there's a Blender plugin to create tiling textures from within Blender.
  3. I do have a fine art education, and honestly, as far as capital A Art is concerned, these are great.
  4. Technically, yes; realistically, no. There was a person who offered custom-trained models, though, which resulted in a few very weird image generators. Ukiyo-e Diffusion was relatively "useful", but she also released a thing called "Textile Diffusion" which would render everything as if stitched into fabric. She was doing this for a few hundred bucks if you let her release the model publicly, and a few hundred more if you wanted to keep the model to yourself. She doesn't do that anymore, though; she got hired by the people behind Stable Diffusion. Stable Diffusion, as I found out by now, can do tiled textures, and it can do image-to-image, meaning you can use it as a filter that creates variations of a given image or texture. Say you have one rock texture from Texture Haven: you can make variations of that, photorealistic or stylized (see the img2img sketch after this list). Stable Diffusion also offers a thing called textual inversion, where you train it on a handful of images to "learn" a concept and assign a variable name to that concept, so you can then ask it to make images of 'variable name' in the style of something else, or to create whatever in the style of 'variable name'. It's not quite training a whole model, and I haven't tried how well it works, but right now the only thing keeping this from being absolutely perfect for the job is that generating images larger than 768x768 takes more VRAM than Google Colab offers me.
  5. I've been playing with these for a few months now... these image generators will put a lot of people out of a job, including me, to some extent. Anyway: DALL-E and Midjourney gave me mixed results, and I got threatened with a ban for trying to get some gory textures. Stable Diffusion is the new kid in town; it can be run locally, on Colab, or on their website, and the code is open source, so you can just bypass the NSFW filter and get all the blood and guts you want. Also boobs, if you are okay with them occasionally having three nipples, or none. Texture generation has also been disappointing so far. But then there's Centipede Diffusion, a predecessor of Stable Diffusion: it runs on Colab, quickly creates a bunch of small images, uses those as input images for the diffusion, and finally does upscaling. Here you can actually get very good textures. You'll have to make them tileable afterwards (see the offset-and-blend sketch after this list), and the process is very slow, but since it doesn't run on your GPU but on one of Google's, that's bearable. It's also much easier to control the style if you really want a specific quality in your image; it helps to know the specific painterly qualities of individual artists, though, so you can ask for them. Here's some meat and fat and skin, with the painterly qualities of a bunch of 19th and 16th century painters. [image removed]
  6. That would be a nice feature to have: getting the lighting from the backface to check whether something is lit from behind or in shadow. But that's translucency, and still not what subsurface scattering is actually meant to do. SSS simulates light bouncing under the surface and exiting as reflected light, only not reflected at the surface but somewhere deeper, meaning it's a value that depends on a different fragment being lit (see the screen-space blur sketch after this list). Here's a rendering with Blender's Eevee of a cube with SSS on, hovering over a plane without SSS. There's translucency along the edge, but the interesting part is where the light cone cuts off the light and you still get that orange-y glow, because some pixels nearby are being lit. Without this, skin looks pasty and dead. You can actually see the banding from a low number of samples (Eevee lets you increase it, but then, Eevee is not designed to hit 60 fps or more).
  7. Yes... I've seen those, that's why I'm here ^-^
  8. Though that would be cool, it's not what I meant, sorry. The volume would be great for transmission, i.e., when light and camera are on opposite sides of an object, but I'm not sure what it would do if light and camera are on the same side, looking at a surface. I tried to find anything about SSS in the Khronos specifications, but only really found that an SSS extension is still upcoming. The problem with SSS, separable or not, is that it is usually done by rendering the lighting to a buffer and then convolving it (see the separable-blur sketch after this list). In a deferred renderer that's not much of an issue, you just sample from the lighting pass a few times, but in a forward renderer it needs to be its own pass, as far as I know. I'm mentioning SSS specifically because Unity's Universal Render Pipeline doesn't come with it at all, and all third-party assets are a bit clunky to use. I wrote my own SSS at some point, but it ended up just as clunky in practice... By the way, these glTF specifications look quite heavy on shader instructions; how is Ultra Engine performing with these heavy PBR shaders?
  9. Will there be subsurface scattering in Ultra Engine? If so, what I'd personally like to see is the option to mix in a texture, rather than just the separable-SSS preintegrated kernel ( https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ ) everyone seems to be using. (Parallax-offset that texture and you've got yourself some really good-looking organic surfaces, particularly in VR, where you can actually see the depth.)
  10. ... I can write shaders, I just don't do it often, because I'm an artist by trade, and anything that doesn't require me to remember syntax greatly speeds things up for the likes of me. I get why game code gets messy quickly with node-based programming, but for shaders I consider it one of the nice conveniences of modern life...
  11. Hi, I'm one of those indie VR developers with at least some experience (the app was commissioned, is in public beta, and just got a second round of funding, yay). One of the main problems I'm having with performance is that the VR market right now is mobile, and clients aren't interested in buying enthusiast gaming PCs when the VR novelty effect can be achieved with a standalone headset like the Quest 2 or the Pico Neo 3, or with older 3DOF HMDs, which can be used with zero introduction to the device itself. And I genuinely hate developing for mobile; tiled GPUs just make it really hard to create anything that doesn't look like it came from 2004 (i.e., no normal maps, no shadows, no post-processing). I'm interested in Ultra Engine and its promises, but I also don't know when or if there will be a PC-VR market anytime soon. Certainly for now, the consumer-facing corporate stuff in particular is all on mobile (and I hate it; PC-VR can be so gorgeous, and mobile VR is a battle for every polygon). Is Ultra Engine going to run on Android anytime soon? And if so, will any of its nice features run on mobile GPUs?
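
A note on the tiling checkbox mentioned in post 2: the usual trick behind it is to switch the model's convolution layers to circular ("wrap-around") padding, so the generated image tiles at its borders. Here is a minimal sketch of that idea, assuming the Hugging Face diffusers library and PyTorch; the checkpoint name and prompt are just examples.

    # Sketch: tiling/seamless generation by switching conv padding to circular.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Wrap-around padding makes the output wrap at its borders.
    for model in (pipe.unet, pipe.vae):
        for module in model.modules():
            if isinstance(module, torch.nn.Conv2d):
                module.padding_mode = "circular"

    image = pipe("seamless cobblestone texture, top-down, photorealistic").images[0]
    image.save("cobblestone_tile.png")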
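
For the image-to-image workflow described in post 4 (using Stable Diffusion as a "filter" that produces variations of an existing texture), here is a minimal sketch, again assuming the Hugging Face diffusers library; the input file name and the prompt are placeholders. The strength parameter controls how far the result may drift from the source image.

    # Sketch: texture variations via Stable Diffusion img2img.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    source = Image.open("rocks.png").convert("RGB").resize((512, 512))

    # strength ~0.3 stays close to the source, ~0.8 allows much freer variations
    result = pipe(
        prompt="mossy rock wall, stylized hand-painted texture",
        image=source,
        strength=0.55,
        guidance_scale=7.5,
    ).images[0]
    result.save("rocks_variation.png")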
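
Making a generated texture tileable afterwards (post 5) can be done crudely by blending the image with a copy of itself offset by half its size, weighting each pixel by its distance to the border: the offset copy contributes seam-free pixels at the borders, the original dominates in the middle. It ghosts fine detail a little, but it removes the seams. A sketch with NumPy and Pillow; file names are placeholders.

    # Sketch: crude "make tileable" pass by blending with a half-offset copy.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("texture.png").convert("RGB")).astype(np.float32)
    h, w, _ = img.shape

    # Weight each pixel by its distance to the nearest border (largest in the middle).
    wy = np.minimum(np.arange(h), np.arange(h)[::-1]) + 1.0
    wx = np.minimum(np.arange(w), np.arange(w)[::-1]) + 1.0
    weight = np.outer(wy, wx)[..., None]

    # The half-offset copy has its seams in the middle and clean pixels at the borders.
    shifted = np.roll(img, (h // 2, w // 2), axis=(0, 1))
    shifted_weight = np.roll(weight, (h // 2, w // 2), axis=(0, 1))

    blended = (img * weight + shifted * shifted_weight) / (weight + shifted_weight)
    Image.fromarray(blended.astype(np.uint8)).save("texture_tileable.png")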
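
To make the point from post 6 about SSS depending on neighbouring fragments being lit a bit more concrete: a common screen-space approximation convolves the diffuse lighting buffer with a diffusion-profile-like kernel, rejecting samples whose depth indicates a different surface, so light received by one pixel bleeds into nearby shadowed pixels. This is a NumPy sketch of the general technique, not of what Eevee does internally; the buffer layout and the Gaussian falloff are assumptions.

    # Sketch: screen-space SSS approximation - lit pixels bleed into nearby
    # pixels of (roughly) the same surface.
    import numpy as np

    def sss_blur(diffuse_light, depth, radius_px=8, falloff=2.0, depth_tol=0.01):
        """diffuse_light: (H, W, 3) lit diffuse buffer; depth: (H, W) linear depth."""
        out = np.zeros_like(diffuse_light)
        weight_sum = np.zeros(depth.shape + (1,), dtype=np.float32)
        for dy in range(-radius_px, radius_px + 1):
            for dx in range(-radius_px, radius_px + 1):
                dist = np.hypot(dx, dy)
                if dist > radius_px:
                    continue
                k = np.exp(-(dist / falloff) ** 2)   # stand-in diffusion profile
                light_s = np.roll(diffuse_light, (dy, dx), axis=(0, 1))
                depth_s = np.roll(depth, (dy, dx), axis=(0, 1))
                same_surface = (np.abs(depth_s - depth) < depth_tol)[..., None]
                w = k * same_surface                 # reject other surfaces
                out += light_s * w
                weight_sum += w
        return out / np.maximum(weight_sum, 1e-6)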
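
And on the "render the lighting to a buffer and convolve it" point from post 8: the "separable" in separable SSS means the 2D convolution above is replaced by a 1D horizontal pass followed by a 1D vertical pass, which is what makes it cheap enough for real time. A minimal sketch of just that structure, with the depth/normal rejection left out for brevity.

    # Sketch: the separable (two-pass) form of the screen-space blur.
    import numpy as np

    def separable_sss(diffuse_light, radius_px=8, falloff=2.0):
        offsets = np.arange(-radius_px, radius_px + 1)
        kernel = np.exp(-(offsets / falloff) ** 2)
        kernel /= kernel.sum()

        def blur(buf, axis):
            out = np.zeros_like(buf)
            for o, k in zip(offsets, kernel):
                out += k * np.roll(buf, o, axis=axis)
            return out

        return blur(blur(diffuse_light, axis=1), axis=0)  # horizontal, then vertical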