
Ultra Engine testing


Josh

Update is available with support for screen space ray tracing. Try this example:
https://www.ultraengine.com/learn/Camera_SetRayTracing

I don't currently have any way to control what appears in the probe render, so the dragon sometimes appears and sometimes does not. I think it depends on the thread timing right now, which is obviously not what we want. (He starts off hidden and gets shown after one call to world::Render.) But that's not the point right now; the point is that we have real-time reflections with roughness that look great and run fast.

Also fixed a bug where the entity color was sometimes not sent to the shader correctly.

My job is to make tools you love, with the features you want, and performance you can't live without.


The post-effects system is gaining some new capabilities, which I need in order to write a more optimal roughness blur that renders to different mipmap levels. Something like this:

{
    "postEffect":
    {
        "textureBuffers":
        [
            {
                "size": [0.25, 0.25],
                "attachments":
                [
                    {
                        "format": 37,
                        "miplevels": 6
                    }
                ]
            },
            {
                "size": [0.25, 0.25]
            },
            {
                "size": [0.25, 0.25]
            }
        ],
        "subpasses":
        [
            {
                "samplers":
                [
                    {
                        "textureBuffer": 0,
                        "attachment": 0
                    }
                ],
                "target": 0,
                "targetLevel": 1
            }
        ]
    }
}
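The "miplevels": 6 entry implies a chain of render targets that halve in size at each level, which is what "targetLevel" addresses. As a rough illustration (hypothetical helper, not engine API), here is what the dimensions of each level would be for a quarter-resolution buffer:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Compute the dimensions of each mip level for a texture whose base size
// is a fraction of the framebuffer; each level halves, clamped to 1 pixel.
std::vector<std::pair<int, int>> MipChainSizes(int fbWidth, int fbHeight,
                                               float scaleX, float scaleY,
                                               int mipLevels) {
    int w = std::max(1, int(fbWidth * scaleX));
    int h = std::max(1, int(fbHeight * scaleY));
    std::vector<std::pair<int, int>> sizes;
    for (int level = 0; level < mipLevels; ++level) {
        sizes.push_back({w, h});
        w = std::max(1, w / 2);
        h = std::max(1, h / 2);
    }
    return sizes;
}
```

For a 1920x1080 framebuffer at 0.25 scale, the base level would be 480x270 and the sixth level 15x8, which is why a subpass needs to say which level it renders into.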

Eventually, I would like to have a flowgraph design system for the post-processing effects pipeline.


Okay, this is how I want it to work: get rid of the textureBuffers and just read and write textures and mip levels explicitly, with support for multiple render targets. This is only plausible because of Vulkan's dynamic rendering feature:

{
    "postEffect":
    {
        "textures":
        [
            {
                "size": [0.5, 0.5],
                "miplevels": 1,
                "format": 97
            },
            {
                "size": [0.5, 0.5],
                "miplevels": 6,
                "format": 97
            }
        ],
        "subpasses": 
        [
            {
                "samplers": ["basecolor", "normal", "depth"],
                "targets": [0],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSR.frag.spv"
                    }
                }
            },
            {
                "samplers": [1],
                "targets": [1],
                "mipLevel": 1,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": [1],
                "targets": [1],
                "mipLevel": 2,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": [1],
                "targets": [1],
                "mipLevel": 3,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": [1],
                "targets": [1],
                "mipLevel": 4,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": [1],
                "targets": [1],
                "mipLevel": 5,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": ["lastpass", 1],
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRResolve.frag.spv"
                    }
                }
            }         
        ]
    }
}

 


Okay, here is the actual working SSR post-effect, updated. It's very technical, but it allows you to do exactly what you want, including rendering to a specific mipmap, rendering to multiple textures in one subpass, and performing chains of rendering that were not possible before:

{
    "postEffect":
    {
        "textures":
        [
            {
                "size": [0.5, 0.5],
                "format": 97,
                "miplevels": 1
            },
            {
                "size": [0.5, 0.5],
                "format": 97,
                "miplevels": 1
            }          
        ],
        "subpasses": 
        [
            {
                "samplers": [ "PREVPASS", "DEPTH", "NORMAL", "METALLICROUGHNESS", "BASECOLOR" ],
                "colorAttachments": [0],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSR.frag.spv"
                    }
                }
            },
            {
                "samplers": [0, "DEPTH", "NORMAL", "METALLICROUGHNESS"],
                "colorAttachments": [1],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRBlurX.frag.spv"
                    }
                }
            },
            {
                "samplers": [1, "DEPTH", "NORMAL", "METALLICROUGHNESS"],
                "colorAttachments": [0],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRBlurY.frag.spv"
                    }
                }
            },
            {
                "samplers": [ "PREVPASS", 0, "DEPTH", "NORMAL", "METALLICROUGHNESS", "BASECOLOR" ],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRResolve.frag.spv"
                    }
                }
            }                                    
        ]
    }
}
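The SSRBlurX/SSRBlurY pair above is a standard separable blur: the first pass writes a horizontally blurred image into texture 1, and the second pass blurs that vertically back into texture 0. A CPU sketch of the same idea on a grayscale image (helper names are hypothetical, not engine API):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One separable box-blur pass of radius r over a row-major grayscale image.
// horizontal = true blurs along x, false along y; running both passes in
// sequence gives a full 2D blur at far lower cost than a 2D kernel.
std::vector<float> BlurPass(const std::vector<float>& img, int w, int h,
                            int r, bool horizontal) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int t = -r; t <= r; ++t) {
                int sx = horizontal ? x + t : x;
                int sy = horizontal ? y : y + t;
                if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue; // skip samples outside the image
                sum += img[sy * w + sx];
                ++count;
            }
            out[y * w + x] = sum / count;
        }
    }
    return out;
}
```

Ping-ponging between two textures, as the subpasses above do, avoids reading and writing the same attachment within a single pass.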

 


And here is what I wanted to do. Instead of performing a more expensive blur in two steps, this performs a blur and downsample, writing the blurred image into the mipchain so that it can be read in the final subpass using the roughness to determine the mip level to read. This was not possible before:

{
    "postEffect":
    {
        "deferIndirectLighting": true,
        "textures":
        [
            {
                "size": [0.25, 0.25],
                "format": 97,
                "miplevels": 4
            }            
        ],
        "subpasses": 
        [
            {
                "samplers": [ "PREVPASS", "DEPTH", "NORMAL", "METALLICROUGHNESS", "BASECOLOR" ],
                "colorAttachments": [0],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSR.frag.spv"
                    }
                }
            },
            {
                "samplers": [0, "DEPTH", "NORMAL", "METALLICROUGHNESS"],
                "colorAttachments": [0],
                "mipLevel": 1,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": [0, "DEPTH", "NORMAL", "METALLICROUGHNESS"],
                "colorAttachments": [0],
                "mipLevel": 2,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },
            {
                "samplers": [0, "DEPTH", "NORMAL", "METALLICROUGHNESS"],
                "colorAttachments": [0],
                "mipLevel": 3,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRDownsample.frag.spv"
                    }
                }
            },           
            {
                "samplers": [ "PREVPASS", 0, "DEPTH", "NORMAL", "METALLICROUGHNESS", "BASECOLOR" ],
                "mipLevel": 0,
                "shader":
                {
                    "float32":
                    {
                        "fragment": "Shaders/SSRResolve.frag.spv"
                    }
                }
            }                                    
        ]
    }
}
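The resolve pass can then choose a mip level from the material roughness, so rougher surfaces read blurrier reflections. A minimal sketch of that mapping (the linear mapping and function name are assumptions; the actual shader may use a different curve):

```cpp
#include <algorithm>
#include <cassert>

// Map [0,1] roughness onto a mip chain of mipCount levels: roughness 0
// samples the sharp base level, roughness 1 the blurriest mip. The
// fractional result would be used with trilinear filtering (textureLod).
float MipFromRoughness(float roughness, int mipCount) {
    float r = std::clamp(roughness, 0.0f, 1.0f);
    return r * float(mipCount - 1);
}
```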

Right now it's just doing a dumb downsample, so it needs a little more work, but the framerate increased by about 20% and it's a lot more scalable (a bigger blur radius doesn't cost much extra performance):

[screenshot]


Here you can see sharp reflections on the dragon (the orange box is being reflected with SSR) combined with very blurry rough reflections on the floor. This would also work for hardware ray tracing. I wonder what would happen if I tried plugging my voxel data into this system?

[screenshot]


Update:

  • Improved SSR
  • Environment probes and skybox will now appear if SSR is disabled
  • Revised post-effects system. I only updated the SSR effect; no other post-effects will work yet

This sample shows what I was working on: https://www.ultraengine.com/learn/Camera_SetRayTracing

I am very interested now to see if voxels can be used with this approach for specular reflection. I was getting good results with that aspect of it earlier, even when I was rasterizing simple geometry on the CPU. (Note the green tint from DXT compression.)

 


As predicted, the voxelization itself has little impact on framerate. Here it is with SSR disabled and the voxelization rendering each frame. You can't see anything except the environment probe because I am not doing anything with the voxel data yet.

[screenshot]


It's looking pretty good. Once the loading of materials is working again I will test this out with some of my own models.

Small query for you: I'm sick of VS2022; it's always crashing and restarting even on small projects, and IntelliSense stops working correctly after a few hours. I want to use VS2019 again, but I can only compile with the v142 toolset instead of the v143 in 2022. I get one error in doing so:

1>UltraEngine_d.lib(Skeleton.obj) : error LNK2019: unresolved external symbol __std_find_trivial_4 referenced in function "int * __cdecl __std_find_trivial<int,int>(int *,int *,int)" (??$__std_find_trivial@HH@@YAPEAHPEAH0H@Z)

I take it this can't be done if you've compiled Ultra with v143?


Performance of the initial attempt is not bad. This is totally real-time with no latency, so animation and moving objects are no problem.

[screenshot]


And here is what it looks like when it is run through the screen space filter. It flickers a little bit, but looks about the same as SSR except that offscreen stuff gets reflected.

[screenshot]


Screen-space roughness is much more effective than voxel cone step tracing at preventing light leaks. There's no environment probe in use here.

[screenshot]


Here is specular and diffuse reflection. No environment probe is used. You can see the light bounce onto the ceiling, together with the sharp reflections everywhere.

[screenshot]


I have some conclusions:

  • Voxels cannot be used for real-time diffuse light, either with screen space roughness or cone tracing, due to image stability and light leak issues.
  • Voxels can be rasterized each frame but changes to light positions or dynamic objects will cause image stability problems.
  • Dynamic objects can have their own voxel texture that moves with them.
  • The static scene can smoothly blend between voxelization results with some latency, to eliminate image instability / jitter.
  • Voxels can handle real-time specular reflections with good performance.

So you still need environment probes and GI lightmapping, even if voxel reflections are used. Having established this, I think I will put the voxel stuff on hold for now and come back to it later. I think it has good potential, but I am mainly interested in solving these important questions now before anything is released, because they affect the whole design and future development.


We're going to use exponential shadow maps instead of conventional depth or variance shadow maps. VSMs have some bad artifacts, but ESM seems to always work as expected. It's slightly faster than a depth shadow map with a 16x16 filter, and slightly slower at redrawing shadows, since it has to blur the shadow image. The most important thing is that they store depth in a linear float texture and totally eliminate shadow acne, so you won't have to fiddle around with depth bias settings.
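The ESM trick replaces the hard depth comparison with an exponential: the shadow map stores exp(c * occluderDepth), which can be blurred freely, and the receiver multiplies its own exp(-c * receiverDepth) against that sample. A minimal sketch (the constant c = 80 and the function name are illustrative choices, not the engine's actual values):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Exponential shadow map visibility. storedExpDepth is the (possibly
// blurred) exp(c * occluderDepth) value sampled from the shadow map.
// The result is ~1 when the receiver is at or in front of the occluder
// and falls off smoothly behind it, with no depth bias needed.
float EsmVisibility(float storedExpDepth, float receiverDepth, float c = 80.0f) {
    return std::clamp(storedExpDepth * std::exp(-c * receiverDepth), 0.0f, 1.0f);
}
```

Because exp(c * d_occluder) * exp(-c * d_receiver) = exp(c * (d_occluder - d_receiver)), blurring the stored values blurs the shadow test itself, which is what removes acne and hard edges.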


5 hours ago, Genebris said:

So no realtime gi after all 😭

In the initial release I think we should stick to SSR + environment probes, since these are needed no matter what. However, I have determined a lot of useful information that can be used to further develop this.

GI can be divided into two types of light. Diffuse reflection is light coming from every direction, and it does not change with the eye position. A flat white wall with a rough (non-shiny) surface is a good example. Since this light is affected by photons coming from every direction, it is very hard to calculate in real time with any degree of accuracy.

Specular reflection, on the other hand, consists of a cone of photons bouncing off the surface and hitting the viewer's eye. These are the sharp or rough reflections that move with the camera position. A reflective car or softly reflective smooth concrete on a city street are examples of this. This type of light can be calculated more easily because the photons are all coming from one direction, and once the cone angle gets to a certain width, say 35 degrees, it becomes indistinguishable from diffuse light and can be skipped.
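The center of that cone is the mirror direction: the view ray reflected about the surface normal, which is the ray an SSR march or voxel trace would follow. A minimal sketch of that reflection (types and names are illustrative, not engine API):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflect an incident direction I about a unit-length normal N:
// R = I - 2*dot(N, I)*N. Widening a cone of rays around R models roughness.
Vec3 Reflect(const Vec3& I, const Vec3& N) {
    float d = 2.0f * Dot(N, I);
    return { I.x - d * N.x, I.y - d * N.y, I.z - d * N.z };
}
```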

I don't think voxels can handle dynamic diffuse reflections, outside of some carefully constructed demos. There are too many problems with light leaks and artifacts, and I don't think those can ever be solved to make a general-purpose usable solution. There's also the problem that GI only looks good with two light bounces, which adds a lot to the processing cost. GI lightmapping for the low-end and maybe RTX for the high-end are the only solutions for diffuse reflection. Maybe it will be possible to take some of the techniques that are being used for RTX, like temporal filtering, and port those back into the voxel solution, but I can't say yet.

I do think that voxels can handle specular reflection very well. What I have right now can't be released as-is, but with some more work I think it would work very well. This would allow you to walk around your scene and have accurate offscreen reflections everywhere you go, at a reasonable framerate. I think storing a signed distance field could eliminate the blockiness of sharp voxel reflections. The screen space roughness technique I worked out for SSR can be reused for voxels and it actually works better than cone step tracing. You will still need environment probes and possibly GI lightmapping in the future, but it will give you the "magic" of real-time reflections at good framerates. The important thing is I know how everything works now, so it is safe to release the first version without worrying about breaking changes as development continues.

So having figured all that out, should I spend another six weeks on voxel reflections, or should I release the first version before Christmas? I want to release the engine this year.


I had another idea: just trace one ray per voxel each frame and save the results from frame to frame. I believe that would work extremely well for the scene's static geometry, and it would actually handle diffuse GI. I think it would probably solve the image stabilization problems too. But I want to get the basic non-voxel version released first.
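Saving results from frame to frame amounts to an exponential moving average per voxel: each frame's single-ray sample is blended into the stored value, so the solution converges over a few frames while noise is smoothed out. A sketch of the accumulation step (the blend factor and function name are assumptions):

```cpp
#include <cassert>
#include <cmath>

// Blend one new single-ray lighting sample into the stored per-voxel value.
// A small alpha converges slowly but suppresses flicker; alpha = 1 would
// discard all history and use only the newest sample.
float AccumulateSample(float stored, float newSample, float alpha = 0.1f) {
    return stored + alpha * (newSample - stored);
}
```

This is the same temporal-filtering idea mentioned above for RTX denoising, which is why it could also stabilize diffuse GI.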


You are absolutely right that it's better to release now. I just know that other engines have some forms of VXGI, including several third-party open-source plugins, so the idea must not be that bad. I personally wouldn't mind if light propagated over time like water in Minecraft. But this feature is something that really can make your game stand out.


A lot of those implementations are not really useful, or are useful only under very special circumstances. I have looked into them to see what they were doing and it was pretty disappointing. For example, Unigine has a mode that uses a brute-force baked volume texture, and it looks very nice, but when you calculate the memory requirements you realize it's of very limited use. And if you look at the forums of everyone using voxels, including Crytek, the users are all complaining about the resolution and artifacts. Our solution needs to be significantly better than that.

