Blog Entries posted by Josh

  1. Josh
    The beta branch has been updated. The following changes have been made:
    Rolled beta branch back to release version, with the changes below:
    • Added new FBX converter.
    • Fixed Visual Studio project template debug directory.
    • Fixed Visual Studio project template Windows Platform SDK version problem.
    If everything is okay with this then it will go out on the default branch soon.
  2. Josh
    I finally have a cool screenshot to show you of our new real-time global illumination working.

    Here is a comparison screenshot showing direct lighting only:

    Now there are still lots of small issues to worry about. Right now I am only using a single cone trace. More cones will improve accuracy, but I think light leaking is just always going to be a fact of life with this technique. Still, the results look great, require no precalculation, respond to environment changes, and don't require any setup. Win!
  3. Josh
    Now that we have our voxel light data in a 3D texture, we can generate mipmaps and perform cone step tracing. The basic idea is to cast a ray out from each side of each voxel and use a lower-resolution mipmap for each ray step. We start with mipmap 1 at a distance that is 1.5 texels away from the position we are testing, and then double the distance with each step of the ray. Because we are using linear filtering we don't have to make the sample coordinates line up exactly to a texel center, and it will work fine:
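    To make the stepping pattern concrete, here is a rough sketch of the sample placement in C++ (illustrative only, not the actual shader; the accumulation of light along the ray is omitted):
    // Sketch of the cone step sampling pattern: start 1.5 texels from the test
    // position at mipmap 1, then double the distance (and coarsen the mip) each step.
    struct Vec3 { float x, y, z; };

    void ConeStep(const Vec3& origin, const Vec3& dir, float texelsize, int maxmip)
    {
        float distance = 1.5f * texelsize;
        for (int mip = 1; mip <= maxmip; ++mip)
        {
            Vec3 samplepos = { origin.x + dir.x * distance,
                               origin.y + dir.y * distance,
                               origin.z + dir.z * distance };
            // Sample the lit voxel 3D texture at samplepos with mip level 'mip'.
            // Linear filtering means the coordinate need not align to a texel center.
            distance *= 2.0f;
        }
    }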

    Here are the first results when cone step tracing is applied. You can see light bouncing off the floor and reflecting on the ceiling:

    This dark hallway is lit with indirect lighting:

    There's lots of work left to do, but we can see here the basic idea works.
  4. Josh
    I have successfully transferred lit voxel data into a 3D texture. The texture is now being used to display the lighting at each voxel, and soft edges are appearing due to linear filtering in the texture. To achieve this, I used an OpenGL 4.2 feature that allows you to write values into any arbitrary position in a texture. This could also be used for motion blur or fluid simulations in the future. However, since Mac support for OpenGL only goes up to 4.1, it means we cannot use real-time GI on a Mac unless a separate workaround is written to handle this, or unless a renderer is written using Vulkan / Metal. For now I am going to stick with OpenGL, because it would be too hard to implement an experimental new architecture with an API I don't know much about.
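    The C++ side of that feature looks something like this (a sketch with illustrative names; the GLSL side is shown in the comments):
    #include <GL/glew.h>

    // Bind mip level 0 of a 3D texture so a shader can write to arbitrary voxels.
    // Requires OpenGL 4.2 image load/store; 'voxeltexture' is an illustrative handle.
    void BindVoxelTextureForWriting(GLuint voxeltexture)
    {
        glBindImageTexture(0, voxeltexture, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA8);
        // In the shader:
        //   layout(binding = 0, rgba8) uniform writeonly image3D voxelimage;
        //   imageStore(voxelimage, ivec3(x, y, z), color);
    }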


    Now that we have our lit voxel 3D texture it should be possible to generate mipmaps and begin cone tracing to simulate global illumination.
  5. Josh
    We left off on voxels when I realized the direct lighting needed to be performed on the GPU. So I had to go and implement a new clustered forward renderer before I could do anything else. Well, I did that and now I finally have voxel lighting calculation being performed with the same code that renders lighting. This gives us the data we need to perform cone step tracing for real-time dynamic global illumination.

    The shadows you see here are calculated using the scene shadowmaps, not by raycasting other voxels:

    I created a GPU timer to find out how much time the lighting took to process. On the CPU, a similar scene took 368 milliseconds to calculate direct lighting. On the GPU, on integrated graphics (so I guess it was still the CPU!), this scene took 11.61064 milliseconds to process. With a discrete GPU this difference would increase a lot. So that's great, and we're now at the third step in the diagram below:
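    The timer itself is simple; something along these lines, using a standard OpenGL timer query (a sketch, not the actual engine code):
    #include <GL/glew.h>
    #include <cstdio>

    // Sketch of a GPU timer using a GL_TIME_ELAPSED query around the lighting pass.
    void TimeLightingPass()
    {
        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        // ... issue the voxel direct lighting draw calls here ...
        glEndQuery(GL_TIME_ELAPSED);

        GLuint64 nanoseconds = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &nanoseconds); // waits for the GPU
        printf("Lighting took %.5f ms\n", nanoseconds / 1000000.0);
        glDeleteQueries(1, &query);
    }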

    Next we will move this data into a downsampled cube texture and start performing the cone step tracing that gives us fast real-time GI.
  6. Josh
    Here are the results of the Summer Games Tournament. Make sure you update your mailing address, because posters are being sent out immediately!
    Invade
    The arcade classic "Space Invaders" has been re-imagined with modern graphics and cute 3D aliens!
    Constanta
    Constanta is an abstract game about capturing cubes. Make sure you read the instructions!
    Death Rooms
    Procedurally generated levels and a lot of interesting rooms make this FPS worth trying. Watch out for traps!
     
  7. Josh
    After three days of intense work, I am proud to show you this amazing screenshot:

    What is so special about this image? I am now successfully uploading voxel data to the GPU and writing lighting into another texture, using a texture buffer object to store the voxel positions as unsigned char uvec3s. The gray color is the ambient light term coming from the Blinn-Phong shading used in the GI direct light calculation. The next step is to create a light grid for the clustered forward renderer so that each light can be added to the calculation. Since voxel grids are cubic, I think I can just use the orthographic projection method to split lights up into different cells. In fact, the GI direct light shader actually includes the same lighting shader file that all the model shaders use. Once I have that done, that will be the direct lighting step, and then I can move on to calculating a bounce with cone step tracing.
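    For anyone curious, the texture buffer setup looks roughly like this (a sketch; buffer textures have no three-component 8-bit format, so this assumes a padding byte per voxel, and the names are illustrative):
    #include <GL/glew.h>
    #include <vector>
    #include <cstdint>

    // Sketch: store voxel positions in a texture buffer object the shader can fetch from.
    GLuint CreateVoxelPositionBuffer(const std::vector<uint8_t>& positionsRGBA)
    {
        GLuint buffer, texture;
        glGenBuffers(1, &buffer);
        glBindBuffer(GL_TEXTURE_BUFFER, buffer);
        glBufferData(GL_TEXTURE_BUFFER, positionsRGBA.size(), positionsRGBA.data(), GL_STATIC_DRAW);

        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_BUFFER, texture);
        glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8UI, buffer); // fourth component is padding
        // GLSL: uniform usamplerBuffer voxelpositions;
        //       uvec3 pos = texelFetch(voxelpositions, index).xyz;
        return texture;
    }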
    Clustered forward rendering, real-time global illumination, and physically-based rendering are all going to come together really nicely, but this is definitely one of the hardest features I have ever worked on!
    Here are a few wacky screenshots from the last few days.
    Why are half my voxels missing?!

    Why is only one texture layer being written to?!

    Ah, finally rendering to six texture layers simultaneously...

  8. Josh
    I have shadow caching working now in Turbo. This feature is already in Leadwerks Game Engine 4. The idea is that static scene geometry should not be redrawn when a dynamic object moves. Imagine a character (6000 polys) walking across a highly detailed room (100,000 polys), with one point light in the room. If we mark the scene geometry as static and the character as dynamic, then we can render a shadow map cache of the static scene once. When the character moves, the static cache is copied into the rendering buffer, and then the character is drawn on top of that, instead of re-rendering the entire scene. When used correctly, this will make a huge difference in the amount of geometry the renderer has to draw to update lighting.
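    At the GL level the update step amounts to something like this (a sketch with illustrative names, assuming glCopyImageSubData (GL 4.3) is available to restore the cache; a framebuffer blit would work on older versions):
    #include <GL/glew.h>

    // Sketch of the shadow cache update described above. The static scene is
    // rendered into 'cachetexture' once; each time the light needs updating we
    // restore the cache and redraw only the dynamic objects.
    void UpdateShadowMap(GLuint cachetexture, GLuint shadowtexture, GLsizei size)
    {
        // Copy the cached static-scene shadow data into the active shadow map.
        glCopyImageSubData(cachetexture,  GL_TEXTURE_2D, 0, 0, 0, 0,
                           shadowtexture, GL_TEXTURE_2D, 0, 0, 0, 0,
                           size, size, 1);
        // ... now render only the dynamic objects (the character) into shadowtexture ...
    }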
    Here is my test. The helmet spins, causing the point light shadow to re-draw. The surrounding scene is marked as static and consists of 260,000 polys. By using shadow caching we can reduce scene polys rendered from about 540,000 to 280,000, in this case.

    I actually changed the light type to a point light, which is more demanding since it uses six passes to cover all faces of the cubemap. After performing optimizations, the test runs at 180 FPS with a point light and shadow caching enabled. Without shadow caching it ran at about 118 FPS. This is with Intel integrated graphics, so a discrete card is sure to be much faster.
    I also found that variance shadow maps and multisampled shadows do make a big difference in performance on Intel graphics (about half the framerate with 4X MSAA VSMs), but I don't think they will make any difference on a high-end card.
    There is still a bit of an issue with shadow updates syncing with the rendering thread, but all in all it was a good day's work.
     
  9. Josh
    An implementation of physically-based rendering in the Khronos GitHub was pointed out to me by @IgorBgz90 and @shadmar. This is very useful because it's an official implementation of PBR that removes a lot of guesswork. Here is my first attempt, which is not using any cubemap reflections:
    And here it is with cubemap reflections added:
    I plan to use the real-time global illumination system to generate the reflection data, instead of using environment probes. This will provide more realistic lighting that responds dynamically to changes in the environment. Thanks again to the devs who showed me this, along with the implementation they were working on.
    Here's one final revision:
     
  10. Josh
    The Model class is being slightly restructured to add support for built-in LOD without the need for separate entities. Previously, a list of surfaces was included in the Model class itself:
    class Model
    {
        std::vector<shared_ptr<Surface> > surfaces;
    };
    This is being replaced with a new LOD class, which allows multiple lists of surfaces containing less detail to be stored in the same model:
    class LOD
    {
        std::vector<shared_ptr<Surface> > surfaces;
    };
    class Model
    {
        std::vector<LOD> lods;
    };
    To iterate through all surfaces in the first LOD, you do this:
    for (int i = 0; i < model->lods[0].surfaces.size(); ++i)
    {
        auto surf = model->lods[0].surfaces[i];
    }
    To iterate through all LODs and all surfaces, you do this:
    for (int n = 0; n < model->lods.size(); ++n)
    {
        for (int i = 0; i < model->lods[n].surfaces.size(); ++i)
        {
            auto surf = model->lods[n].surfaces[i];
        }
    }
    In the future editor, I plan to add a feature that automatically reduces the detail of a mesh and adds the simplified mesh as an additional LOD level, so these can be generated automatically.
    I am not yet sure how this will work with our super-efficient batching system.
  11. Josh
    With the help of @martyj I was able to test out occlusion culling in the new engine. This was a great chance to revisit an existing feature and see how it can be improved. The first thing I found is that determining visibility based on whether a single pixel is visible isn't necessarily a good idea. If small cracks are present in the scene, one single pixel peeking through can cause a lot of unnecessary drawing without improving the visual quality. I changed the occlusion culling code to record the number of pixels drawn, instead of just using a yes/no boolean value:
    glBeginQuery(GL_SAMPLES_PASSED, glquery);
    In OpenGL 4.3, a less accurate but faster option, GL_ANY_SAMPLES_PASSED_CONSERVATIVE (i.e. it might produce false positives), was added, but in my opinion that is a step in the wrong direction.
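    Reading the result back then looks something like this (a sketch; the pixel threshold is illustrative):
    #include <GL/glew.h>

    // Sketch: after drawing an object's bounds inside a GL_SAMPLES_PASSED query,
    // treat the object as visible only if enough pixels were actually rendered.
    bool IsVisible(GLuint glquery, GLuint minpixels)
    {
        GLuint available = 0;
        glGetQueryObjectuiv(glquery, GL_QUERY_RESULT_AVAILABLE, &available);
        if (!available) return true; // result not ready yet; assume visible for now

        GLuint samples = 0;
        glGetQueryObjectuiv(glquery, GL_QUERY_RESULT, &samples);
        return samples >= minpixels; // ignore a single pixel peeking through a crack
    }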
    Because our new clustered forward renderer uses a depth pre-pass, I was able to implement a wireframe rendering mode that works with occlusion culling. Depth data is rendered in the prepass, and then a color wireframe is drawn on top. This allowed me to easily view the occlusion culling results and fine-tune the algorithm to make it perfect. Here are the results:
    As you can see, we have pixel-perfect occlusion culling that is completely dynamic and basically zero-cost, because the entire process is performed on the GPU. Awesome!
  12. Josh
    Lighting is nearly complete, and it is ridiculously fast! There are still some visual glitches to work out, mostly with lights intersecting the camera near plane, but it's nearly perfect. I turned the voxel tree back on to see what the speed was, and to check if it was still working, and I saw this image of the level partially voxelized. The direct lighting shader I am using in the rest of the scene will be used to calculate lighting for each voxel on the GPU, and then bounces will be performed to quickly calculate approximate global illumination. This is fun stuff!

  13. Josh
    A map viewer application is now available for beta subscribers. This program will load any Leadwerks map and let you fly around in it, so you can see the performance difference the new renderer makes. I will be curious to hear what kind of results you see with this:
    The program has not been tested on all hardware yet, and functionality is limited.
  14. Josh
    You can now view detailed sales records of your game assets in Leadwerks Marketplace. First, log into your Leadwerks account and navigate to the Leadwerks Marketplace main page. In the bottom-right, below the categories, a link to your paid files will appear.

    Here you can see a list of all your paid items:

    When you click on an item, you can see a list of people who have purchased it, along with sales dates.

    If you wish to give a free license to any member for any reason, you can do so by clicking the "Generate Purchase" button. A window will pop up where you can type in the member's name and add the item to their account for free.

    These tools give you more control over your game assets and better information on sales.
  15. Josh
    I have map loading working now. The LoadMap() function has three overloads you can use:
    shared_ptr<Map> LoadMap(shared_ptr<World> world, const std::string filename);
    shared_ptr<Map> LoadMap(shared_ptr<World> world, const std::wstring filename);
    shared_ptr<Map> LoadMap(shared_ptr<World> world, shared_ptr<Stream> stream);
    Instead of returning a boolean to indicate success or failure, the LoadMap() function returns a Map object. The Map object gives you a handle to hang onto all the loaded entities so they don't get instantly deleted. When you want to clear the map, you can just set this variable to nullptr/NULL:
    auto map = LoadMap(world, "Maps/start.map");
    map = nullptr; //BOOM!!!
    The "entities" member of the map object gives you a list of all entities loaded in the map:
    auto map = LoadMap(world, "Maps/start.map");
    for (auto entity : map->entities)
    {
        //do something to entity
    }
    If you want to clear a map but retain one of the loaded entities, you just assign it to a new variable, like this. Notice we grab the camera and clear the map, but we can still use the camera:
    auto map = LoadMap(world, "Maps/start.map");
    shared_ptr<Camera> cam;
    for (auto entity : map->entities)
    {
        cam = dynamic_pointer_cast<Camera>(entity);
        if (cam) break;
    }
    map = nullptr; //BOOM!!!
    cam->SetPosition(1, 2, 3); //everything is fine
    Material and shader assignment has gotten simpler. If no material is assigned, a blank one will be auto-generated in the rendering thread. If a material has no shader assigned, the rendering thread will choose one automatically based on what textures are present. For example, if texture slots one and two are filled, then the rendering thread will choose a shader with diffuse and normal maps. In most cases you don't even need to bother assigning a shader to materials. I might even add separate animated and static shader slots, in which case materials would work for animated or non-animated models, and you wouldn't normally even need to specify the shader.
    Shaders now support include directives. By using a pragma statement we can indicate to the engine which file to load in, and the syntax won't trigger an error in Visual Studio Code's syntax highlighter:
    #pragma include Lighting.glsl
    Shader includes allow us to create many different shaders while only storing the complicated lighting code in one file that all the other shaders include. The #line directive is automatically inserted into the shader source at every line, so that the engine can correctly detect which file and line number any errors originated from.
    With this all working, I can now load maps side by side in Leadwerks 4 and in the new renderer and get actual performance benchmarks. Here's the first one, showing the example map "02-FPS Controller.map" from the First-Person Shooter game template. In Leadwerks 4, with Intel HD 4000 graphics, we get 71 FPS. (Yes, vertical sync is disabled).

    And with the new forward renderer we get a massive 400%+ increase in performance:

    I expect the results will vary a little bit across different hardware, but we can see already that on the low-end hardware the new renderer is a massive improvement.
    I plan to get a new build of the beta up soon so that you can try your own maps out and test the difference. Physics and scripts are presently disabled, as these systems need additional work to be usable.
    Oh, and look how much cleaner those shadow edges are!


  16. Josh
    By modifying the spotlight cone attenuation equation I created an area light, with shadow.

    And here is a working box light. The difference is that the box light uses orthographic projection and doesn't have any fading on the edges, since these are only meant to shine into windows.

    If I scale the box light up and place it up in the sky, it kind of looks like a directional light. And it kind of is, except a directional light would either use 3-4 different box lights set at radiating distances from the camera position (cascaded shadow maps), or maybe something different. We have a system now that can handle a large number of different lights, so I can arrange a bunch of box lights in any way I want to cover the ground and make good use of the available texels.

    Here I have created three box lights which are lighting the entire courtyard with good resolution.

    My idea is to create something like the image on the right. It may not look more efficient, but in reality the majority of pixels in cascaded shadow maps are wasted space, because the FOV is typically between 70 and 90 degrees and the stages have to be square. This would also allow the directional light to act more like a point or spot light: only areas of the scene that move have to be updated, instead of drawing the whole scene three extra times every frame. It would also allow the engine to skip areas that don't have any shadow casters in them, like a big empty terrain (when terrain shadows are disabled, at least).

    Spot and area lights use the same basic formula: a 2D shadowmap rendered from a point in space with some direction. I am trying to make a generic texture coordinate calculation by multiplying the global pixel position by the shadow map projection matrix times the inverse light matrix, but so far everything I have tried is failing. If I can get that working, then the light calculation in the shader will only have two possible light types: one for point lights, which use a cube shadowmap lookup, and another branch for lights that use a 2D shadowmap.
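    For reference, the transform chain I am attempting looks like this in glm form (a sketch with my own names; this is the math I am trying to get working, not finished engine code):
    #include <glm/glm.hpp>

    // Transform a world-space position into the light's space, project it, then
    // map the result from [-1,1] clip space into [0,1] texture space.
    glm::vec3 ShadowTexCoord(const glm::vec3& worldposition,
                             const glm::mat4& lightmatrix,      // the light's 4x4 world transform
                             const glm::mat4& shadowprojection) // perspective or orthographic
    {
        glm::vec4 p = shadowprojection * glm::inverse(lightmatrix) * glm::vec4(worldposition, 1.0f);
        p /= p.w; // perspective divide (w is 1.0 for orthographic projections)
        return glm::vec3(p) * 0.5f + 0.5f; // xy = shadowmap coords, z = depth for comparison
    }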
  17. Josh
    Some of you are earning money selling your game assets in Leadwerks Marketplace. This quick article will show you how to request a payout from the store for money you have earned. First, you need to be signed into your Leadwerks account.
    Click the drop-down user menu in the upper right corner of the website header and click on the link that says "Account Balance".

    On the next page you can see your account balance. As long as it is $20 or more you can withdraw the balance into your PayPal account by hitting the "Withdraw Funds" button.

    Now just enter your PayPal email address and press the "Withdraw" button.

    After that, the amount will be deducted from your balance and the withdrawal request will appear in your account history. Shortly afterward you will receive the funds in your PayPal account.

    You can sell your game assets in Leadwerks Marketplace and earn a 70% commission on each transaction.
  18. Josh
    I added spotlights to the forward clustered renderer. It's nothing too special, but it does demonstrate multiple light types working within a single pass.

    I've got all the cluster data and the light index list packed into one texture buffer now. GPU data needs to be aligned to 16 bytes because everything is built around vec4 data. Consequently, some of the code that handles this stuff is really complicated. Here's a sample of some of the code that packs all this data into an array.
    for (auto it = occupiedcells.begin(); it != occupiedcells.end(); it++)
    {
        pos = it->first;
        visibilityset->lightgrid[pos.z + pos.y * visibilityset->lightgridsize.x + pos.x * visibilityset->lightgridsize.y * visibilityset->lightgridsize.x] = visibilityset->lightgrid.size() / 4 + 1;
        Assert((visibilityset->lightgrid.size() % 4) == 0);
        for (int n = 0; n < 4; ++n)
        {
            visibilityset->lightgrid.push_back(it->second.lights[n].size());
        }
        for (int n = 0; n < 4; ++n)
        {
            if (!it->second.lights[n].empty())
            {
                visibilityset->lightgrid.insert(visibilityset->lightgrid.end(), it->second.lights[n].begin(), it->second.lights[n].end());
                //Add padding to make data aligned to 16 bytes
                int remainder = 4 - (it->second.lights[n].size() % 4);
                for (int i = 0; i < remainder; ++i)
                {
                    visibilityset->lightgrid.push_back(0);
                }
                Assert((visibilityset->lightgrid.size() % 4) == 0);
            }
        }
    }
    And the shader is just as tricky:
    //------------------------------------------------------------------------------------------
    // Point Lights
    //------------------------------------------------------------------------------------------
    countlights = lightcount[0];
    int lightgroups = countlights / 4;
    if (lightgroups * 4 < countlights) lightgroups++;
    int renderedlights = 0;
    for (n = 0; n < lightgroups; ++n)
    {
        lightindices = texelFetch(texture11, lightlistpos + n);
        for (i = 0; i < 4; ++i)
        {
            if (renderedlights == countlights) break;
            renderedlights++;
            lightindex = lightindices[i];
            ...
    I plan to add boxlights next. These use orthographic projection (unlike spotlights, which use perspective) and they have a boundary defined by a bounding box, with no edge softening. They have one purpose, and one purpose only: you can place them over windows for indoor scenes, so you can have light coming in a straight line without using an expensive directional light. (The developer who made the screenshot below used spotlights, which is why the sunlight is spreading out slightly.)

    I am considering doing away with cascaded shadow maps entirely and using an array of box lights that automatically rearrange around the camera, or a combination of static and per-object shadows. I hope to find another breakthrough with the directional lights and do something really special. For some reason I keep thinking about the outdoor scenery in the game RAGE, and while I don't think id's M-M-MEGATEXTURES!!! are the answer, CSMs seem like an incredibly inefficient way to distribute texels, and I hope to come up with something better.

    Other stuff I am considering:
    • Colored shadows (that are easy to use).
    • Volumetric lights, either using a light mesh, similar to the way lights work in the deferred renderer, or maybe a full-screen post-processing effect that traces a ray out per pixel and calculates lighting at each step.
    • Area lights (easy to add, but there are a lot of possibilities to decide on). These might be totally unnecessary if the GI system is able to do this, so I'm not sure.
    • IES lighting profiles.
    I really want to find a way to render realistic light refraction, but I can't think of any way to do it other than ray-tracing:

    It is possible the voxel GI system might be able to handle something of this nature, but I think the resolution will be pretty low. We'll see.
    So I think what I will do is add the boxlights, shader includes, diffuse and normal maps, bug test everything, make sure map loading works, and then upload a new build so that subscribers can try out their own maps in the beta and see what the speed difference is.
  19. Josh
    Texture arrays are a feature that allows you to pack multiple textures into a single one, as long as they all use the same format and size. In reality, this is just a convenience feature that packs all the textures into a single 3D texture. It allows things like cubemap lookups with a 3D texture, but the implementation is sort of inconsistent. It would be much better if we were just given 1000 texture units to use. However, texture arrays can be used to pack all scene shadow maps into a single texture, so that they can be rendered in a single pass with the clustered forward renderer.
    The results are great and the speed is very fast. However, there are some limitations. I said early on that my top priority with the design of the new renderer is speed. That means I will make decisions that favor speed over flexibility, and here is a situation where we see that in action. All scene shadow maps need to be packed into a single array texture of fixed size, which means there is a hard upper limit on the total number of shadow-casting lights in the world.
    I've also discovered that my beautiful variance shadow maps use a ton of memory. At maximum quality they use an RGBA 32-bit floating-point format, which means a single 1024x1024 cubemap consumes 96 megabytes! (A standard shadow map at the same resolution uses 24 megabytes of VRAM.) Because all shadows are packed into a single texture, the driver can't even page the data in and out of video memory; if you don't have enough VRAM, you will get an OUT_OF_MEMORY error. So anticipating and handling this issue will be important. Hopefully I can just use appropriate defaults. I think I can cut the size of the VSMs down to 25%, but without the beautiful shadow scattering effect. Because the textures all have to be the same size, it is also impossible to set just one light to use higher resolution settings.
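    Allocation of the fixed-size array looks something like this (a sketch with illustrative names; point lights would use a cube map array instead of a 2D array):
    #include <GL/glew.h>

    // Sketch: allocate one fixed-size texture array holding every scene shadow map.
    // RGBA32F matches the maximum-quality variance shadow maps described above;
    // 'maxshadows' becomes the hard upper limit on shadow-casting lights.
    GLuint CreateShadowMapArray(GLsizei size, GLsizei maxshadows)
    {
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA32F, size, size, maxshadows);
        return texture;
    }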
    If you want speed, I have to build more constraints into the engine. This is the kind of thing I was talking about: I want great graphics and the absolute fastest performance, so that is what I am doing.
    Okay, so with all that information and disclaimers out of the way, I give you the first shot showing multiple lights being rendered with shadows in a single pass in our new forward renderer.

    Here are three lights:

    And here I lowered the shadow map resolution and added 50 randomly placed lights. There are some artifacts and glitches, but it's still a pretty cool shot. All running in real-time, in a single pass:

    Keep in mind this is all before any indirect lighting has been added. The future looks bright!
  20. Josh
    Because variance shadow maps allow us to store pre-blurred shadow maps, they also allow us to take advantage of multisampled textures. MSAA is a technique that renders extra samples around the target pixel and averages the results. This can help bring out fine lines that are smaller than a pixel onscreen, and it also greatly reduces jagged edges. I wanted to see how well this would work for rendering shadow maps, and whether I could reduce the ragged edge appearance that shadow maps are sometimes prone to.
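    Setting up the multisampled render target amounts to something like this (a sketch; GL_RG32F matches the VSM moments format, and the assumption is that the multisampled image is resolved to a regular texture before the blur pass):
    #include <GL/glew.h>

    // Sketch: allocate a 4X multisampled texture to render the variance shadow map into.
    GLuint CreateMSAAShadowTexture(GLsizei size)
    {
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture);
        glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RG32F, size, size, GL_TRUE);
        return texture;
    }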
    Below is the shadow rendered at 1024x1024 with no multisampling and a 3x3 blur:

    Using a 4X MSAA texture eliminates the appearance of jagged edges in the shadow:

    Here they are side by side:

    This is very exciting stuff because we are challenging some of the long-held limitations of real-time graphics.
     
  21. Josh
    Shadows with a constant softness along their edges have always bugged me. Real shadows look like this. Notice the shadow becomes softer the further away it gets from the door frame.

    Here is a mockup of roughly what that shadow looks like with a constant softness around it. It looks so fake!

    How does this effect happen? There's not really any such thing as a light that emits from a single point. The closest thing would be a very small bulb, but even that has volume. Because of this, shadows have a soft edge that gets less sharp the further it is from the occluding object. I think some of this also has to do with photons hitting the edge of the object and scattering a bit as they go past it: the edge of the object catches the photon and knocks it off course.

    We have some customers who need very realistic renderings, ideally as close to a photo as possible, and I wanted to see if I could create this behavior with our variance shadow maps. Here are the results: The shadows are sharp when they start being cast and become more blurry as light is scattered.

    Here's another shot. The shadows actually look real instead of just being blobby silhouettes.

    This is really turning out great!
  22. Josh
    After a couple days of work I got point light shadows working in the new clustered forward renderer. This time around I wanted to see if I could get a more natural look for shadow edges, as well as reduce or eliminate shadow acne. Shadow acne is an effect that occurs when the resolution of the shadow map is too low and incorrect depth comparisons start being made with the lit pixels. By default, any shadow mapping algorithm will look like this, because not every pixel onscreen has an exact match in the shadow map when the depth comparison is made:

    We can add an offset to the shadow depth value to eliminate this artifact:
    However, this can push the shadow back too far, and it's hard to come up with values that cover all cases. This is especially problematic with point lights that are placed very close to a wall. This is why the editor allows you to adjust the light range of each light, on an individual basis.
    I came across a technique called variance shadow mapping. I had seen the paper years ago, but never took the time to implement it because it just wasn't a big priority. It works by writing the depth and depth-squared values into a GL_RG texture (I use 32-bit floating points). The resulting image is then blurred, and the variance of the values can be calculated from the average squared depth stored in the green channel.


    Then we use Chebyshev's inequality to get an average shadow value:
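    In code form, the inequality works out to something like this (a sketch; the variable names are mine):
    #include <algorithm>

    // Chebyshev upper bound used by variance shadow mapping. m1 and m2 are the
    // blurred depth moments sampled from the RG32F shadow map, and t is the
    // depth of the pixel being shaded, in the same space.
    float ChebyshevUpperBound(float m1, float m2, float t)
    {
        if (t <= m1) return 1.0f; // in front of the average occluder: fully lit
        float variance = std::max(m2 - m1 * m1, 0.00001f); // sigma^2 = E[x^2] - E[x]^2
        float d = t - m1;
        return variance / (variance + d * d); // upper bound on the lit fraction
    }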

    So it turns out, statistics is actually good for something useful after all. Here are the results:

    The shadow edges are actually soft, without any graininess or pixelation. There is a black border on the edge of the cubemap faces, but I think this is caused by my calculated cubemap face not matching the one the hardware uses to perform the texture lookup, so I think it can be fixed.
    As an added bonus, this eliminates the need for a shadow offset. Shadow acne is completely gone, even in the scene below with a light that is extremely close to the floor.

    The banding you are seeing is added by the JPEG compression and is not visible in the original render.
    Finally, because the texture filtering is so smooth, shadowmaps look much higher resolution than with PCF filtering. By increasing the light range, I can light the entire scene, and it looks great just using a 1024x1024 cube shadow map.

    VSMs are also quite fast because they only require a single texture lookup in the final pass. So we get better image quality, and probably slightly faster speed. Taking extra time to pay attention to small details like this is going to make your games look great soon!
  23. Josh
    I got the remaining glitches worked out, and the verdict is that clustered forward rendering works great. It has more flexibility than deferred rendering and it performs a lot faster. This means we can use a better material and lighting system and at the same time have faster performance, which is especially great for VR. The video below shows a scene with 50 lights working with fast forward rendering.
    One of the last things I added was switching from a fixed grid size of 16x16x16 to an arbitrary layout that can be set at any time. Right now I have it set to 16x8x64, but I will have to experiment to see what the optimum dimensions are.
    There are a lot of things to add (like shadows!) but I have zero concern about everything else working. The hard part is done, and I can see that this technique works great.
  24. Josh
    In order to get the camera frustum space divided up correctly, I first implemented a tiled forward renderer, which just divides the screen up into a 2D grid. After working out the math with this, I was then able to add the third dimension and make an actual volumetric data structure to hold the lighting information. It took a lot of trial and error, but I finally got it working.

    This screenshot shows the way the camera frustum is divided up into a cubic grid of 16x16x16 cells. Red and green show the XY position, while the blue component displays the depth:
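    The mapping from a pixel to a cell works out to something like this (a sketch assuming a simple linear depth split, with my own names; the engine may distribute the depth slices differently):
    // Sketch: map a pixel and its view-space depth to a cell in the 16x16x16 grid.
    struct CellIndex { int x, y, z; };

    CellIndex GetCell(float pixelx, float pixely, int screenwidth, int screenheight,
                      float viewdepth, float nearplane, float farplane)
    {
        const int gridsize = 16;
        CellIndex cell;
        cell.x = (int)(pixelx / screenwidth  * gridsize);
        cell.y = (int)(pixely / screenheight * gridsize);
        cell.z = (int)((viewdepth - nearplane) / (farplane - nearplane) * gridsize);
        return cell;
    }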

    And here you can see the depth by itself, enhanced for visibility:

    I also added dithering to help hide light banding that can appear in gradients. Click on the image below to view it properly:

    I still have some bugs to resolve, but the technique basically works. I don't have complete performance benchmarks to share yet, but I think this approach is a lot faster than deferred rendering. It also allows much more flexible lighting, so it will work well with the advanced lighting system I have planned.