Blog Entries posted by Josh

  1. Josh
There's a discussion on the forum that sort of veered into talking about Khronos' glTF file format specification.
Some of this gave me some ideas for changes in the art pipeline in Turbo Game Engine. I was not feeling very focused today, so I decided to do some easy work and implement a loader class:
    class Loader : public SharedObject
    {
    public:
        std::vector<wstring> extensions;
        virtual bool Reload(shared_ptr<Stream> stream, shared_ptr<SharedObject> o, const int flags = 0) = 0;
    };

    Then I created a TEX texture loader class:
    class TEXTextureLoader : public Loader
    {
    public:
        virtual bool Reload(shared_ptr<Stream> stream, shared_ptr<SharedObject> o, const int flags = 0) override;
    };

    When the engine is initialized it creates a TEXTextureLoader object and saves it in a list of texture loaders:
    void GameEngine::Initialize()
    {
        textureloaders.push_back(make_shared<TEXTextureLoader>());
    }

    And the class constructor itself creates an array of file extensions the loader supports:
    TEXTextureLoader::TEXTextureLoader()
    {
        extensions = { L"tex" };
    }

    When a texture is loaded, the engine looks for a texture loader that matches the extension of the loaded file path.
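    As a rough sketch, that lookup might work something like this. ExtractExt() and ReadFile() here are stand-in names for illustration; only the Loader class and the textureloaders list come from the code above:

    shared_ptr<Texture> LoadTexture(const wstring& path)
    {
        wstring ext = ExtractExt(path); // e.g. L"tex" or L"png"; stand-in helper
        for (auto& loader : textureloaders)
        {
            for (auto& e : loader->extensions)
            {
                if (e != ext) continue;
                auto tex = make_shared<Texture>();
                if (loader->Reload(ReadFile(path), tex)) return tex; // ReadFile is a stand-in
            }
        }
        return nullptr; // no loader registered for this extension
    }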
    At this point, all we have done is restructure the engine to do the same thing it did before, with more code. But now we can add new loaders for other file formats. And that is exactly what I did. You can now load textures directly from PNG files:
    class PNGTextureLoader : public Loader
    {
    public:
        virtual bool Reload(shared_ptr<Stream> stream, shared_ptr<SharedObject> o, const int flags = 0) override;
    };

    I've done the same thing for model file loading as well, although I have not yet added support for any other file types.
    If we can add a C++ loader, we should be able to add one with Lua too. I haven't worked out the details yet, but it could work something like this:
    function LoadModelObj( stream, model )
        while stream:EOF() == false do
            --------------
        end
    end

    AddModelLoader(LoadModelObj)

    So with that, you could download a Lua script that would add support for loading a new file format.
    I think in the new engine we will support all common image formats, including Photoshop PSD, as well as DDS, KTX, and our existing TEX files. Textures loaded from image formats will be less optimal because mipmaps will be generated on loading. Settings like clamping and filter modes will be stored in the associated .meta files, which means these files will start getting published along with your game asset files. This is a change from the Leadwerks 4 way of doing things.
  2. Josh
    I'm back home now, relaxing after a long week away. And what a week it was!
    I got my lecture out of the way on Monday. I felt like I was pretty well prepared, but some acoustical problems in the hall really made it very difficult to concentrate, plus I was pretty nervous about talking in front of about 600 people. I did it though, and hope to do another one next year. The recorded lecture should be available in the GDC vault.
    During my talk, I described my experience as a PC developer moving into mobile, and making a renderer that is cross-platform and scalable. I also showed off a deferred renderer running on the iPad. A couple of interesting things came out of this research. First, I figured out a few details on how to render to a texture. This forms the basis of projected shadows and post-effects on mobile. More importantly, I discovered that mobile hardware is better than PC hardware at rendering to textures! This is what the performance of a deferred and forward renderer looks like on the PC:

    A deferred renderer scales better, but takes an initial hit on performance, which is why Leadwerks 2 has such heavy hardware requirements, and the framerate never goes above 300 or so. It turns out, PowerVR graphics hardware for mobile can render to a texture at no cost. We get the scalability of a deferred renderer, without that initial hit on performance:

    I talked to some engineers with Mali, and they told me their graphics chips work the same way. The implications of this are pretty staggering. It means that per unit processing horsepower, mobile graphics hardware is actually more capable than PC graphics hardware, and AAA graphics on mobile are right around the corner.
    We set up our booth on Tuesday before the show. This was my first time going behind the scenes, so it was a totally different view of the conference. Before the expo floor opened, I asked the Oculus Rift guys if I could try their new toy out, and they let me demo it. I was only disappointed that they used a Mech-style game and the cockpit took up most of my peripheral vision, which eliminated any reason to look around and experience the full VR effect.
    I had a lot of meetings with different graphics hardware manufacturers, got some good technical info, and scored some free hardware for development and testing.
    The response we got from the attendees was really good. When we explained what Leadwerks 3 was, everyone got it right away and was enthusiastic about what they saw. Here's some footage from the show:
    It was a push to get Leadwerks 3 out in time for the GDC, but well worth it. We got the word out about Leadwerks 3, got a lot of positive feedback and a better idea of what our target market needs, and put Leadwerks on the radar of some very big companies in the industry.
    Starting Monday, bug fixes have top priority, and I will start adding some graphical enhancements that will make a big improvement to the renderer.
  3. Josh
    Leadwerks Game Engine 4.4 features an upgrade to the latest version of Newton Dynamics, along with a bunch of new features to enhance physics.
    Kinematic Controller
    The new kinematic controller is a joint that lets you specify a position, rotation (Euler or quaternion), or a 4x4 matrix to orient the body to.  You can set the maximum linear and angular force the joint may use to orient the entity.  This allows you to create a kinematic controller that only affects position, only affects rotation, or one that controls both at once.  In the video below I am using a kinematic controller to create a simple IK system with two hinge joints.  The end effector is controlled by the mouse position, while the base entity stays in place, since it has zero (infinite) mass:
    The kinematic controller provides much more stable collisions than the Entity PhysicsSetPosition() and PhysicsSetRotation() commands, and should be used in place of these.  In fact, these commands will be removed from the documentation and should not be used anymore, although they will be left in the engine to ensure your code continues to work.  The FPS player script will be updated to use a kinematic controller for objects you are holding, which will eliminate the energetic collisions the script currently produces if you pick up a crate and push it into the wall.
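    Here is a rough sketch of how I imagine holding an object with the new commands (listed just below). The crate, window, world, and GetPickPosition() variables are assumptions for illustration, and SetMaxLinearForce is a stand-in name for the force-limit command:

    Joint* holder = Joint::Kinematic(crate, crate->GetPosition());
    holder->SetMaxLinearForce(1000); //stand-in name: limit the force so the crate presses against walls instead of tunneling
    while (window->Closed() == false)
    {
        holder->SetTargetPosition(GetPickPosition(), 0.5); //follow the mouse pick position
        world->Update();
        world->Render();
    }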
    The new joint commands are as follows:
    static Joint* Kinematic(Entity* entity, const Vec3& position);
    virtual void SetTargetMatrix(const Mat4& mat);
    virtual void SetTargetPosition(const float x, const float y, const float z, const float blend = 0.5);
    virtual void SetTargetPosition(const Vec3& pos, const float blend = 0.5);
    virtual void SetTargetRotation(const float pitch, const float yaw, const float roll, const float blend = 0.5);
    virtual void SetTargetRotation(const Vec3& rotation, const float blend = 0.5);
    virtual void SetTargetRotation(const Quat& rotation, const float blend = 0.5);

    For improved consistency in the API, the joint SetAngle function will be renamed SetTargetAngle, but a copy of the old command will remain in the engine:
    virtual void SetTargetAngle(const float angle);

    Joint Friction
    Hinge joints can now accept a friction value to make them more resistant to swinging around.  I used this in the example below to make the joints less "loose", while a kinematic controller positions the green box:
    New Vehicle Model
    Newton 3.14 features a new vehicle model with a realistic simulation of a slip differential.  Power is adjusted to each wheel according to the resistance on each tire.

    Watch closely as the wheels below act just like a real car does when its tires slip:
    The realistic vehicle model gives vehicles a much more visceral and fun feeling.  The new vehicle also uses actual bodies for the tires, instead of convex raycasts, so the sudden bouncing the old vehicles could exhibit if the chassis didn't encompass the tires is eliminated.
    Springs
    Slider and hinge joints now have optional spring behavior you can enable with one command.  Use this to make your own custom suspension system, or anything else you need.
    void SetSpring(const float spring)

    These changes will be available next week on the beta branch on Steam.
  4. Josh
    As I work with the new engine more and more I keep finding new ways it makes life happy and productive.
    Smart Pointers
    I have talked about how great these are at length, but I keep finding new reasons I love them. The behind-the-scenes design has been a lot of fun, and it's so cool to be able to write lines of code like this without any fear of memory leaks:
    LoadSound("Sound/Music/fully_loaded_60.wav")->Play(); What do you think that code does? It plays a sound, keeps it in memory, and then unloads it when the sound finishes playing (assuming it is not loaded anywhere else). Smart Pointers make the new API almost magical to work with, and they don't have the performance overhead that garbage collection would, and they work great with Lua script.
    User Interface
    Leadwerks GUI will be used in our new editor, which allows me to innovate in many new ways. But we're also using Visual Studio Code for the script editor, which gives you a sleek modern scripting environment.

    Better Scene Management
    Cached shadow maps are a feature in Leadwerks 4 that separates geometry into static and dynamic shadow-casting types. Static shadows are rendered into a cache texture. When the shadow updates, only the dynamic objects are redrawn on top of the saved static cache. This requires that you set the light shadow mode to Dynamic|Static|Buffered. In the new engine this will be automatic. By default lights will use a shadow cache, and if the light moves after the first shadow render, the cache will be disabled. Any geometry can be marked as static in the new editor. Static objects are more optimal for lighting, navigation, and global illumination, and will not respond to movement commands. (This can also be used to mark which brushes should get merged when the scene is loaded).
    If you don't explicitly select whether an object in the scene should be static or not, the engine will guess. For example, any object with non-zero mass or a script attached to it should not be automatically marked as static.
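    A sketch of the kind of guess I mean, with hypothetical names (GetMass() and a scripts list), since the real check isn't spelled out here:

    bool GuessStatic(shared_ptr<Entity> entity)
    {
        if (entity->GetMass() != 0.0f) return false; //physics can move it
        if (!entity->scripts.empty()) return false;  //a script might move it
        return true;                                 //safe to cache shadows and merge brushes
    }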
    If you didn't understand any of that, don't worry! Just don't do anything, and your scene will already be running efficiently, because the engine makes intelligent choices based on your game's behavior.
    It's all turning out really nice.
  5. Josh
    I found and fixed the cause of the cubemap seams in variance shadow maps so we now have nice soft seamless shadows.

    I also changed the engine so that point lights use six 2D textures instead of a separate cubemap texture array. This means that all light types are sharing one big 2D array texture, and it frees up one texture slot. I am not sure if I want to have a hard limit on the number of shadow-casting lights in the scene, or if I want to implement a system that moves lights in and out of a fixed number of shadowmap slots.
  6. Josh
    I've got the basic GI algorithm working but it needs a lot of work to be correct. I tend to do very well when the exact outcome is well-defined, but I am not as good at dealing with open-ended "artistic" programming. I may end up outsourcing the details of the GI shader to someone else, but the underlying data management is solid enough that I am not scared of it anymore.
    There are a lot of aspects of the design I'm not scared of anymore. We worked out smart pointers (including Lua integration) and physically-based rendering, and most importantly, the crazy ideas I had for the super-efficient architecture work really well.
    At this point I think I am going to put the GI on hold, since I could play around with that endlessly, and focus on getting a new build out to the beta subscribers. We're going to just use a single skybox for ambient and specular reflections right now, and when it's ready GI and environment probes will provide that. 
    After that I think I will focus on the physics and navigation systems, exposing the entire API to Lua, and getting some of the outsourced work started. There's a few things I plan to farm out:
    • Visual Studio Code Lua debugger
    • GI details
    • Weather system
    • Water and clouds systems

    Everything else is pretty well under my control. This started out as an idea for an impossible design, but everything has fallen into place pretty neatly.
  7. Josh
    I finally have a cool screenshot to show you of our new real-time global illumination working.

    Here is a comparison screenshot showing direct lighting only:

    Now there are still lots of small issues to worry about. Right now I am only using a single cone trace. More cones will improve accuracy, but I think light leaking is just always going to be a fact of life with this technique. Still, the results look great, require no precalculation, respond to environment changes, and don't require any setup. Win!
  8. Josh
    The beta branch has been updated. The following changes have been made:
    • Rolled beta branch back to release version, with changes below.
    • Added new FBX converter.
    • Fixed Visual Studio project template debug directory.
    • Fixed Visual Studio project template Windows Platform SDK version problem.

    If everything is okay with this then it will go out on the default branch soon.
  9. Josh
    Now that we have our voxel light data in a 3D texture we can generate mipmaps and perform cone step tracing. The basic idea is to cast a ray out from each side of each voxel and use a lower-resolution mipmap for each ray step. We start with mipmap 1 at a distance that is 1.5 texels away from the position we are testing, and then double the distance with each step of the ray. Because we are using linear filtering we don't have to make the sample coordinates line up exactly to a texel center, and it will work fine:

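    To make the stepping concrete, here is the loop expressed as C++-style pseudocode; the real version belongs in a shader, and SampleVoxelMip() stands in for a linear-filtered 3D texture fetch at a given mip level:

    struct Vec3 { float x, y, z; };
    struct Vec4 { float r, g, b, a; };

    Vec4 SampleVoxelMip(const Vec3& pos, int mip); //stand-in for a filtered 3D texture fetch

    Vec4 ConeTrace(Vec3 origin, Vec3 dir, float texelsize, int maxmips)
    {
        Vec4 light = { 0, 0, 0, 0 };
        float dist = 1.5f * texelsize; //start 1.5 texels out, at mipmap 1
        for (int mip = 1; mip < maxmips; ++mip)
        {
            Vec3 p = { origin.x + dir.x * dist, origin.y + dir.y * dist, origin.z + dir.z * dist };
            Vec4 s = SampleVoxelMip(p, mip);
            float w = (1.0f - light.a) * s.a; //front-to-back blend: opaque voxels occlude what lies behind
            light.r += s.r * w; light.g += s.g * w; light.b += s.b * w; light.a += w;
            if (light.a >= 1.0f) break; //fully occluded, stop marching
            dist *= 2.0f; //step distance doubles as the mip resolution halves
        }
        return light;
    }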
    Here are the first results when cone step tracing is applied. You can see light bouncing off the floor and reflecting on the ceiling:

    This dark hallway is lit with indirect lighting:

    There's lots of work left to do, but we can see here the basic idea works.
  10. Josh
    I have successfully transferred lit voxel data into a 3D texture. The texture is now being used to display the lighting at each voxel. Soft edges are appearing due to linear filtering in the texture. To achieve this, I used an OpenGL 4.2 feature which allows you to write values into any arbitrary position in a texture. This could also be used for motion blur or fluid simulations in the future. However, since Mac support for OpenGL only goes up to 4.1, it means we cannot use real-time GI on a Mac, unless a separate workaround is written to handle this, or unless a renderer is written using Vulkan / Metal. For now I am going to stick with OpenGL, because it would be too hard to implement an experimental new architecture with an API I don't know much about.
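    The OpenGL 4.2 feature in question is shader image load/store: the 3D texture is bound as a writable image, and the shader can then write to any texel with imageStore(). A minimal sketch of the binding (the image unit and format here are arbitrary choices):

    //Bind the voxel 3D texture so a shader can write to arbitrary texels with imageStore().
    glBindImageTexture(0,            //image unit (arbitrary choice here)
                       voxeltexture, //the 3D texture handle
                       0,            //mip level
                       GL_TRUE,      //layered: expose every Z slice
                       0,            //layer (ignored when layered is true)
                       GL_WRITE_ONLY,
                       GL_RGBA8);    //format of the data being written

    //GLSL side:
    //  layout(rgba8, binding = 0) uniform writeonly image3D voxels;
    //  imageStore(voxels, ivec3(voxelcoord), color);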


    Now that we have our lit voxel 3D texture it should be possible to generate mipmaps and begin cone tracing to simulate global illumination.
  11. Josh
    We left off on voxels when I realized the direct lighting needed to be performed on the GPU. So I had to go and implement a new clustered forward renderer before I could do anything else. Well, I did that and now I finally have voxel lighting calculation being performed with the same code that renders lighting. This gives us the data we need to perform cone step tracing for real-time dynamic global illumination.

    The shadows you see here are calculated using the scene shadowmaps, not by raycasting other voxels:

    I created a GPU timer to find out how much time the lighting took to process. On the CPU, a similar scene took 368 milliseconds to calculate direct lighting. On the GPU, on integrated graphics (so I guess it was still the CPU!), this scene took 11.61064 milliseconds to process. With a discrete GPU this difference would increase a lot. So that's great, and we're now at the third step in the diagram below:

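    For reference, a GPU timer in OpenGL is just a query object wrapped around the work being measured; a minimal sketch, where RenderDirectLighting() is a placeholder for the pass being timed:

    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    RenderDirectLighting(); //the work being measured
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 nanoseconds = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &nanoseconds); //waits for the GPU to finish
    double milliseconds = double(nanoseconds) / 1000000.0;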
    Next we will move this data into a downsampled cube texture and start performing the cone step tracing that gives us fast real-time GI.
  12. Josh
    Here are the results of the Summer Games Tournament. Make sure you update your mailing address, because posters are being sent out immediately!
    Invade
    The arcade classic "Space Invaders" has been re-imagined with modern graphics and cute 3D aliens!
    Constanta
    Constanta is an abstract game about capturing cubes. Make sure you read the instructions!
    Death Rooms
    Procedurally generated levels and a lot of interesting rooms make this FPS worth trying. Watch out for traps!
     
  13. Josh
    After three days of intense work, I am proud to show you this amazing screenshot:

    What is so special about this image? I am now successfully uploading voxel data to the GPU and writing lighting into another texture, using a texture buffer object to store the voxel positions as unsigned char uvec3s. The gray color is the ambient light term coming from the Blinn-Phong shading used in the GI direct light calculation. The next step is to create a light grid for the clustered forward renderer so that each light can be added to the calculation. Since voxel grids are cubic, I think I can just use the orthographic projection method to split lights up into different cells. In fact, the GI direct light shader actually includes the same lighting shader file that all the model shaders use. Once I have that done, that will be the direct lighting step, and then I can move on to calculating a bounce with cone step tracing.
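    A texture buffer object like the one described might be set up roughly as follows. One caveat I'm accounting for: 8-bit buffer texture formats only come in 1-, 2-, and 4-component variants, so this sketch pads each 3-byte position out to 4 bytes (positions is assumed to be a std::vector<unsigned char>):

    GLuint buffer, texture;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_TEXTURE_BUFFER, buffer);
    glBufferData(GL_TEXTURE_BUFFER, positions.size(), positions.data(), GL_STATIC_DRAW);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_BUFFER, texture);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8UI, buffer); //each texel: 4 unsigned bytes

    //GLSL side:
    //  uniform usamplerBuffer voxelpositions;
    //  uvec3 pos = texelFetch(voxelpositions, index).xyz; //the .w byte is padding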
    Clustered forward rendering, real-time global illumination, and physically-based rendering are all going to come together really nicely, but this is definitely one of the hardest features I have ever worked on!
    Here are a few wacky screenshots from the last few days.
    Why are half my voxels missing?!

    Why is only one texture layer being written to?!

    Ah, finally rendering to six texture layers simultaneously...

  14. Josh
    I have shadow caching working now in Turbo. This feature is already in Leadwerks Game Engine 4. The idea is that static scene geometry should not be redrawn when a dynamic object moves. Imagine a character (6000 polys) walking across a highly detailed room (100,000 polys), with one point light in the room. If we mark the scene geometry as static and the character as dynamic, then we can render a shadow map cache of the static scene once. When the character moves, the static cache is copied into the rendering buffer, and then the character is drawn on top of that, instead of re-rendering the entire scene. When used correctly, this will make a huge difference in the amount of geometry the renderer has to draw to update lighting.
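    In pseudocode terms, the per-light update might look like this (all names here are mine, not engine API):

    void UpdateShadowMap(Light& light)
    {
        if (light.staticcachedirty)
        {
            //Render the 100,000-poly room once into the cache texture
            RenderShadow(light.staticcache, light.staticentities);
            light.staticcachedirty = false;
        }
        //Copy the cache instead of re-rendering the whole scene...
        CopyTexture(light.staticcache, light.shadowmap);
        //...then draw only the 6,000-poly character on top
        RenderShadow(light.shadowmap, light.dynamicentities);
    }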
    Here is my test. The helmet spins, causing the point light shadow to re-draw. The surrounding scene is marked as static and consists of 260,000 polys. By using shadow caching we can reduce scene polys rendered from about 540,000 to 280,000, in this case.

    I actually changed the light type to a point light, which is more demanding since it uses six passes to cover all faces of the cubemap. After performing optimizations, the test runs at 180 FPS, with a point light, with shadow caching enabled. Without shadow caching it ran at about 118. This is with Intel integrated graphics, so a discrete card is sure to be much faster.
    I also found that variance shadow maps and multisampled shadows DO make a big difference in performance on Intel graphics (about half the framerate with 4X MSAA VSMs), but I don't think they will make any difference on a high-end card.
    There is still a bit of an issue with shadow updates syncing with the rendering thread, but all in all it was a good day's work.
     
  15. Josh
    An online implementation of physically-based rendering on the Khronos GitHub was pointed out to me by @IgorBgz90 and @shadmar. This is very useful because it's an official implementation of PBR that removes a lot of guesswork. Here is my first attempt, which is not using any cubemap reflections:
    And here it is with cubemap reflections added:
    I plan to use the real-time global illumination system to generate the reflection data, instead of using environment probes. This will provide more realistic lighting that responds dynamically to changes in the environment. Thanks again to the devs who showed me this, along with the implementation they were working on.
    Here's one final revision:
     
  16. Josh
    With the help of @martyj I was able to test out occlusion culling in the new engine. This was a great chance to revisit an existing feature and see how it can be improved. The first thing I found is that determining visibility based on whether a single pixel is visible isn't necessarily a good idea. If small cracks are present in the scene, one single pixel peeking through can cause a lot of unnecessary drawing without improving the visual quality. I changed the occlusion culling mode to record the number of pixels drawn, instead of just using a yes/no boolean value:
    glBeginQuery(GL_SAMPLES_PASSED, glquery);

    In OpenGL 4.3, a less accurate but faster GL_ANY_SAMPLES_PASSED_CONSERVATIVE query (i.e., one that might produce false positives) was added, but this is a step in the wrong direction, in my opinion.
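    Reading the result back, the renderer can then compare the sample count against a small threshold instead of a boolean. A sketch, where DrawBoundingBox() is a stand-in for the occlusion proxy draw and the threshold is an arbitrary example:

    glBeginQuery(GL_SAMPLES_PASSED, glquery);
    DrawBoundingBox(entity); //stand-in for the occlusion proxy draw
    glEndQuery(GL_SAMPLES_PASSED);

    GLuint samples = 0;
    glGetQueryObjectuiv(glquery, GL_QUERY_RESULT, &samples); //blocks until the result is ready
    bool visible = (samples > 16); //a single pixel peeking through a crack won't count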
    Because our new clustered forward renderer uses a depth pre-pass, I was able to implement a wireframe rendering mode that works with occlusion culling. Depth data is rendered in the pre-pass, and then a color wireframe is drawn on top. This allowed me to easily view the occlusion culling results and fine-tune the algorithm to make it perfect. Here are the results:
    As you can see, we have pixel-perfect occlusion culling that is completely dynamic and basically zero-cost, because the entire process is performed on the GPU. Awesome!
  17. Josh
    Lighting is nearly complete, and it is ridiculously fast! There are still some visual glitches to work out, mostly with lights intersecting the camera near plane, but it's nearly perfect. I turned the voxel tree back on to see what the speed was, and to check if it was still working, and I saw this image of the level partially voxelized. The direct lighting shader I am using in the rest of the scene will be used to calculate lighting for each voxel on the GPU, and then bounces will be performed to quickly calculate approximate global illumination. This is fun stuff!

  18. Josh
    A map viewer application is now available for beta subscribers. This program will load any Leadwerks map and let you fly around in it, so you can see the performance difference the new renderer makes. I will be curious to hear what kind of results you see with this:
    The program is not tested with all hardware yet, and functionality is limited.
  19. Josh
    I have map loading working now. The LoadMap() function has three overloads you can use:
    shared_ptr<Map> LoadMap(shared_ptr<World> world, const std::string filename);
    shared_ptr<Map> LoadMap(shared_ptr<World> world, const std::wstring filename);
    shared_ptr<Map> LoadMap(shared_ptr<World> world, shared_ptr<Stream> stream);

    Instead of returning a boolean to indicate success or failure, the LoadMap() function returns a Map object. The Map object gives you a handle to hang onto all the loaded entities so they don't get instantly deleted. When you want to clear the map, you can just set this variable to nullptr/NULL:
    auto map = LoadMap(world, "Maps/start.map");
    map = nullptr; //BOOM!!!

    The "entities" member of the map object gives you a list of all entities loaded in the map:
    auto map = LoadMap(world, "Maps/start.map");
    for (auto entity : map->entities)
    {
        //do something to entity
    }

    If you want to clear a map but retain one of the loaded entities, you just assign it to a new variable like this. Notice we grab the camera, clear the map, but we can still use the camera:
    auto map = LoadMap(world, "Maps/start.map");
    shared_ptr<Camera> cam;
    for (auto entity : map->entities)
    {
        cam = dynamic_pointer_cast<Camera>(entity);
        if (cam) break;
    }
    map = nullptr; //BOOM!!!
    cam->SetPosition(1, 2, 3); //everything is fine

    Material and shader assignment has gotten simpler. If no material is assigned, a blank one will be auto-generated in the rendering thread. If a material has no shader assigned, the rendering thread will choose one automatically based on what textures are present. For example, if texture slots one and two are filled, then the rendering thread will choose a shader with diffuse and normal maps. In most cases, you don't even need to bother assigning a shader to materials. I might even add separate animation and static shader slots, in which case materials could work for animated or non-animated models, and you wouldn't normally even need to specify the shader.
    Shaders now support include directives. By using a pragma statement we can indicate to the engine which file to load in, and the syntax won't trigger an error in Visual Studio Code's syntax highlighter:
    #pragma include Lighting.glsl

    Shader includes allow us to create many different shaders, while only storing the complicated lighting code in one file that all other shaders include. The #line directive is automatically inserted into the shader source at every line, so that the engine can correctly detect which file and line number any errors originated from.
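    A minimal sketch of that preprocessing step, assuming the include path is everything after the pragma (recursive, with #line emitted so error line numbers map back to the parent file):

    #include <fstream>
    #include <string>

    std::string PreprocessShader(const std::string& path)
    {
        std::ifstream file(path);
        std::string source, line;
        int linenumber = 1;
        while (std::getline(file, line))
        {
            if (line.rfind("#pragma include ", 0) == 0)
            {
                source += PreprocessShader(line.substr(16)); //inline the included file
                source += "#line " + std::to_string(linenumber + 1) + "\n"; //resume parent numbering
            }
            else
            {
                source += line + "\n";
            }
            linenumber++;
        }
        return source;
    }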
    With this all working, I can now load maps side by side in Leadwerks 4 and in the new renderer and get actual performance benchmarks. Here's the first one, showing the example map "02-FPS Controller.map" from the First-Person Shooter game template. In Leadwerks 4, with Intel HD 4000 graphics, we get 71 FPS. (Yes, vertical sync is disabled).

    And with the new forward renderer we get a massive 400%+ increase in performance:

    I expect the results will vary a little bit across different hardware, but we can see already that on the low-end hardware the new renderer is a massive improvement.
    I plan to get a new build of the beta up soon so that you can try your own maps out and test the difference. Physics and scripts are presently disabled, as these systems need additional work to be usable.
    Oh, and look how much cleaner those shadow edges are!


  20. Josh
    You can now view detailed sales records of your game assets in Leadwerks Marketplace. First, log into your Leadwerks account and navigate to the Leadwerks Marketplace main page. In the bottom-right, below the categories, a link to your paid files will appear.

    Here you can see a list of all your paid items:

    When you click on an item, you can see a list of people who have purchased it, along with sales dates.

    If you wish to give a free license to any member for any reason, you can do so by clicking the "Generate Purchase" button. A window will pop up where you can type in the member's name and add the item to their account for free.

    These tools give you more control over your game assets and better information on sales.
  21. Josh
    In evaluating possible company names I have come up with the following criteria which I used to choose a name for our new game engine.
    Spelling and Pronunciation
    The name should be unambiguous in spelling. This helps promote word-of-mouth promotion because when someone hears the name for the first time, they can easily find it online. Similarly, the name when read should be unambiguous in pronunciation. This helps the name travel from written to spoken word and back. Can you imagine telling someone else the name of this...establishment...and having them successfully type the name into a web browser?:

    Shorter is Better
    Everything else aside, fewer letters is generally better. Here is a very long company name:

    And here is perhaps the shortest software company name in history. Which do you think is better?

    The Name Should "Pop"
    A good company or product name will use hard consonants like B, T, K, X, and avoid soft sounding letters like S and F. The way a name sounds can actually influence perception of the brand, aside from the name meaning. The name "Elysium", besides being hard to pronounce and spell, is full of soft consonants that sound weak.

    "Blade Runner", on the other hand, starts with a hard B sound and it just sounds good.

    Communicate Meaning
    The name should communicate the nature of the product or company. The name "Uber" doesn't mean anything except "better", which is why the company Uber originally launched as UberCab. Once they got to a certain size it was okay to drop the "cab" suffix, but do you remember the first time you heard of them? You probably thought "what the heck is an Uber?"

    The Leadwerks Brand
    So according to our criteria above, the name Leadwerks satisfies the following conditions:
    The name "pops" and sounds cool. It's not too long. But here's where it falls short:
    • Ambiguity in spelling (Leadworks?)
    • Ambiguity in pronunciation: Leadwerks is pronounced like Led Zeppelin, but many people read it as "Leed-works".
    • The name doesn't mean anything, even if it sounds cool. It's just a made-up word.

    These are the reasons I started thinking about naming the new engine something different.
    New Engine, New Name
    So with this in mind, I set out to find a name for the new engine. I was stumped until I realized that there are only so many words in the English language, and any good name you come up with will invariably have been used previously in some other context, hopefully in another industry or product type. Realizing this gave me more leeway, as I did not have to come up with something completely unique the world has never heard before.
    Our early benchmarks indicate the new engine is a performance monster, with incredible results I did not even dream were possible. Together with the rapid development pipeline of Leadwerks, I knew I wanted to focus on speed. Finally, there was one name I kept coming back to for weeks on end. I was able to obtain a suitable domain name. I am now filing a trademark for use of this name, which requires that I begin using it commercially, which is why I am now revealing the name for the first time...
    Keep scrolling.

    [Image: the reveal of the new name, Turbo]
    How does this name stack up?

    • Unambiguous spelling and pronunciation.
    • It's short.
    • The name "pops".
    • It communicates the defining feature of the product.

    Now think about our goals for the new engine's name. Will people have any trouble remembering this name? Is there any ambiguity about what the product stands for, and the promise it makes? If two developers are at a Meetup group and one of them says "I made this with Turbo", is there any doubt what the promise of this product is, i.e. massive performance?
    The name even works on a subconscious level. Anyone having trouble with their game performance (in other slow engines that aren't Turbo) will naturally wonder how fast it could be running in ours.


    The fact that the name has a positive emotional response for many people and a strong connection to the game industry is a plus.
    Turbo Game Engine is an unambiguous brand name that takes a stand and makes a clear promise of one thing: speed, which is incredibly important in the days of VR and 240 Hz screens.
  22. Josh
    By modifying the spotlight cone attenuation equation I created an area light, with shadow.

    And here is a working box light. The difference here is the box light uses orthographic projection and doesn't have any fading on the edges, since these are only meant to shine into windows.

    If I scale the box light up and place it up in the sky, it kind of looks like a directional light. And it kind of is, except a directional light would either use 3-4 different box lights set at radiating distances from the camera position (cascaded shadow maps) or maybe something different. We have a system now that can handle a large number of different lights, so I can arrange a bunch of box lights in any way I want to cover the ground and make good use of the available texels.

    Here I have created three box lights which are lighting the entire courtyard with good resolution.

    My idea is to create something like the image on the right. It may not look more efficient, but in reality the majority of pixels in cascaded shadow maps are wasted space because the FOV is typically between 70-90 degrees and the stages have to be square. This would also allow the directional light to act more like a point or spot light. Only areas of the scene that move have to be updated instead of drawing the whole scene three extra times every frame. This would also allow the engine to skip areas that don't have any shadow casters in them, like a big empty terrain (when terrain shadows are disabled at least).

    Spot and area lights are just the same basic formula: a 2D shadowmap rendered from a point in space with some direction. I am trying to make a generic texture coordinate calculation by multiplying the global pixel position by the shadow map projection matrix times the inverse light matrix, but so far everything I have tried is failing. If I can get that working, then the light calculation in the shader will only have two possible light types: one for point lights, which use a cube shadowmap lookup, and another branch for lights that use a 2D shadowmap.
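    For reference, the standard formulation of that lookup chains the matrices per pixel. A sketch, with Mat4/Vec3/Vec4 standing in for the engine math types and the variables assumed:

    Vec4 worldpos = Vec4(pixelposition, 1.0f);                 //global pixel position
    Vec4 clip = shadowprojection * (lightmatrix.Inverse() * worldpos); //into light space, then project
    Vec3 shadowcoord = Vec3(clip.x, clip.y, clip.z) / clip.w;  //perspective divide (w == 1 for orthographic boxlights)
    shadowcoord = shadowcoord * 0.5f + 0.5f;                   //NDC [-1,1] -> texture space [0,1]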
  23. Josh
    I added spotlights to the forward clustered renderer. It's nothing too special, but it does demonstrate multiple light types working within a single pass.

    I've got all the cluster data and the light index list packed into one texture buffer now. GPU data needs to be aligned to 16 bytes because everything is built around vec4 data. Consequently, some of the code that handles this stuff is really complicated. Here's a sample of some of the code that packs all this data into an array.
    for (auto it = occupiedcells.begin(); it != occupiedcells.end(); it++)
    {
        pos = it->first;
        visibilityset->lightgrid[pos.z + pos.y * visibilityset->lightgridsize.x + pos.x * visibilityset->lightgridsize.y * visibilityset->lightgridsize.x] = visibilityset->lightgrid.size() / 4 + 1;
        Assert((visibilityset->lightgrid.size() % 4) == 0);
        for (int n = 0; n < 4; ++n)
        {
            visibilityset->lightgrid.push_back(it->second.lights[n].size());
        }
        for (int n = 0; n < 4; ++n)
        {
            if (!it->second.lights[n].empty())
            {
                visibilityset->lightgrid.insert(visibilityset->lightgrid.end(), it->second.lights[n].begin(), it->second.lights[n].end());
                //Add padding to make data aligned to 16 bytes
                int remainder = 4 - (it->second.lights[n].size() % 4);
                for (int i = 0; i < remainder; ++i)
                {
                    visibilityset->lightgrid.push_back(0);
                }
                Assert((visibilityset->lightgrid.size() % 4) == 0);
            }
        }
    }

    And the shader is just as tricky:
    //------------------------------------------------------------------------------------------
    // Point Lights
    //------------------------------------------------------------------------------------------
    countlights = lightcount[0];
    int lightgroups = countlights / 4;
    if (lightgroups * 4 < countlights) lightgroups++;
    int renderedlights = 0;
    for (n = 0; n < lightgroups; ++n)
    {
        lightindices = texelFetch(texture11, lightlistpos + n);
        for (i = 0; i < 4; ++i)
        {
            if (renderedlights == countlights) break;
            renderedlights++;
            lightindex = lightindices[i];
            ...

    I plan to add boxlights next. These use orthographic projection (unlike spotlights, which use perspective) and they have a boundary defined by a bounding box, with no edge softening. They have one purpose, and one purpose only. You can place them over windows for indoor scenes, so you can have light coming in a straight line, without using an expensive directional light. (The developer who made the screenshot below used spotlights, which is why the sunlight is spreading out slightly.)

    I am considering doing away with cascaded shadow maps entirely and using an array of box lights that automatically rearrange around the camera, or a combination of static and per-object shadows. I hope to find another breakthrough with the directional lights and do something really special. For some reason I keep thinking about the outdoor scenery in the game RAGE and while I don't think id's M-M-MEGATEXTURES!!! are the answer, CSM seem like an incredibly inefficient way to distribute texels and I hope to come up with something better.

    Other stuff I am considering
    • Colored shadows (that are easy to use).
    • Volumetric lights, either using a light mesh, similar to the way lights work in the deferred renderer, or maybe a full-screen post-processing effect that traces a ray out per pixel and calculates lighting at each step.
    • Area lights (easy to add, but there are a lot of possibilities to decide on). These might be totally unnecessary if the GI system is able to do this, so I'm not sure.
    • IES lighting profiles.

    I really want to find a way to render realistic light refraction, but I can't think of any way to do it other than ray-tracing:

    It is possible the voxel GI system might be able to handle something of this nature, but I think the resolution will be pretty low. We'll see.
    So I think what I will do is add the boxlights, shader includes, diffuse and normal maps, bug test everything, make sure map loading works, and then upload a new build so that subscribers can try out their own maps in the beta and see what the speed difference is.
  24. Josh
    Some of you are earning money selling your game assets in Leadwerks Marketplace. This quick article will show you how to request a payout from the store for money you have earned. First, you need to be signed into your Leadwerks account.
    Click the drop-down user menu in the upper right corner of the website header and click on the link that says "Account Balance".

    On the next page you can see your account balance. As long as it is $20 or more you can withdraw the balance into your PayPal account by hitting the "Withdraw Funds" button.

    Now just enter your PayPal email address and press the "Withdraw" button.

    After that the withdrawal will be deducted from your balance and the withdrawal request will show in your account history. Shortly after that you will receive the funds in your PayPal account.

    You can sell your game assets in Leadwerks Marketplace and earn a 70% commission on each transaction.