Blog Entries posted by Josh

  1. Josh
    A "roughness" property has been added in the material editor. This is a simple slider with a value from zero to one. (The default material roughness value is 0.5, so all your existing materials won't turn into mirrors when the update comes). Changing the roughness value will have no visible effect unless an environment probe is visible in the scene, although I could modify the lighting shaders to make this control gloss in the specular calculation:
     

     
    Two new commands have also been added, Material::SetRoughness and Material::GetRoughness.
     
All of the channels in our 4-texture gbuffer are already in use. To avoid allocating an additional texture to store the per-pixel roughness value, I used two flags in the material flags value to make a simple 2-bit float. This is encoded in the geometry pass as follows:

    if (materialroughness >= 0.5)
    {
        materialflags += 32;
        if (materialroughness >= 0.75) materialflags += 64;
    }
    else
    {
        if (materialroughness >= 0.25) materialflags += 64;
    }
     
    The probe shader then decodes it into a "roughness" integer:

    int roughness = 1;
    if ((32 & materialflags) != 0) roughness += 4;
    if ((64 & materialflags) != 0) roughness += 2;
     
    The reflection cubemap lookup is then performed as follows:

    miplevel = max(int(textureQueryLod(texture5, shadowcoord).y), roughness);
    vec4 specular = textureLod(texture5, shadowcoord, miplevel) * specularity * lightspecular;
     
    This allows us to pack the roughness value into the gbuffer while avoiding additional texture memory use. We also have one final unused flag in the material flags value we can allocate in the future for additional functionality. Although this only allows four possible roughness values, I think it is a good solution. Just remember that "rough surface" equals "blurry reflection". The image below corresponds to a low roughness value and the reflections are sharp and clear.
     

     
    Notice that the reflections do not appear on the directly lit spheres, due to the "maximum" blend mode described here. While not technically accurate, this is a good way of dealing with the limitations of 8-bit color until ultrabright 10-bit monitors (or more) become the norm.
     
    Here is a shot using a medium roughness value, around 0.5:
     

     
    Finally, a very rough surface gives indistinct reflections that still look great because they correspond to the surrounding environment:
     


    Shader Updates
    To solve the problem described here, all shaders have been updated so that gbuffer normals are stored in world space instead of camera space. If you are using the beta branch, you must update your project to get the new shaders or lighting will not appear correctly. 
    Any third party shaders must be updated for this change. Most model shaders will have a section of code in the vertex program that looks something like this:

    mat3 nmat = mat3(camerainversematrix[0].xyz, camerainversematrix[1].xyz, camerainversematrix[2].xyz);
    nmat *= mat3(entitymatrix[0].xyz, entitymatrix[1].xyz, entitymatrix[2].xyz);
     
    The camera matrix multiplication can be removed and the code simplified to that below:

    mat3 nmat = mat3(entitymatrix);
     
    Some post-processing shaders retrieve the pixel normal with a piece of code like this in the fragment program:

    vec3 normal = normalize(normaldata.xyz*2.0-1.0);
     
    To update these shaders, first declare a new uniform before the main function:

    uniform mat3 camerainversenormalmatrix;
     
    And multiply the world normal by this to get the screen normal:

    vec3 normal = camerainversenormalmatrix * normalize(normaldata.xyz*2.0-1.0);

    Leadwerks Game Engine 4.1
    The new global illumination feature will be released in Leadwerks Game Engine 4.1. For comparison you can see a screenshot below of the Leadwerks 4.0 renderer, which looks good: 

     
    But the Leadwerks 4.1 render of the same scene looks absolutely fantastic:
     

     
    Until now, directly illuminated surfaces in Leadwerks looked the best; now shaded areas look equally beautiful, or even better. Remaining tasks include testing on AMD and Intel hardware, which I have not done yet. I also have to do more work to selectively render objects in the GI reflection render. Objects that cast dynamic shadows are presently skipped, but I need to re-render all shadows so their shadows aren't visible in the reflection. I also need to remove entities like particle emitters and adjust quality settings so the GI render isn't rendering reflective water and other unnecessary things. Finally, the probe shader needs more work so it can handle rotation of the probe entity, which it presently does not do.
  2. Josh
    At 100% scaling this image appears correctly:

     
    At 200% scaling it falls apart. The line points are in fact scaled correctly, but they are not surrounding the shape as intended:

     
    So I think instead of converting the coordinate system back and forth between scaled and non-scaled coordinates, the creation function needs to multiply the coordinates by the scaling factor. That means if you create a 70x30 pixel widget and the GUI is using a 200% scaling factor, it will actually create a 140x60 pixel widget instead. However, little issues like the one pictured above will go away.
     
    This sucks though because if you do this, you will get wrong results:

    gui:SetScale(2)
    local widgetA = Widget:Create(0, 0, 200, 20, gui)
    local widgetB = Widget:Create(0, 0, 200, widgetA:GetPosition().y + widgetA:GetHeight(), gui)
     
    widgetB would end up at 80 (20 * 2 * 2), because widgetA:GetHeight() returns the already-scaled height, which then gets scaled a second time at creation.
     
    I fear whatever I implement will simply get ignored by script programmers and they will never test against different DPI scales.
  3. Josh
    I've built a modified version of ReepBlue's C++ animation manager class (based off my Lua script) into the engine. This adds a new command and eliminates the need for the animation manager script.
     

    void Entity::PlayAnimation(const std::string& sequence, const float speed = 1.0f, const int blendtime = 500, const int mode = 0, const std::string endhook = "")
    void Entity::PlayAnimation(const int index, const float speed = 1.0f, const int blendtime = 500, const int mode = 0, const std::string endhook = "")
     
    The animation manager only updates when the entity is drawn, so this does away with the most common use of the entity draw hook. This will make the engine a bit simpler to use, and existing code will still work as before.
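    In C++, a minimal usage sketch based on the signatures above might look like the following (assuming "model" is a pointer to an animated Entity loaded elsewhere; this is illustrative, not engine sample code):

    //Blend into a looping walk cycle over 300 milliseconds at normal speed
    model->PlayAnimation("Walk", 1.0f, 300);

    //Play a one-shot sequence (mode 1); end-of-sequence callbacks are not yet
    //supported in C++ (see the note at the end of this post)
    model->PlayAnimation("Death", 1.0f, 300, 1);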

    Usage in Lua
    Although the animation manager script has been removed from the default scripts, you do not need to modify your existing projects. 
    The AI, weapon, and animation scripts have been updated to use the built-in animation manager. Previously these scripts would create an AnimationManager lua table and call it like this:

    self.animationmanager:SetAnimationSequence("Idle",0.02)
     
    These calls now look like this:

    self.entity:PlayAnimation("Idle",0.02)
     
    One-shot animations that call a callback when the end of the sequence is reached used to look like this:

    self.animationmanager:SetAnimationSequence("Death",0.04,300,1,self,self.EndDeath)
     
    Now we just pass the function name in:

    self.entity:PlayAnimation("Death",0.04,300,1,"EndDeath"
     
    Again, as long as you have the old AnimationManager.lua script in your project, your existing scripts do not need to be updated.
     
    Animation callbacks are not yet supported in C++.
     
    This update will be available later today, for C++ and Lua, on all platforms.
  4. Josh
    Here are some concepts I came up with for the site redesign.
     

     

     

     
    The bold no-bull**** interface of itch.io inspired this design:
     

     
    I think what will work best is if the designer takes my rough sketches, turns them into a clean design, and implements it with clean code.
     
    I don't think we can change the whole site over at once without me losing control of the creative process and having runaway costs. I want to focus on the pages I have shown here and establish a foundation I can experiment with and iterate on.
  5. Josh
    An update is up which saves all menu settings into your game's config file.  When your program calls System:SetProperty() the inputted key-value pair is saved in a list of settings.  Your game automatically saves these settings to a file when it closes, located in C:\Users\<USERNAME>\AppData\local\<GAMENAME>\<GAMENAME>.cfg.
    The contents of the config file will look something like this:
    anisotropicfilter=8
    antialias=1
    lightquality=1
    screenheight=720
    screenwidth=1280
    session_number=2
    terrainquality=1
    texturedetail=0
    trilinearfilter=1
    verticalsync=1
    waterquality=1
    When your game runs again, these settings will be automatically loaded and applied.  You can override config settings with a command line argument.  However, command line arguments will not be saved in the config file.
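    As a rough illustration (assuming the C++ System::GetProperty and System::SetProperty calls mirror the Lua syntax mentioned above; check the documentation for the exact signatures), reading and writing one of these settings might look like this:

    #include "Leadwerks.h"
    #include <string>

    using namespace Leadwerks;

    int main(int argc, const char* argv[])
    {
        //Read a previously saved setting, falling back to a default value
        //if it isn't in the config file yet
        int screenwidth = std::stoi(System::GetProperty("screenwidth", "1280"));

        //Store a setting; the key-value pair is written back to the .cfg file
        //automatically when the game closes
        System::SetProperty("screenwidth", std::to_string(screenwidth));

        return 0;
    }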
    This has been my plan for a long time, and is the reason why your game is not set to use the editor settings.  Settings for running your game in real-time should be separate from editor settings.
  6. Josh
    Three years ago I realized we could safely distribute Lua script-based games on Steam Workshop without the need for a binary executable.  At the time this was quite extraordinary.
    http://www.develop-online.net/news/update-leadwerks-workshop-suggests-devs-can-circumvent-greenlight-and-publish-games-straight-to-steam/0194370
    Leadwerks Game Launcher was born.  My idea was that we could get increased exposure for your games by putting free demos and works in progress on Steam.  At the same time, I thought gamers would enjoy being able to try free indie games without the possibility of getting viruses.  Since then there have been some changes in the market.
    Anyone can publish a game to Steam for $100. Services like itch.io and GameJolt have become very popular, despite the dangers of malware. Most importantly, the numbers we see on the Game Launcher just aren't very high.  My own little game Asteroids3D is set up so the user automatically subscribes to it when the launcher starts.  Since March 2015 it has only gained 12,000 subscribers, and numbers of players for other games are much lower.  On the other hand, a simple game that was hosted on our own website a few years back called "Furious Frank" got 22,000 downloads.  That number could be much higher today if we had left it up.
    So it appears that Steam is good for selling products, but it is a lousy way to distribute free games.  In fact, I regularly sell more copies of Leadwerks Game Engine than I can give away free copies of Leadwerks Game Launcher.
    This isn't to say Game Launcher was a failure.  In many cases, developers reported getting download counts as high or higher than IndieDB, GameJolt, and itch.io.  This shows that the Leadwerks brand can be used to drive traffic to your games.
    On a technical level, the stability of Leadwerks Game Engine 4 means that I have been able to upgrade the executable and for the most part games seamlessly work with newer versions of the engine.  However, there are occasional problems and it is a shame to see a good game stop working.  The Game Launcher UI could stand to see some improvement, but I'm not sure it's worth putting a lot of effort into it when the number of installs is relatively low.
    Of course not all Leadwerks games are written in Lua.  Evayr has some amazing free C++ games he created, and we have several commercial products that are live right now, but our website isn't doing much to promote them.  Focusing on distribution through the Game Launcher left out some important titles and split the community.
    Finally, technological advancements have been made that make it easier for me to host large amounts of data on our site.  We are now hooking into Amazon S3 for user-uploaded file storage.  My bill last month was less than $4.00.
    A New Direction
    It is for these reasons I have decided to focus on refreshing our games database and hosting games on our own website.  You can see my work in progress here.
    https://www.leadwerks.com/games
    The system is being redesigned with some obvious inspiration from itch.io and the following values in mind:
    • First and foremost, it needs to look good.
    • Highly customizable game page.
    • Clear call to action.
    There are two possible reasons to post your game on our site.  Either you want to drive traffic to your website or store page, or you want to get more downloads of your game.  Therefore each page has very prominent buttons on the top right to do exactly this.
    Each game page is skinnable with many options.  The default appearance is sleek and dark.

    You can get pretty fancy with your customizations.

    Next Steps
    The templates still need a lot of work, but it is 80% done.  You can begin playing around with the options and editing your page to your liking.  Comments are not shown on the page yet, as the default skin has to be overridden to match your page style, but they will be.
    You can also post your Game Launcher games here by following these steps:
    1. Find your game's file ID in the workshop.  For example, if the URL is "http://steamcommunity.com/sharedfiles/filedetails/?id=405800821" then the file ID is "405800821".
    2. Subscribe to your item, start Steam, and navigate to the folder where Game Launcher Workshop items are stored:
       C:\Program Files (x86)\Steam\steamapps\workshop\content\355500
    3. If your item is downloaded there will be a subfolder with the file ID:
       C:\Program Files (x86)\Steam\steamapps\workshop\content\355500\405800821
    4. Copy whatever file is found in that folder into a new folder on your desktop.  The file might be named "data.zip" or it could be named something like "713031292550146077_legacy.bin".  Rename the file "data.zip" if it isn't named that already.
    5. Copy the Game Launcher game files located here into the same folder on your desktop:
       C:\Program Files (x86)\Steam\steamapps\common\Leadwerks Game Launcher\Game
    6. When you double-click "game.exe" (or just "game" on Linux) your game should now run.  Rename the executable to your game's name, including the Linux executable if you want to support Linux.
    7. Now zip up the entire contents of that folder and upload it on the site here.
    You can also select older versions of Game Launcher in the Steam app properties if you want to distribute your game with an older executable.
    Save the Games
    There are some really great little games that have resulted from the game tournaments over the years, but unfortunately many of the download links in the database now point to dead DropBox and Google Drive links.  It is my hope that the community can work together to preserve all these fantastic gems and get them permanently uploaded to our S3 storage system, where they will be saved forever for future players to enjoy.
    If you have an existing game, please take a look at your page and make sure it looks right.
    • Make any customizations you want for the page appearance.
    • Clean up formatting errors like double line breaks, missing images, or dead links.
    • Screenshots should go in the screenshot field, videos should go in the video field, and downloads should go in the downloads field.
    Some of the really old stuff can still be grabbed off our Google Drive here.
    I appreciate the community's patience in working with me to try the idea of Game Launcher, but our results clearly indicate that a zip download directly from our website will get the most installs and is easiest for everyone.
  7. Josh
    With the help of @martyj I was able to test out occlusion culling in the new engine. This was a great chance to revisit an existing feature and see how it can be improved. The first thing I found is that determining visibility based on whether a single pixel is visible isn't necessarily a good idea. If small cracks are present in the scene, a single pixel peeking through can cause a lot of unnecessary drawing without improving the visual quality. I changed the occlusion culling code to record the number of pixels drawn, instead of just using a yes/no boolean value:
    glBeginQuery(GL_SAMPLES_PASSED, glquery);
    In OpenGL 4.3, a less accurate but faster GL_ANY_SAMPLES_PASSED_CONSERVATIVE (i.e. it might produce false positives) option was added, but this is a step in the wrong direction, in my opinion.
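    For context, here is a minimal sketch of how a samples-passed query can drive the draw decision. The bounding-box and object draw helpers and the pixel threshold are illustrative placeholders, not the engine's actual code:

    void DrawBoundingBox(); //hypothetical helper
    void DrawObject();      //hypothetical helper

    GLuint glquery;
    glGenQueries(1, &glquery);

    //Render the object's bounding box with color and depth writes disabled,
    //counting how many samples pass the depth test
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_SAMPLES_PASSED, glquery);
    DrawBoundingBox();
    glEndQuery(GL_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    //Later (ideally a frame later, to avoid stalling the pipeline), read the
    //result and only draw the real object if enough pixels were visible
    GLuint samplespassed = 0;
    glGetQueryObjectuiv(glquery, GL_QUERY_RESULT, &samplespassed);
    const GLuint threshold = 16; //illustrative cutoff, in pixels
    if (samplespassed > threshold) DrawObject();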
    Because our new clustered forward renderer uses a depth pre-pass I was able to implement a wireframe rendering mode that works with occlusion culling. Depth data is rendered in the prepass, and a color wireframe is drawn on top. This allowed me to easily view the occlusion culling results and fine-tune the algorithm to make it perfect. Here are the results:
    As you can see, we have pixel-perfect occlusion culling that is completely dynamic and basically zero-cost, because the entire process is performed on the GPU. Awesome!
  8. Josh
    A new build is available on the beta branch. This changes the model picking system to use a different raycasting implementation under-the-hood. Sphere picking (using a radius) will also now correctly return the first hit triangle. You will also notice much faster loading times when you load up a detailed model in the editor!
    Additional parameters have been added to the Joint::SetSpring command:
    void Joint::SetSpring(const float spring, const float relaxation = 1.0f, const float damper = 0.1f)
    The classes for P2P networking, lobbies, and voice communication have been added but are not yet documented and may still change.
  9. Josh
    A new update is available on the beta branch on Steam. This adds numerous bug fixes. The Linux build of the editor is compiled with Ubuntu 16.04 and the engine libraries and executables are compiled with Ubuntu 18.04. Linux users, please let me know how this works for you.
  10. Josh
    An update is available on the beta branch on Steam with a few bug fixes.
    I'm going to release 4.6 with the current features because a lot of bugs have been fixed since 4.5 and we're overdue for an official release. 4.7 will add a new vehicle system, character crouching physics, and some other things, and will be out later this year.
  11. Josh
    The latest design of my OpenGL renderer using bindless textures has some problems, and although these can be resolved, I think I have hit the limit on how useful an initial OpenGL implementation will be for the new engine. I decided it was time to dive into the Vulkan API. This is sort of scary, because I feel like it sets me back quite a lot, but at the same time the work I do with this will carry forward much better. A Vulkan-based renderer can run on Windows, Linux, Mac, iOS, Android, PS4, and Nintendo Switch.
    So far my impressions of the API are pretty good. Although it is very verbose, it gives you a lot of control over things that were previously undefined or vendor-specific hacks. Below is code that initializes Vulkan and chooses a rendering device, with a preference for discrete GPUs over integrated graphics.
    VkInstance inst;
    VkResult res;
    VkDevice device;

    VkApplicationInfo appInfo = {};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "MyGame";
    appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.pEngineName = "TurboEngine";
    appInfo.engineVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.apiVersion = VK_API_VERSION_1_0;

    // Get extensions
    uint32_t extensionCount = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &extensionCount, nullptr);
    std::vector<VkExtensionProperties> availableExtensions(extensionCount);
    vkEnumerateInstanceExtensionProperties(nullptr, &extensionCount, availableExtensions.data());
    std::vector<const char*> extensions;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    createInfo.enabledExtensionCount = (uint32_t)extensions.size();
    createInfo.ppEnabledExtensionNames = extensions.data();
    #ifdef DEBUG
        createInfo.enabledLayerCount = 1;
        const char* DEBUG_LAYER = "VK_LAYER_LUNARG_standard_validation";
        createInfo.ppEnabledLayerNames = &DEBUG_LAYER;
    #endif

    res = vkCreateInstance(&createInfo, NULL, &inst);
    if (res == VK_ERROR_INCOMPATIBLE_DRIVER)
    {
        std::cout << "cannot find a compatible Vulkan ICD\n";
        exit(-1);
    }
    else if (res)
    {
        std::cout << "unknown error\n";
        exit(-1);
    }

    //Enumerate devices
    uint32_t gpu_count = 1;
    std::vector<VkPhysicalDevice> devices;
    res = vkEnumeratePhysicalDevices(inst, &gpu_count, NULL);
    if (gpu_count > 0)
    {
        devices.resize(gpu_count);
        res = vkEnumeratePhysicalDevices(inst, &gpu_count, &devices[0]);
        assert(!res && gpu_count >= 1);
    }

    //Sort list with discrete GPUs at the beginning
    std::vector<VkPhysicalDevice> sorteddevices;
    for (int n = 0; n < devices.size(); n++)
    {
        VkPhysicalDeviceProperties deviceprops = VkPhysicalDeviceProperties{};
        vkGetPhysicalDeviceProperties(devices[n], &deviceprops);
        if (deviceprops.deviceType == VkPhysicalDeviceType::VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
        {
            sorteddevices.insert(sorteddevices.begin(), devices[n]);
        }
        else
        {
            sorteddevices.push_back(devices[n]);
        }
    }
    devices = sorteddevices;

    VkDeviceQueueCreateInfo queue_info = {};
    unsigned int queue_family_count;

    for (int n = 0; n < devices.size(); ++n)
    {
        vkGetPhysicalDeviceQueueFamilyProperties(devices[n], &queue_family_count, NULL);
        if (queue_family_count >= 1)
        {
            std::vector<VkQueueFamilyProperties> queue_props;
            queue_props.resize(queue_family_count);
            vkGetPhysicalDeviceQueueFamilyProperties(devices[n], &queue_family_count, queue_props.data());
            if (queue_family_count >= 1)
            {
                bool found = false;
                for (int i = 0; i < queue_family_count; i++)
                {
                    if (queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
                    {
                        queue_info.queueFamilyIndex = i;
                        found = true;
                        break;
                    }
                }
                if (!found) continue;

                float queue_priorities[1] = { 0.0 };
                queue_info.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
                queue_info.pNext = NULL;
                queue_info.queueCount = 1;
                queue_info.pQueuePriorities = queue_priorities;

                VkDeviceCreateInfo device_info = {};
                device_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
                device_info.pNext = NULL;
                device_info.queueCreateInfoCount = 1;
                device_info.pQueueCreateInfos = &queue_info;
                device_info.enabledExtensionCount = 0;
                device_info.ppEnabledExtensionNames = NULL;
                device_info.enabledLayerCount = 0;
                device_info.ppEnabledLayerNames = NULL;
                device_info.pEnabledFeatures = NULL;

                res = vkCreateDevice(devices[n], &device_info, NULL, &device);
                if (res == VK_SUCCESS)
                {
                    VkPhysicalDeviceProperties deviceprops = VkPhysicalDeviceProperties{};
                    vkGetPhysicalDeviceProperties(devices[n], &deviceprops);
                    std::cout << deviceprops.deviceName;
                    vkDestroyDevice(device, NULL);
                    break;
                }
            }
        }
    }
    vkDestroyInstance(inst, NULL);
  12. Josh
    It's always fun when I can do something completely new that people have never seen in a game engine. I've had the idea for a while to create a new light type for light strips, and I got to implement this today. The new engine has taken a tremendous amount of effort to get working over two years, but as development continues I think I will become much more responsive to your suggestions since we have a very strong foundation to build on now.
    Using this test scene provided by @reepblue you can see how this new light type looks and behaves. They are great for placing along walls, but what really got me interested was the idea of calculating specular lighting not from a single point, but in a different way. I thought if I could figure out the math I would get a realistic reflection on the ground, and it worked!

    The reflection on the floor is actually the specular component of the light. We are used to thinking of specular reflections as a little white circle that moves around, but the light doesn't have to be coming from a single point. Some calculations in the shader can be used to determine the closest point to the light strip and use that for reflections. The net effect is that a long bar appears on the floor, matching the length of the light. This is not a screen-space effect or a cubemap. When you look down at the floor the specular component is still there shining back at you. Every surface is using the same exact equation, but it appears very different on the walls, the ceiling, and the floor due to the different angles.
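    As a rough sketch of the idea (a standalone illustration with its own small vector type, not the engine's actual shader code), the closest point on the strip can be found by projecting the shaded point onto the light's line segment and clamping:

    #include <algorithm>

    struct Vec3f { float x, y, z; };

    static Vec3f Sub(const Vec3f& a, const Vec3f& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3f Add(const Vec3f& a, const Vec3f& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3f Scale(const Vec3f& a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static float Dot(const Vec3f& a, const Vec3f& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    //Returns the point on the light strip (segment a-b) nearest to the shaded point p.
    //Using this point as the light position in the specular term stretches the
    //highlight into a bar that matches the length of the strip.
    Vec3f ClosestPointOnSegment(const Vec3f& a, const Vec3f& b, const Vec3f& p)
    {
        Vec3f ab = Sub(b, a);
        float t = Dot(Sub(p, a), ab) / Dot(ab, ab);
        t = std::clamp(t, 0.0f, 1.0f);
        return Add(a, Scale(ab, t));
    }

    The rest of the specular math then proceeds as usual, just with this point substituted for the light position.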

    Even a surface facing opposite the light will correctly reflect it back to the camera.

    In this image, I created a small green strip light that looks like a laser. There is no visible laser beam, but if there was it would appear above the soft green lighting. The hard line on the ground is actually the specular reflection of the light. You can see it reflecting off the sphere as well.

    The new Vulkan renderer also supports box lights, which are a directional light with a defined boundary, and I have an idea for one more type of light.
  13. Josh
    The beta of our new game engine has been updated with a new renderer built with the Vulkan graphics API, and all OpenGL code has been removed. Vulkan provides us with low-overhead rendering that delivers a massive increase in rendering performance. Early benchmarks indicate as much as a 10x improvement in speed over the Leadwerks 4 renderer.
    The new engine features a streamlined API with modern C++ features and an improved binding library for Lua. Here's a simple C++ program in Turbo:
    #include "Turbo.h" using namespace Turbo; int main(int argc, const char *argv[]) { //Create a window auto window = CreateWindow("MyGame", 0, 0, 1280, 720); //Create a rendering context auto context = CreateContext(window); //Set some Lua variables VirtualMachine::lua->set("mainwindow", window); VirtualMachine::lua->set("maincontext", context); //Create the world auto world = CreateWorld(); //Load a scene auto scene = LoadScene(world, "Maps/start.map"); while (window->KeyHit(KEY_ESCAPE) == false and window->Closed() == false) { world->Update(); world->Render(context); } return 0; } Early adopters can get access to beta builds with a subscription of just $4.99 a month, which can be canceled at any time. New updates will come more frequently now that the basic renderer is working.
  14. Josh
    Previously I described how I was able to save the voxel data into a sparse octree and correctly lookup the right voxel in a shader. This shot shows that each triangle is being rasterized separately, i.e. the triangle bounding box is being correctly trimmed to avoid a lot of overlapping voxels:

    Calculating direct lighting using the sparse octree was very difficult, and took me several days of debugging. I'm not 100% sure what the problem was, other than it seems GLSL code is not quite as flexible as C++. I actually had the same exact function working in GLSL and C++, and it worked perfectly in C++ but gave wrong results in GLSL! Of course I did not have a debugger for my GLSL code, so I ended up having to write a lot of if statements and output a pixel color based on the result. In the end I finally tracked the problem down to some data stored in an array and changed the way the routine worked, but what the exact issue was I'll never know.
    With the sparse voxel octree, we only have about 400,000 pixels to draw when we process direct lighting. Rendering all voxels in a 256x256x256 volume texture would require 16 million pixels to be drawn. So the sparse approach requires us to draw only about 2% of the number of pixels we would have to otherwise. Using shadow maps, on a 1920x1080 screen we would have to calculate about 2,000,000 shadow intersections. Although we are not comparing the same exact things, this does make me optimistic for the final performance results. Basically, instead of calculating shadow visibility for each pixel, we can just calculate it per voxel, and your voxels are always going to be quite a bit bigger than screen pixels. So the whole issue of balancing shadow map resolution with screen resolution goes away.
    Ray traversal is very fast because it skips large chunks of empty space, instead of checking every single grid space for a voxel.
    The voxel resolution below is not very high, I am only using one octree, and there's currently no blending / filtering, but that will all come in time.

    Leadwerks 1 and 3D World Studio used lightmaps for lighting. Later versions of Leadwerks used deferred lighting and shadowmaps. Being able to roll out another cutting-edge lighting technology in Ultra Engine is icing on the cake for the new engine. I expect this will allow particle shadows and transparent glass with colored shadows, as well as real-time global illumination and reflections, all with great performance on most hardware.
  15. Josh
    A new beta update is available. The raytracing implementation has been sped up significantly. The same limitations of the current implementation still apply, but the performance will be around 10x faster, as the most expensive part of the raytrace shader has been precomputed and cached.
    The Material::SetRefraction method has also been exposed to Lua. The Camera::SetRefraction method is now called "SetRefractionMode".
    The results are so good, I don't have any plans to use any kind of screen-space reflection effect.
     
  16. Josh
    Until now, we haven't really had proper debugging info when a crash occurs during execution of a Lua script. Thanks to some previous work TylerH did with Lua, a conversation with him revealed how to easily add debugging info into the editor and script interpreter. (Incidentally, the Lua integration was Tyler's idea to begin with!) Here's a shot of the Script Editor catching an engine crash and displaying the script line the error occurs at. This is a crash that occurred in the engine, not a Lua compile error.

    This will make Lua a more viable alternative to other programming languages, and it will make it easier to use Lua as an extra enhancement to a C++ or other program. The integration of LuaJIT with the engine also speeds up script execution a lot. Performance tests of the LuaJIT 2.0.0 beta reveal it to be up to 102 times faster than standard interpreted Lua. (Lumooja first told me this, and I thought he was exaggerating, but you can see for yourself.) And regular Lua was already faster than UnrealScript!
     
    In the future, I think we can look forward to having a nice display showing all the variables in your script program in real-time, and improved debugging tools.
  17. Josh
    I fixed our AI navigation problems and got pathfinding to work using navmeshes. Now you can easily make a horde of zombies chase after the player without setting up any waypoints. Which is the whole point of this, of course.
     
    The problem had to do with a polygon filter, and I am still not sure what is going on, so I disabled it for now.
     


  18. Josh
    I'm testing the Leadwerks3D AI navigation and getting some interesting results. I'm not sure why it's acting this way, but we'll get it figured out soon. It seems like the navigation gets "stuck" on corners, fails to enter some tiles, and likes to take the scenic route to some destinations. B)
     


  19. Josh
    All difficult technical challenges for the completion of Leadwerks3D are solved. This includes navmesh pathfinding, cross-platform support, Lua and C# integration, OpenGLES rendering, the abstract driver model, etc., etc., etc. Basically, all the scary stuff is done, and the only thing that remains is hard work. I'll be turning my attention back to the editor shortly, but first I wanted to address a different kind of challenge: Documentation and the website.
     
    The present appearance of the website took a long time to develop, and is the result of four or five different people's work. He was difficult to track down, but I finally came across the person who is the master of the forum software and CMS we use. He was able to fix a few small issues I had, but he did not design the site. Now he has been recruited to create a new website theme using the good elements of our current design, in a Web 2.0-ish style. We're also planning on an improved image and video gallery, and a better display for community articles. Professional web design services will be used to create product pages for Leadwerks3D that truly reflect how awesome the software is. I'm not a web designer, and I am happily surrendering that responsibility to someone who has instructions to develop Web 2.0-style product pages with my content.
     
    Last summer we launched a lot of new website features including a chat bar, video gallery, and embedded documentation. The first two were a success, and I feel they really add to the site experience. The third I consider somewhat of a failure. The documentation search is not very good, the pages take too long to load, and the organization is too categorical. I installed a temporary Wiki where I have been jotting down docs and ideas, but I wasn't committed to the idea of using it for the Leadwerks3D documentation. Then I found the documentation system we're going to use.
     
    Leadwerks3D documentation will be available in a two-panel searchable HTML page, which is pretty standard. However, the same docs can also be exported in PDF and even EPub format, which is what iBooks uses:

     
    So, with the documentation system decided and web design out of my hands, I now turn back to the Leadwerks3D editor...
  20. Josh
    I've never written many polygonal modeling routines, having focused instead on constructive solid geometry. I wanted to include some tools for modifying surface normals and texture coordinates. I came up with a normal calculation routine that actually uses four different algorithms, depending on the settings specified.
     
    One thing I learned right away is you want to do away with n*n routines. That is, NEVER do this:

    for (i = 0; i < surface->CountVertices(); i++)
    {
        for (n = 0; n < surface->CountVertices(); n++)
        {
            //Write some code here
        }
    }
    Instead, I used an std::map with a custom compare function. The comparison function below, when used together with an std::map, allows the engine to quickly find a vertex at any position, within a given tolerance:

    bool SurfaceReference::UpdateNormalsCompare(Vec3 v0, Vec3 v1)
    {
        if (v0.DistanceToPoint(v1) < UpdateNormalsLinearTolerance) return false;
        return v0 < v1;
    }
    At first I was experiencing some weird results where some vertices seemed to be ignored:

     
    I realized the problem was that my map, which used Vec3 objects for the key, was not sorting properly. Here was my original Vec3 compare function:

    bool Vec3::operator<(const Vec3 v)
    {
        if (x < v.x) return true;
        if (y < v.y) return true;
        if (z < v.z) return true;
        return false;
    }
    The above function is supposed to result in any set of Vec3 objects being sorted in order. Can you see what's wrong with it? It's supposed to first sort Vec3s by the X component, then the Y, then the Z. Consider the following set of Vec3s:
    A = Vec3(1,2,3)
    B = Vec3(2,4,3)
    C = Vec3(2,1,1)
     
    When sorted, these three Vec3s should be in the following order:
    A,C,B
     
    If you look carefully at the compare function above, it doesn't give consistent results. For example, A would be less than C, but C would also be less than A.
     
    Here's the correct compare function. Notice I added a second logical operation for each element:

    bool Vec3::operator<(const Vec3 v)
    {
        if (x < v.x) return true;
        if (x > v.x) return false;
        if (y < v.y) return true;
        if (y > v.y) return false;
        if (z < v.z) return true;
        return false;
    }
    So with that issue sorted out, the resulting code using std::maps is much, MUCH faster, although it can get pretty difficult to visualize. I think I am a hardcore C++ coder now!

    void SurfaceReference::Optimize(const float& tolerance)
    {
        int i, a, b, c, v;
        Vertex vertex;
        bool(*fn_pt)(Vertex, Vertex) = OptimizeCompare;
        std::map<Vertex, std::vector<Vertex>, bool(*)(Vertex, Vertex)> vertexmap(fn_pt);
        std::map<Vertex, std::vector<Vertex>, bool(*)(Vertex, Vertex)>::iterator it;
        int vertexcount = 0;
        std::vector<Vertex> vertexarray;
        Vec3 normal;

        OptimizeTolerance = tolerance;

        //Divide the surface up into clusters and remap polygon indices
        for (i = 0; i < CountIndices(); i++)
        {
            v = GetIndiceVertex(i);
            vertex = Vertex(GetVertexPosition(v), GetVertexNormal(v), GetVertexTexCoords(v, 0), GetVertexTexCoords(v, 1), GetVertexColor(v));
            if (vertexmap.find(vertex) == vertexmap.end())
            {
                vertex.index = vertexcount;
                vertexcount++;
            }
            vertexmap[vertex].push_back(vertex);
            SetIndiceVertex(i, vertexmap[vertex][0].index);
        }

        //Resize vector to number of vertices
        vertexarray.resize(vertexcount);

        //Average all vertices within each cluster
        for (it = vertexmap.begin(); it != vertexmap.end(); it++)
        {
            std::vector<Vertex> vector = (*it).second;

            //Reset vertex to zero
            vertex.position = Vec3(0);
            vertex.normal = Vec3(0);
            vertex.texcoords[0] = Vec2(0);
            vertex.texcoords[1] = Vec2(0);
            vertex.color = Vec4(0);

            //Get the average vertex
            for (i = 0; i < vector.size(); i++)
            {
                vertex.position += vector[i].position;
                vertex.normal += vector[i].normal;
                vertex.texcoords[0].x += vector[i].texcoords[0].x;
                vertex.texcoords[0].y += vector[i].texcoords[0].y;
                vertex.texcoords[1].x += vector[i].texcoords[1].x;
                vertex.texcoords[1].y += vector[i].texcoords[1].y;
                vertex.color += vector[i].color;
            }
            vertex.position /= vector.size();
            vertex.normal /= vector.size();
            vertex.texcoords[0].x /= vector.size();
            vertex.texcoords[1].x /= vector.size();
            vertex.texcoords[0].y /= vector.size();
            vertex.texcoords[1].y /= vector.size();
            vertex.color /= vector.size();

            //Add to vector
            vertexarray[vector[0].index] = vertex;
        }

        //Clear vertex arrays
        delete positionarray;
        delete normalarray;
        delete texcoordsarray[0];
        delete texcoordsarray[1];
        delete colorarray;
        delete binormalarray;
        delete tangentarray;
        positionarray = NULL;
        normalarray = NULL;
        texcoordsarray[0] = NULL;
        texcoordsarray[1] = NULL;
        colorarray = NULL;
        binormalarray = NULL;
        tangentarray = NULL;

        //Add new vertices into surface
        for (i = 0; i < vertexarray.size(); i++)
        {
            vertex = vertexarray[i];
            AddVertex(vertex.position.x, vertex.position.y, vertex.position.z,
                      vertex.normal.x, vertex.normal.y, vertex.normal.z,
                      vertex.texcoords[0].x, vertex.texcoords[0].y,
                      vertex.texcoords[1].x, vertex.texcoords[1].y,
                      vertex.color.x, vertex.color.y, vertex.color.z, vertex.color.w);
        }
        UpdateTangentsAndBinormals();
    }
    Below, you can see what happens when you use the angular threshold method, with angular tolerance set to zero:

     
    And here it is with a more reasonable tolerance of 30 degrees:

     
    You can calculate texture coordinates for a model using box, plane, cylinder, and sphere texture mapping. You can also do a pure matrix transformation on the texcoords. The editor automatically calculates the bounds of the object and uses those by default, but you can translate, scale, and rotate the texture mapping shape to adjust the coordinates. Box and plane mapping were easy to figure out. Sphere and cylinder mapping were more difficult to visualize. I first cracked cylinder mapping when I realized the x component of the normalized vertex position could be used for the U texture coordinate, and then sphere mapping was just like that for both X/U and Y/V:

     
    Box mapping is good for mechanical stuff and buildings, but bad for organic shapes, as you can see from the visible seam that is created here. Good thing we have four more modes to choose from!:

     
    You also get lots of powerful commands in the surface class. Here's a little taste of the header file:

    virtual void Optimize(const float& tolerance = 0.01);
    virtual void UpdateTexCoords(const int& mode, const Mat4& mat = Mat4(), const float& tilex = 1, const float& tiley = 1, const int& texcoordset = 0);
    virtual void Transform(const Mat4& mat);
    virtual void Unweld();
    virtual void Facet();
    virtual void UpdateNormals(const int& mode, const float& distancetolerance = 0.01, const float& angulartolerance = 180.0);
    To conclude, here are some other random and funny images I came up with while developing these features. I think they are beautiful in their own flawed way:

     

  21. Josh
    One thing I love about constructive solid geometry modeling is that texture mapping is sooooo much simpler than in 3ds Max. Most of the time the automatic texture mapping works fine, and when you do need to adjust texture mapping by hand, CSG texture mapping tools are still much easier. The justify buttons line a texture up along a face, or a group of faces.
     
    Although using these tools is fun and easy, programming them is another matter. I dreaded the implementation of the texture justify buttons, but it wasn't that hard. It doesn't account for rotation and scale yet, but a basic implementation turned out to be surprisingly easy. I guess after writing 3D World Studio five times, I am starting to get the hang of this:

     
    Smooth groups are working as well. Leadwerks3D will be the first CSG editor ever (AFAIK) to support real-time smooth groups. In the example below, we have a beveled corner we want to make rounded:

     
    To round the corner, select all the faces you want smoothed together, and press one of the smoothing group buttons. This will assign the selected faces to use the smoothing group you press. You can add multiple smooth groups to a face. Faces that have one or more smooth groups in common will be smoothed together:

     
    Here is the resulting rounded corner:

     
    Finally, I am starting to narrow down the visual theme for the new editor. Since we use the standard Windows and Mac interfaces on each operating system, it makes sense to use the standard Windows Office/VS icons whenever possible. One side benefit is they also look great on Mac, strangely enough:

     

     

     

     

    That's all for now. I've had a lot of fun designing the "perfect" workflow for game development, and can't wait to show you all the finished product this summer!
  22. Josh
    I picked up an Intel SSD for fairly cheap and installed the Windows 8 Release Preview on it. With this configuration, the system boots up in about 5 seconds. I actually spend a lot longer waiting for the BIOS to finish than for Windows to start, which is fantastic.
     
    Anyways, it makes sense to me to take screenshots for the docs from the newest version of Windows 8 since Windows is still the dominant operating system. I couldn't find a driver for my ATI 3850, so I swapped it out for a GeForce 480. Here's Leadwerks 3 running in Windows 8. It runs perfectly and doesn't look too bad.

     

     

  23. Josh
    I wanted to add some default procedural generation tools in the Leadwerks 3.1 terrain editor. The goal is to let the user input a few parameters to control the appearance of their terrain and auto-generate a landscape that looks good without requiring a lot of touch-up work.
    Programmers commonly rely on two methods for terrain heightmap generation, Perlin noise and fractal noise. Perlin noise produces a soft rolling appearance. The problem is that Perlin noise heightmaps look nothing like real-life terrain:

     
    Fractal noise provides a better appearance, but it still looks "stylized" instead of realistic:

     
    To get realistic procedural terrains, a more complex algorithm was needed. After a few days of experimentation, I found the optimal sequence of filters to combine to get realistic results.
    We start with a Voronoi diagram. The math here is tricky, but we end up with a grid of geometric primitives that meet at the edges. This gives us large rough features and ridge lines that look approximately like real mountains:

     
    Of course, real mountains do not have perfectly straight edges. A perturbation filter is added to make the edges a little bit "wavy", like an underwater effect. It gets rid of the perfectly straight edges without losing the defining features of the height map:

     
    The next step is to add some low-frequency Perlin noise. This gives the entire landscape some large hills that add variation to the height, instead of just having a field of perfectly shaped mountains. The mixture of this filter can be used to control how hilly or mountainous the terrain appears:

     
    We next blend in some Fractal noise, to roughen the landscape up a bit and add some high frequency details:

     
    Finally, we use thermal and hydraulic erosion to add realistic weathering of our terrain. Thermal erosion works by reducing the harshness of steep cliffs and letting material fall down and settle. Hydraulic erosion simulates thousands of raindrops falling on the landscape and carrying material away. This gives beautiful rivulets that appear as finger-like projections in the height map. Rather than relying on conventional hydraulic erosion algorithms, I created my own technique designed specifically to bring out the appearance of those features.
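    To make the sequence concrete, here is a rough sketch of how such a filter pipeline could be composed. The filter functions, parameters, and blend amounts are hypothetical placeholders (declarations only), not the actual editor code:

    #include <vector>

    //A heightmap is just a square grid of height values in the 0-1 range.
    struct Heightmap
    {
        int size;
        std::vector<float> heights;
    };

    //Hypothetical filter stages, named after the steps described above.
    Heightmap VoronoiDiagram(int size, int cellcount);
    void Perturb(Heightmap& map, float strength);
    void AddPerlinNoise(Heightmap& map, float frequency, float amount);
    void AddFractalNoise(Heightmap& map, int octaves, float amount);
    void ThermalErosion(Heightmap& map, int iterations);
    void HydraulicErosion(Heightmap& map, int raindrops);

    Heightmap GenerateTerrain(int size)
    {
        //Start with large rough features and ridge lines
        Heightmap map = VoronoiDiagram(size, 32);

        //Make the straight Voronoi edges wavy
        Perturb(map, 0.05f);

        //Low-frequency Perlin noise adds large hills; the blend amount controls
        //how hilly versus mountainous the result looks
        AddPerlinNoise(map, 4.0f, 0.3f);

        //High-frequency fractal noise roughens the surface
        AddFractalNoise(map, 6, 0.1f);

        //Erosion passes add realistic weathering
        ThermalErosion(map, 10);
        HydraulicErosion(map, 100000);

        return map;
    }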

     
    Here is an animation of the entire process:

     
    And in the renderer, the results look like the image below. All the parameters can be adjusted to vary the appearance, and then you can go in with the manual tools and sculpt the terrain as desired.

     
    The new landscape has ridges, mountains, and realistic erosion. Compare this to the Perlin and fractal landscapes at the top of this article. It's also interesting that the right combination of roughness and sharp features gives a much better appearance to the texture blending algorithm.