Blog Entries posted by Josh

  1. Josh

    Articles
    The Ultra Engine editor is designed to be expandable and modifiable. Lua scripting is integrated into the editor and can be used to write editor extensions and even modify the scene or the editor itself in real-time.
    We can create a scene object entirely in code and make it appear in the scene browser tree:
    box = CreateBox(editor.world)
    box.name = "box01"
    o = CreateSceneObject(box) --make editor recognize the entity and add it to the scene browser
    o:SetSelected(true)
    We can even modify the editor itself and start adding new features to the interface:
    editor.sidepanel.tabber:AddItem("Roads")
    p = CreatePanel(0,0,300,500,editor.sidepanel.tabber)
    button = CreateButton("Create",20,20,100,30,p)
    Of course, you would usually want to put all this code in a script file and either run it by selecting the Script > Run Script... menu item, or place it in the "Scripts/Start" folder so it automatically gets run at startup. But it sure is cool to be able to experiment live with Lua right in the console and see the result of your code instantly.
  2. Josh

    Articles
    The VK_KHR_dynamic_rendering extension has made its way into Vulkan 1.2.203 and I have implemented this in Ultra Engine. What does it do?
    Instead of creating renderpass objects ahead of time, dynamic rendering allows you to just specify the settings you need as you fill in command buffers with rendering instructions. From the Khronos working group:
    In my experience, post-processing effects were where this hurt the most. The engine has a user-defined stack of post-processing effects, so there are many possible configurations. You had to store and cache a lot of renderpass objects for all possible combinations of settings. It's not impossible, but it made things very complicated. Basically, you have to know every little detail of how the renderpass object is going to be used in advance. I had several different functions like the code below for initializing renderpasses that were meant to be used at various points in the rendering routine.
    bool RenderPass::InitializePostProcess(shared_ptr<GPUDevice> device, const VkFormat depthformat, const int colorComponents, const bool lastpass)
    {
        this->clearmode = clearmode;
        VkFormat colorformat = __FramebufferColorFormat;
        this->colorcomponents = colorComponents;
        if (depthformat != 0) this->depthcomponent = true;
        this->device = device;

        std::array<VkSubpassDependency, 2> dependencies;
        dependencies[0] = {};
        dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
        dependencies[0].dstSubpass = 0;
        dependencies[0].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dependencies[0].srcAccessMask = 0;
        dependencies[0].dstStageMask = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
        dependencies[0].dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
        dependencies[1] = {};
        dependencies[1].srcSubpass = VK_SUBPASS_EXTERNAL;
        dependencies[1].dstSubpass = 0;
        dependencies[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dependencies[1].srcAccessMask = 0;
        dependencies[1].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dependencies[1].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;

        renderPassInfo = {};
        renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
        renderPassInfo.attachmentCount = colorComponents;
        renderPassInfo.dependencyCount = colorComponents;
        if (depthformat == VK_FORMAT_UNDEFINED)
        {
            dependencies[0] = dependencies[1];
        }
        else
        {
            renderPassInfo.attachmentCount++;
            renderPassInfo.dependencyCount++;
        }
        renderPassInfo.pDependencies = dependencies.data();

        colorAttachment[0] = {};
        colorAttachment[0].format = colorformat;
        colorAttachment[0].samples = VK_SAMPLE_COUNT_1_BIT;
        colorAttachment[0].initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
        colorAttachment[0].loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        colorAttachment[0].storeOp = VK_ATTACHMENT_STORE_OP_STORE;
        colorAttachment[0].stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        colorAttachment[0].stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
        colorAttachment[0].finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        if (lastpass) colorAttachment[0].finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

        VkAttachmentReference colorAttachmentRef = {};
        colorAttachmentRef.attachment = 0;
        colorAttachmentRef.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

        depthAttachment = {};
        VkAttachmentReference depthAttachmentRef = {};
        if (depthformat != VK_FORMAT_UNDEFINED)
        {
            colorAttachmentRef.attachment = 1;
            depthAttachment.format = depthformat;
            depthAttachment.samples = VK_SAMPLE_COUNT_1_BIT;
            depthAttachment.loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
            depthAttachment.initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;// VK_IMAGE_LAYOUT_UNDEFINED;
            depthAttachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
            depthAttachment.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
            depthAttachment.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
            depthAttachment.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
            depthAttachmentRef.attachment = 0;
            depthAttachmentRef.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
        }
        colorAttachment[0].initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
        depthAttachment.initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;// VK_IMAGE_LAYOUT_UNDEFINED;

        subpasses.push_back({});
        subpasses[0].pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpasses[0].colorAttachmentCount = colorComponents;
        subpasses[0].pColorAttachments = &colorAttachmentRef;
        subpasses[0].pDepthStencilAttachment = NULL;
        if (depthformat != VK_FORMAT_UNDEFINED) subpasses[0].pDepthStencilAttachment = &depthAttachmentRef;

        VkAttachmentDescription attachments[2] = { colorAttachment[0], depthAttachment };
        renderPassInfo.subpassCount = subpasses.size();
        renderPassInfo.pAttachments = attachments;
        renderPassInfo.pSubpasses = subpasses.data();
        VkAssert(vkCreateRenderPass(device->device, &renderPassInfo, nullptr, &pass));
        return true;
    }

    This gives you an idea of just how many render passes I had to create in advance:
    // Initialize Render Passes
    shadowpass[0] = make_shared<RenderPass>();
    shadowpass[0]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true);//, CLEAR_DEPTH, -1);

    shadowpass[1] = make_shared<RenderPass>();
    shadowpass[1]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true, true, true, 0);

    if (MULTIPASS_CUBEMAP)
    {
        cubeshadowpass[0] = make_shared<RenderPass>();
        cubeshadowpass[0]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true, true, true, CLEAR_DEPTH, 6);
        cubeshadowpass[1] = make_shared<RenderPass>();
        cubeshadowpass[1]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true, true, true, 0, 6);
    }

    //shaderStages[0] = TEMPSHADER->shaderStages[0];
    //shaderStages[4] = TEMPSHADER->shaderStages[4];

    posteffectspass = make_shared<RenderPass>();
    posteffectspass->InitializePostProcess(dynamic_pointer_cast<GPUDevice>(Self()), VK_FORMAT_UNDEFINED, 1, false);

    raytracingpass = make_shared<RenderPass>();
    raytracingpass->InitializeRaytrace(dynamic_pointer_cast<GPUDevice>(Self()));

    lastposteffectspass = make_shared<RenderPass>();
    lastposteffectspass->InitializeLastPostProcess(dynamic_pointer_cast<GPUDevice>(Self()), depthformat, 1, false);

    lastcameralastposteffectspass = make_shared<RenderPass>();
    lastcameralastposteffectspass->InitializeLastPostProcess(dynamic_pointer_cast<GPUDevice>(Self()), depthformat, 1, true);

    {
        std::vector<VkFormat> colorformats = { __FramebufferColorFormat, __FramebufferColorFormat, VK_FORMAT_R8G8B8A8_SNORM, VK_FORMAT_R32_SFLOAT };
        for (int earlyZPass = 0; earlyZPass < 2; ++earlyZPass)
        {
            for (int clearflags = 0; clearflags < 4; ++clearflags)
            {
                renderpass[clearflags][earlyZPass] = make_shared<RenderPass>();
                renderpass[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 1, false, false, false, clearflags, 1, earlyZPass);

                renderpassRGBA16[clearflags][earlyZPass] = make_shared<RenderPass>();
                renderpassRGBA16[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), colorformats, depthformat, 4, false, false, false, clearflags, 1, earlyZPass);

                firstrenderpass[clearflags][earlyZPass] = make_shared<RenderPass>();
                firstrenderpass[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 1, false, true, false, clearflags, 1, earlyZPass);

                lastrenderpass[clearflags][earlyZPass] = make_shared<RenderPass>();
                lastrenderpass[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 1, false, false, true, clearflags, 1, earlyZPass);

                //for (int d = 0; d < 2; ++d)
                {
                    for (int n = 0; n < 5; ++n)
                    {
                        if (n == 2 or n == 3) continue;
                        rendertotexturepass[clearflags][n][earlyZPass] = make_shared<RenderPass>();
                        rendertotexturepass[clearflags][n][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), colorformats, depthformat, n, true, false, false, clearflags, 1, earlyZPass);

                        firstrendertotexturepass[clearflags][n][earlyZPass] = make_shared<RenderPass>();
                        firstrendertotexturepass[clearflags][n][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), colorformats, depthformat, n, true, true, false, clearflags, 1, earlyZPass);

                        // lastrendertotexturepass[clearflags][n] = make_shared<RenderPass>();
                        // lastrendertotexturepass[clearflags][n]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), depthformat, n, true, false, true, clearflags);
                    }
                }
            }
        }
    }

    With dynamic rendering, you still have to fill in most of the same information, but you can just do it based on whatever the current state of things is, instead of looking for an object that hopefully matches the exact settings you want:
    VkRenderingInfoKHR renderinfo = {};
    renderinfo.sType = VK_STRUCTURE_TYPE_RENDERING_INFO_KHR;
    renderinfo.renderArea = scissor;
    renderinfo.layerCount = 1;
    renderinfo.viewMask = 0;
    renderinfo.colorAttachmentCount = 1;
    targetbuffer->colorAttachmentInfo[0].imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[0] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[1] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[2] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[3] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].imageView = targetbuffer->imageviews[0];
    renderinfo.pColorAttachments = targetbuffer->colorAttachmentInfo.data();
    targetbuffer->depthAttachmentInfo.clearValue.depthStencil.depth = 1.0f;
    targetbuffer->depthAttachmentInfo.clearValue.depthStencil.stencil = 0;
    targetbuffer->depthAttachmentInfo.imageLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL;
    renderinfo.pDepthAttachment = &targetbuffer->depthAttachmentInfo;
    device->vkCmdBeginRenderingKHR(cb->commandbuffer, &renderinfo);

    Then there is the way render passes affect the image layout state. With the TransitionImageLayout command, it is fairly easy to track the current state of the image layout, but render passes automatically switch the image layout to a predefined state after completion. Again, not impossible to handle in and of itself, but when you add these things into the complexity of designing a full engine, things start to get ugly.
    void GPUCommandBuffer::EndRenderPass()
    {
        vkCmdEndRenderPass(commandbuffer);
        for (int k = 0; k < currentrenderpass->layers; ++k)
        {
            for (int n = 0; n < currentrenderpass->colorcomponents; ++n)
            {
                if (currentdrawbuffer->colortexture[n]) currentdrawbuffer->colortexture[n]->imagelayout[0][currentdrawbuffer->baseface + k] = currentrenderpass->colorAttachment[n].finalLayout;
            }
            if (currentdrawbuffer->depthtexture != NULL and currentrenderpass->depthcomponent == true) currentdrawbuffer->depthtexture->imagelayout[0][currentdrawbuffer->baseface + k] = currentrenderpass->depthAttachment.finalLayout;
        }
        currentdrawbuffer = NULL;
        currentrenderpass = NULL;
    }

    Another example where this was causing problems was with user-defined texture buffers. One beta tester wanted to implement some interesting effects that required rendering to HDR color textures, but the system was so static it couldn't handle a user-defined color format in a texture buffer. Again, this is not impossible to overcome, but the practical outcome is I just didn't have enough time, because resources are finite.
    It's interesting that this extension also removes the need to create a Vulkan framebuffer object. I guess that means you can just start rendering to any combination of textures you want, as long as they use a format the hardware can render to. Vulkan certainly changes a lot of the conceptions we had from OpenGL.
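    For reference, here is a minimal sketch (not the engine's code) of how graphics pipeline creation changes under VK_KHR_dynamic_rendering: the attachment formats are chained into the pipeline create info, and the render pass handle is simply null. The formats shown are assumptions for illustration.

    VkFormat colorformat = VK_FORMAT_R8G8B8A8_UNORM; // assumed framebuffer color format

    VkPipelineRenderingCreateInfoKHR renderinginfo = {};
    renderinginfo.sType = VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO_KHR;
    renderinginfo.colorAttachmentCount = 1;
    renderinginfo.pColorAttachmentFormats = &colorformat;
    renderinginfo.depthAttachmentFormat = VK_FORMAT_D32_SFLOAT; // assumed depth format

    VkGraphicsPipelineCreateInfo pipelineinfo = {};
    pipelineinfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    pipelineinfo.pNext = &renderinginfo;
    pipelineinfo.renderPass = VK_NULL_HANDLE; // no renderpass or framebuffer object needed
    // ...fill in shader stages and fixed-function state as usual...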
    So this extension does eliminate a significant source of problems for me, and I am happy it was implemented.
  3. Josh
    I'm finalizing the shaders, and I was able to pack a lot of extra data into a single entity 4x4 matrix:
    Orthogonal 4x4 matrix
    RGBA color
    Per-entity texture mapping offset (U/V), scale (U/V), and rotation
    Bitwise entity flags for various settings
    Linear and rotational velocity (for motion blur)
    Skeleton ID

    All of that info can fit into just 64 bytes. The shaders now use a lot of snazzy new functions like this:
    void ExtractEntityInfo(in uint id, out mat4 mat, out vec4 color, out int skeletonID, out uint flags)
    mat4 ExtractCameraProjectionMatrix(in uint cameraID, in int eye)

    So all the storage / decompression routines are in one place instead of hard-coding a lot of matrix offsets in different shaders.
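    As a rough illustration of the packing idea (a sketch, not the engine's actual layout): a rigid 4x4 transform only needs 12 of its 16 floats, so the four w components are free to carry extra data, such as an RGBA color packed bitwise into a single float slot.

    #include <cstdint>
    #include <cstring>

    struct EntityData
    {
        float mat[16]; // column-major 4x4 matrix; w components (indices 3, 7, 11, 15) are normally (0,0,0,1)
    };

    // Pack an RGBA color (one byte per channel) into the first unused w slot.
    inline void PackColor(EntityData& e, uint8_t r, uint8_t g, uint8_t b, uint8_t a)
    {
        uint32_t bits = (uint32_t(a) << 24) | (uint32_t(b) << 16) | (uint32_t(g) << 8) | uint32_t(r);
        std::memcpy(&e.mat[3], &bits, sizeof(bits)); // reinterpret the bytes as a float slot
    }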
    A per-entity linear and angular velocity is being calculated and sent to the rendering thread. This is not related to the entity velocity in the physics simulation, although they will most often be the same. Rather, this is just a measure of the distance the entity moved since the last world render, so it will work with any motion, physics or otherwise.
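    Here is a minimal sketch of that measurement, with illustrative names (not the engine's API): the render velocity is just the displacement since the last world render divided by the frame time, which is why it works for any motion, physics-driven or otherwise.

    struct Vec3
    {
        float x, y, z;
        Vec3 operator-(const Vec3& b) const { return { x - b.x, y - b.y, z - b.z }; }
        Vec3 operator/(float s) const { return { x / s, y / s, z / s }; }
    };

    struct EntityMotion
    {
        Vec3 position{}, prevposition{};
        Vec3 velocity{};

        // Called once per world render; measures movement since the last render.
        void Update(float frametime)
        {
            velocity = (position - prevposition) / frametime;
            prevposition = position;
        }
    };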
    In the video below, the linear velocity is being added to the camera position to help predict the region of the GI lighting area, so the camera stays roughly in the center as the GI updates. A single 128x128x128 volume texture is being used here, with a voxel size of 0.25 meters. In the final version, you probably want to use 3-4 stages, maybe using a 64x64x64 texture for each stage.
    Performance of our voxel cone step tracing is quite good, with FPS in the mid-hundreds on a mid-range notebook GPU. The last big question is how to handle dynamic objects, particularly fast-moving dynamic objects. I have some ideas, but I'm not sure how hard this is going to be. My first attempt was slow and ugly. You can probably guess which dragon is static and which is dynamic:

    It is a difficult problem, and as I finish up this feature I would like the solution to be as cool as the rest of the system. Also, I think you will probably still want to use SSAO so dynamic objects blend into the GI lighting better.
    Everyone should have access to fast real-time global illumination for games.
  4. Josh
    It's 12:30 in the morning, but I have the model scripts working the way I want. Thanks to "Nilium" for his tips, and to everyone who gave their feedback. I am confident in this revision. I have a lot of scripts that need to be slightly altered, but it's just a matter of adjusting the function structure, not really changing any existing code.
     
    Here's the light_ambient script:

    require("scripts/class") require("scripts/linkedlist") local class=CreateClass(...) function class:Spawn(model) local object=self.super:Spawn(model) function object:Update() AmbientLight(self.model.color) end function object:SetKey(key,value) if key=="color" then local returnvalue=self.super:SetKey(key,value) self:Update() return returnvalue elseif key=="intensity" then local returnvalue=self.super:SetKey(key,value) self:Update() return returnvalue else return self.super:SetKey(key,value) end return 1 end function object:Kill(model) local model,object --Iterate through all instances of this class. --If any instances are found, use them to set --the ambient light value. for model,object in pairs(self.class.instances) do if object~=self then object:Update() break end end self.super:Kill() end object.model:SetKey("intensity","0.5") --Call the update function before finishing. --This can only be called after the function is declared. --We don't actually need to call it here since setting the intensity key --will do that for us, but it's still a good idea as a general rule. object:Update() end function class:Cleanup() --Restore default ambient light setting. AmbientLight(Vec3(0.5,0.5,0.5)) self.super:Cleanup() end
  5. Josh
    The last two weeks were mostly me wrestling with C++. I had promising results with a meta language that translates to C++ and compiles, and also learned how to create makefiles as a result of that. I tried CodeLite and CodeBlocks, in addition to MS Visual Studio. I finally settled on CodeBlocks.
     
    Codeblocks can be configured pretty easily to use GCC, the MS C++ compiler that Visual Studio uses, and about a dozen others (I am very curious to see if LE3 compiled with the Intel compiler is faster!). I prefer GCC with no optimizations for development, because it builds about twice as fast as the MS C++ compiler. Although I find Scintilla to be slow and somewhat poorly written, it's nice to have a cross-platform code editor. I can also configure Codeblocks to stop on the first error (it's actually a GCC compiler setting), and it automatically opens the file and selects the line the first error occurs on (CodeLite does not). Because I realistically expect to make thousands of coding errors during the production of LE3, it makes sense to streamline this highly repetitive task any way possible, and for that reason Codeblocks wins.
     
    The meta language made a promising start, and I may offer it as a product in the future, but it will not be used to develop Leadwerks 3.
     
    I've started on the OpenGL4 renderer. First I created the base renderer class and established what the commands would actually do. Next I am going to create a window in Windows, set the pixel format, and initialize an OpenGL context. I've done all of these tasks from scratch in BlitzMax, but it's been a long time since I dealt with any of that.
     
    One of the most confusing parts for me was pointers versus references. In C++, pointers act pretty much like an object in Blitz3D: when you declare the variable it's null, you create a new object and assign the variable to it, and you delete the object manually when you are done with it. But if you declare a variable of a class type, the object is created instantly and can never be Null; it just exists. When the variable goes out of scope, it gets deleted/destructed automatically. This was pretty confusing until I realized that these objects are just like a float or integer variable. You don't worry about cleaning it up, because it only exists within the scope of the function it's declared in. You can never set an integer to Null, because it just exists; it doesn't get deleted. In the same way, a Vec3 variable doesn't need to be cleaned up or deleted; it just exists within the scope it is declared in. Therefore, it makes perfect sense for math classes like Vec3, Vec4, Mat4, etc. to always be used as regular variables, and for complex objects like entities, materials, etc. to always be declared as pointers.
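    Here is a small example of the two patterns described above, using illustrative types:

    struct Vec3 { float x, y, z; }; // plain value type: lives in its scope, no cleanup needed

    class Entity { /* ... */ };     // complex object: always handled through a pointer

    void Example()
    {
        Vec3 position = { 0.0f, 1.0f, 2.0f }; // exists immediately, like an int
        Entity* entity = nullptr;             // null until created
        entity = new Entity;
        delete entity;                        // deleted manually when you are done
    } // 'position' goes out of scope here and is cleaned up automatically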
     
    I had another look at the Android SDK, and decided Android game development would come in the form of a static lib for use with Java (or pure C++, if you prefer and can set it up). I'm cautious about the iPhone, but because Leadwerks takes a code-centric approach to development, I don't think we will have much trouble staying in compliance with the Apple EULA. I sort of understand Apple's strictness. I love my HTC Evo, but there's so much on it that just doesn't work. The app store is full of horrible crappy games. The App Installer even crashes on startup. iPhone gets Doom 3 by id Software, and Android gets 10 versions of Doom classic compiled by some kid, and none of them work. If Apple can deliver a good and consistent user experience, then I don't really have any problem with the restrictions they place on developers.
     
    You can now add your SteamID to your Werkspace profile and display your Steam status. It may help a few people to join up for some fun multiplayer challenges, but I am mostly interested in learning about the multiplayer features Valve has implemented in Steam. Join the Leadwerks Developer Steam Group and be sure to keep a couple games in your wishlist. Thanks to HurrBeDragns for reserving the URL for me!
     
    I want strong multiplayer support in LE3, and I like some of the things Valve has done. In LE3, all entities can be saved to a binary file format. We have learned how troublesome serialization can be, so we won't be using that, but this binary file format can be used for prefabs, and even for syncing entities over the network. I've already had success specifying that an entity should be synced over a network. It's really nice to have everything just work automatically. I'm in favor of just letting the server handle all interaction, with no client-side prediction, because: 1. client-side prediction is inaccurate and wrong; 2. it was originally designed for 56k modems; and 3. OnLive has proven that player response lag can be minimal even when streaming graphics in real-time. The design I used in the multiplayer tests I conducted a few months ago was certainly good, and the results were responsive enough as long as the players were on the same continent.
     
    The platforms I am most interested in are Windows and MacOS, now that the new iMacs are shipping with decent graphics cards. The GPUs the new iMacs come with range from an ATI Radeon 4670 up to a Radeon 5750. By following the ATI/NVidia rule and dividing the number of ATI stream processors by 5, we can deduce these cards to be the equivalent of roughly an NVidia GeForce 8600 up to a bit better than a GeForce 9800 GTX. This is definitely within the range of Leadwerks hardware requirements, so I look forward to developing for MacOS in the near future. And it's a good excuse for me to get a 27" iMac. I did not like the direction Microsoft went with Windows 7, and the new OS has finally put me in a place where I am willing to try something new. That never would have been the case in the days of Windows XP. Just so you know I am not turning into a total Apple fanboy, it is funny they make a big deal out of having a separate graphics processor...I had a 3DFX card in 1997, and they are only now catching on to the idea of dedicated GPUs!
     
    That's pretty much it for now. Stop on by the Asset Store and grab the Industrial Cargo Model Pack for only $9.95 before the price goes up tomorrow.
  6. Josh
    There sure is a lot to do around here!
     
    -Character animation example. I want to get the soldier model up and running with basic animations so everyone can use it.
    -Network improvements. I want to automate entity movement and some other features.
    -Scene graph improvements and vegetation collision.
    -Website and forum integration.
    -Revamping some of the tutorials.
     
    Watch the character movement in this video. I want to set up something simple like that so we can run around and have fun:


     
    Also need to get a new public demo out, but I won't release one until ATI fixes their cubemap bug! Apparently, the report has gone to the highest levels of AMD, and it is in the process of being fixed.
  7. Josh
    The networking code is pretty much done, but I don't think it's quite ready to release. We're redesigning the site to integrate the forum and community features together with the whole site, and an official documentation section is being created. In the meantime, I am going to start the model editor. This will be an app to replace the "Model Viewer" which will allow viewing and resaving of .gmf files. It will also include some editing features so you can adjust some of your meshes that might not have been converted or exported successfully. I'm planning for the app to be both integrated into the main editor, and compiled as a standalone app, the same way the script editor is. I'm also kicking around an idea for a GUI system.
  8. Josh
    I dabbled in a lot of little things recently, like GUI code and networking, just to make sure things would work the way I want in the long run. The lighting optimizations weren't planned to come out yet, but were moved to the front due to one user who is finishing up a commercial game. I've also been helping Pure3D a little bit, but I can't say anything about that right now. Of course, our website is still in progress, and that will hopefully come together by the end of the month.
     
    We'll have some cool stuff for you guys at the GDC. If you happen to be going, speak up now!
     
    There are two big things I want to complete in the next two months. They are:
     

    Art pipeline enhancements. The plan is to have a model viewer and editor integrated into the main editor. There's lots of little things like normals, binormal/tangents, physics, and the mesh hierarchy that can be difficult to export properly. This tool will give the user more control over their assets, and make it easier to import content from a variety of sources.
     
    Vegetation system improvements. Some parts of the vegetation rendering routines will be rewritten for better speed with large numbers of layers. I'll also add collision, while I am in there.
  9. Josh
    I decided I want the voxel GI system to render direct lighting on the graphics card, so in order to make that happen I need working lights and shadows in the new renderer. Tomorrow I am going to start my implementation of clustered forward rendering to replace the deferred renderer in the next game engine. This works by dividing the camera frustum up into sectors, as shown below.

    A list of visible lights for each cell is sent to the GPU. If you think about it, this is really another voxel algorithm. The whole idea of voxels is that it costs too much processing power to calculate something expensive for each pixel, so let's calculate it for a 3D grid of volumes and then grab those settings for each pixel inside the volume. In the case of real-time global illumination, we also do a linear blend between the values based on the pixel position.
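    Here is a rough sketch of the cell lookup, with illustrative parameters (not engine code): the screen is divided into tiles, the depth range into slices, and each pixel indexes the light list of the cell it falls in.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Compute the flattened cluster index for a pixel at (px, py) with view-space depth.
    uint32_t GetClusterIndex(float px, float py, float depth,
                             uint32_t screenw, uint32_t screenh,
                             uint32_t tilesx, uint32_t tilesy, uint32_t slices,
                             float nearz, float farz)
    {
        uint32_t x = uint32_t(px / screenw * tilesx);
        uint32_t y = uint32_t(py / screenh * tilesy);
        // Logarithmic depth slicing keeps the cells roughly even after the frustum skew.
        float f = std::log(depth / nearz) / std::log(farz / nearz);
        uint32_t z = uint32_t(std::clamp(f, 0.0f, 1.0f) * (slices - 1));
        return (z * tilesy + y) * tilesx + x;
    }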
    Here's a diagram of a spherical point light lying on the frustum.

    But if we skew the frustum so that the lines are all perpendicular, we can see this is actually a voxel problem, and it's the light that is warped in a funny way, not the frustum. I couldn't figure out how to warp the sphere exactly right, but it's something like this.

    For each pixel that is rendered, you transform it to the perpendicular grid above and perform lighting using only the lights that are present in that cell. This technique seems like a no-brainer, but it would not have been possible when our deferred renderer first came to be. GPUs were not nearly as flexible back then as they are now, and things like a variable-length for loop would be a big no-no.

    Well, something else interesting occurred to me while I was going over this. The new engine is an ambitious project, with a brand new editor to be built from scratch. That's going to take a lot of time. There's a lot of interest in the features I am working on now, and I would like to get them out sooner rather than later. It might be possible to incorporate the clustered forward renderer and voxel GI into Leadwerks Game Engine 4 (at which point I would probably call it 5) but keep the old engine architecture. This would give Leadwerks a big performance boost (not as big as the new architecture, but still probably 2-3x in some situations). The visuals would also make a giant leap forward into the future. And it might even be possible to release in time for Christmas. All the shaders would have to be modified, but if you just updated your project everything would run in the new Leadwerks Game Engine 5 without any problem. This would need to be a paid update, probably with a new app ID on Steam. The current Workshop contents would not be accessible from the new app ID, but we have the Marketplace for that.
    This would also have the benefit of bringing the editor up to date with the new rendering methods, which would mean the existing editor could be used seamlessly with the new engine. We presently can't do this because the new engine and Leadwerks 4 use completely different shaders.
    This could solve a lot of problems and give us a much smoother transition from here to where we want to go in the future:
    Leadwerks Game Engine 4 (deferred rendering, existing editor) [Now]
    Leadwerks Game Engine 5 (clustered forward rendering, real-time GI, PBR materials, existing architecture, existing editor) [Christmas 2018]
    Turbo Game Engine (clustered forward rendering, new architecture, new editor) [April 2020]

    I just thought of this a couple hours ago, so I can't say right now for sure if we will go this route, but we will see. No matter what, I want to get a version 4.6 out first with a few features and fixes.
    You can read more about clustered forward rendering in this article.
  10. Josh
    There were numerous small issues related to the 2.32 release, but I am chipping away at them:
    http://leadwerks.com/werkspace/index.php?app=tracker&showproject=1&catfilter=2
     
    I'm not going to add any new features until every bug is fixed.
     
    Remember that the more complete and easy to reproduce a report is, the faster it will be to fix. Thanks for the feedback!
  11. Josh
    The built-in level design tools in Leadwerks3D are great for quickly sketching out a game level. Most of this remains unchanged from the design of 3D World Studio, with a few extras like smooth groups and new primitive types:

     
    When a point entity object type is selected in the side panel, the object creation widget changes to a simple cross hair:

     
    The selection box with tabs that CSG brushes use is great for quickly moving, scaling, rotating, and shearing brushes, but it isn't that great for point entities. That's why I'm implementing a different control style for selected point entities. A 3D widget like 3DS Max uses will appear in the 3D and 2D viewports:

     
    To give the user more control, they can choose between global and local coordinate systems to move and rotate entities:

     
    We think this will provide the best combination of fast level editing with CSG brushes, and quick object manipulation for other types of objects.
     
    --EDIT--
     
    Here's an updated view of the 3D control widget. Of all the implementations of this I have seen, I like the Crysis Editor's the best. The next step will be to make the widget change appearance when the mouse is hovered over it. As has been requested in the past, the widget stays on top of everything drawn, and always scales to appear the same size at any distance.

  12. Josh
    Two issues in the art pipeline were giving me some doubt. I stopped working on them a couple weeks ago, and I'm glad I did because the solutions became clear to me.
     

    Shader Files
    First, there is the matter of shaders. A shader can actually consist of half a dozen files:

    bumpmapped.opengl3.vert
    bumpmapped.opengl3.frag
    bumpmapped.opengl4.vert
    bumpmapped.opengl4.frag
    bumpmapped.opengles.vert
    bumpmapped.opengles.frag
     
    I originally thought the .shd extension would be good for shaders, in keeping with our extension naming scheme, but there's actually no information an .shd file even needs to contain! I also considered creating a simple format that would pack all the shader strings into a single file, and display the different ones for whatever graphics driver was in use at the time, but that approach reeked of future confusion. I'd like to still be able to open .vert and .frag files in Notepad++ or another text editor.
     
    I came up with the idea to use a .shd file that contains all these shader files, but instead of packing them into a file format, the .shd file will just be a renamed .pak. That way you can easily extract the original text files, but shaders can be distributed and loaded as a single file.
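    A sketch of what loading from such a file might look like, using a hypothetical Package class for the pak reading (ReadTextFile is an assumed helper, not a real API):

    #include <string>

    // Hypothetical pak reader; the .shd file is just a renamed .pak archive.
    class Package
    {
    public:
        std::string ReadTextFile(const std::string& path) { return std::string(); /* stub */ }
    };

    // Pick the entry matching the active graphics driver and shader stage,
    // e.g. "bumpmapped.opengl4.frag" from inside "bumpmapped.shd".
    std::string LoadShaderSource(Package& shd, const std::string& name, const std::string& driver, const std::string& stage)
    {
        return shd.ReadTextFile(name + "." + driver + "." + stage);
    }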
     

    Material Textures
    I've been working with a system that uses texture names defined in the shader file. It works and it's really cool, but I think I have something even cooler. Instead of requiring a shader to define texture unit names, it would be less confusing to just decide on a fixed naming scheme and stick with it, something like this:

    texture0 - Diffuse
    texture1 - Normal
    texture2 - Specular
    texture3 - Reflection
    texture4 - Emission
    texture5 - Height
     
    And in C++ you would have constants like this:

    #define TEXTURE_DIFFUSE 0
    #define TEXTURE_NORMAL 1
    ...
    If you want to create a bumpmapped material, you can simply assign textures 0 and 1, and the material will choose a shader with the assumption those are supposed to be the diffuse and normal map. If the current surface being rendered contains bone weights, an animated version of the shader will be used. If a shader is explicitly defined, that of course will override the material's automatic selection. This way, all you have to do is drag a few textures onto a material, and 95% of the time no shader will even have to be explicitly defined. This is similar to the materials system in 3ds max.
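    A sketch of that selection logic, with illustrative names (not the engine's actual code):

    #include <string>

    enum { TEXTURE_DIFFUSE = 0, TEXTURE_NORMAL = 1, TEXTURE_MAX = 6 };

    struct Material
    {
        void* texture[TEXTURE_MAX] = {}; // assigned texture slots, fixed meaning per index
    };

    // Choose a shader from the textures that have been assigned; an explicitly
    // defined shader would override this choice.
    std::string PickShader(const Material& material, bool animated)
    {
        std::string name = "diffuse";
        if (material.texture[TEXTURE_NORMAL]) name = "bumpmapped";
        if (animated) name += "_skinned"; // surface has bone weights
        return name;
    }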
     
    The programmer in me screams that this makes no sense, since texture units are completely arbitrary, but from a usage standpoint it makes sense, since the exceptions to this paradigm probably make up less than 5% of the materials in any scene.
     
    Now for custom shaders, you might end up with a situation where you are dragging a texture into the "reflection" slot, when it's really going to be used to indicate team colors or something, but it's still far simpler to say "set the reflection texture with...". Everyone knows what you mean, textures can be set in code without having to look up the slot number in the shader, and there's no ambiguity.
     
    Anyways, that's just two small design problems overcome that I thought I would share with you. I think the texture issue really demonstrates the difference between engineering and product design.
     

    http://www.youtube.com/watch?v=5U6Hzjz5220
  13. Josh
    The fantastic Fatcow icon set was recommended to me in the comments in a previous blog post. These are very similar to the Silk icon set, but come in 32x32 and 16x16 versions, have more icons, and better coloration. The Silk icon set is sort of a desaturated pastel tone I don't like, and Fat Cow's icons are more saturated and bright. I had actually considered running the Silk icons through a color-correction algorithm, but Fat Cow has already done the work for me.
     
    I knew from the start I wanted our editor to have lots of colorful toyish-looking icons. I love Valve's editing utilities as well as Nem's Tools, even though they are both rather dated. What I miss about these programs is that they embedded 3D tools into the standard Windows UI, so it was easier to focus on the task at hand instead of being distracted by a flashy custom UI you had never seen before. I've always considered homemade GUIs to be amateurish, and even if one looks cool at first, I quickly get tired of it. My goal is to capture the utility and ease of use of programs like this and 3D World Studio, while adding more modern features to make it more friendly to artists.
     
    The editor is starting to take on its own unique character:

     

    Designing the Interface
    What you see here is the result of trying a lot of different approaches to determine the best way to design the layout of a modern 3D editor.
    The asset browser is built into the main window, but can be resized or hidden by clicking the window divider. The user will frequently be accessing the asset browser, and having it in the main window is better than having it in a separate window, and constantly having to move it around to get it out of the way. Additionally, I felt that having child windows open from a child window (windowed asset browser) was too confusing...more on that below.
     
    The default layout of the asset browser is a vertical split with the treeview on the far right. After using a horizontal split for a while I felt really cramped for space. The vertical layout gives you a better distribution of space and plenty of room to view thumbnails.
     
    Originally, I had the asset files themselves in the tree with small icons, but I found that 16x16 was too small for a meaningful preview of each file. I tried making the tree view icons bigger, but it wasted a lot of space. Folder icons looked ridiculous at 32x32 and above, and took up more room than they needed.
     
    I tried using generic 16x16 "texture", "material", and "model" icons in the treeview, but found it was much easier to identify files by thumbnail rather than reading their name.
     
    Finally the decision was made to keep folders in the tree view, with small icons, and have a separate thumbnail browser with bigger icons. The user can adjust the thumbnail icon size, from 32 to 512. I also liked this approach because it is similar to the texture browser in 3D World Studio.
     
    The folder treeview is on the very right-hand side by default because the white background would cause the dark 2D and 3D viewports to look even darker. Your eye actually will perceive a darker color in an area of high contrast, so keeping the mostly-white treeview off to the side will make your scenes more easily visible.
     
    When you double-click on an asset file, a window for viewing and editing the asset is opened. I considered building a preview window and controls into the main window, but it didn't fit the workflow. Sometimes you will want to drag asset files into an asset editor. For example, you can drag a texture into the material editor to assign a texture to the material. In order for this to work, you have to be able to open the material in the material editor, then navigate the asset browser to where the texture resides, then click and drag that file into the material editor. This wouldn't work with a single preview window.
     
    There's also a good case for having multiple asset editors open at the same time. You might be working on a shader and want to see its effect on a material in real-time. In that case, you would keep both assets open in their editors while working.
     
    I also considered making the asset editors a separate application the editor opened, but didn't for two reasons. First, drag and drop operations would probably be harder to code, especially across multiple platforms. Second, and most importantly, I wanted the asset editor windows to always appear on top of the main window, and that wouldn't happen if they were external applications.
     
    I also like having the asset editors in their own windows because at any time you can maximize the window and get a nice big fullscreen view of the model, texture, or material you are looking at. This is naturally useful for scripts and shaders as well, where you are editing code and need lots of space.
     
    The font editor is shown, and supports both .ttf and bitmap fonts. I implemented my own .fnt format (I think this is an old Windows 3.0 file extension, but it's mine now) and a .ttf to .fnt converter. As with all the Leadwerks 3 converters, the editor automatically handles the conversion for you, so you just drop your .ttf files into your project folder and you're done. I don't know of any cross-platform font maker programs, but I will publish the specification for our font format so they can add exporters. It always seems to work best when we just make our own optimal file formats and publish the spec for everyone to follow.
     

    Bringing Back Level Design
    As I approach our CSG brush implementation, I find myself growing quite excited over something that was originally an annoyance I did not want to deal with: the users insisted we merge 3D World Studio with Leadwerks 3. When Leadwerks Engine 2 was designed, I thought CSG was obsolete once dynamic lighting became possible and we no longer needed special geometry for optimal lightmapping. However, with the loss of CSG I think we lost a reference point in our virtual worlds. It was easy to drag out a room in a few minutes for playtesting. CSG is also fantastic for buildings, and can produce results in a fraction of the time it would take with 3ds max or another polygonal modeler.
    I'm very eager to see what CSG will look like in a modern implementation, especially if I can work hardware tessellation into it. CSG is an area of expertise for Leadwerks as a company, and I am glad to be drawing on that knowledge to bring something new and different to game design tools. I'm looking forward to a return of the lost art of level design, with a modern flair.
     
    Finally, here's a song for you that can be vaguely connected to the subject of this blog, from Alice in Chains' album "Facelift". (See what I did there?)

    http://www.youtube.com/watch?v=QXPA44Ebg_o
  14. Josh
    To make an entity in Leadwerks3D behave, you can drag a script from the asset browser to the actors list in the entity properties editor. Each entity can have multiple scripts attached to it. Each script can have different variables exposed, with control over the GUI they are displayed in. For example, you can specify the upper and lower limits of a spinner, or a list of choices for a drop-down box. Each script attached to an entity, along with its set of properties, is called an "actor" because...it acts.
     
    To handle the interaction of different actors, a flowgraph is used. You can drag actors from the entity properties editor into the flowgraph editor. Each actor has inputs and outputs defined in its script. You can connect the output of one actor to the input of another. For example, you can connect the "tick" output of a timer script to the "Resume" input of a mover script, to make the mover translate and rotate an entity at regular intervals when the timer ticks:


     
    I hope this makes it more clear how entity interactions can be handled in Leadwerks3D, even without programming. With this system, we're all working on one big game together, and we can share functional objects without any difficulty.
  15. Josh
    It's pretty amazing how long the materials, textures, and shader system is taking, but it encompasses a lot of areas: Automatic asset reloading, autogenerated shaders, GLSL syntax highlighting, texture conversion options, and more. All this attention to detail is shaping up to create what I hope will be the best art pipeline, ever.
     
    The Asset Browser displays all files in your project directory. You don't have to use Windows Explorer at all anymore. You can rename, move, copy, cut, paste, and delete files, without ever leaving the editor:

     
    All recognized asset files are displayed in the editor, including materials, textures, shaders, sounds, scripts, models, and fonts:

     
    You can edit textures, materials, and shaders, and see how your changes interact in real-time:

  16. Josh
    It's November 1, and this is the longest summer I can remember in a while. Jokes about the seasons aside, last week I had trouble deciding what to do first, so I decided to attack the remaining tasks I was most scared of. This has been my strategy throughout the entire process, and it results in my work becoming progressively easier as we near the finish line.
     
    I finished one big task today that was holding us back. We need a file system watcher for Mac computers to detect changes to the file system. That way when files are created, deleted, modified, or renamed, the editor can detect the change and automatically reconvert and reload assets. This feature allows you to keep Photoshop open and work on a texture, while Leadwerks will display your changes instantly every time you save the file.
     
    I started by asking a few questions on the Mac developer forum on Apple's website. I found one method of doing this with an event stream, but that wasn't recursive. Finally I built some code off the Objective-C example here:
    https://developer.ap...nts/_index.html
     
    Objective-C is frightening (perfect for Halloween!), but after this experience I feel slightly less afraid of it. Once I got the code working, I found that Mac FSEventStreams only give you a folder path; they don't tell you which exact file changed, or whether it was created, deleted, renamed, or modified. Going back to the editor side of things, I added some code that reads the directory structure at startup and stores the file time for each file. Some clever code analyzes a folder when an event occurs, and is then able to emit events based on whether a file was changed, created, deleted, etc.
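    Here is a rough sketch of that folder analysis (illustrative, not the editor's actual code): since FSEvents only reports a folder path, compare stored modification times against the folder's current contents and emit the appropriate event for each difference. The EmitEvent hook is hypothetical.

    #include <filesystem>
    #include <map>
    #include <string>

    namespace fs = std::filesystem;

    std::map<std::string, fs::file_time_type> filetimes; // built at startup

    void AnalyzeFolder(const fs::path& folder)
    {
        for (const auto& entry : fs::directory_iterator(folder))
        {
            const std::string path = entry.path().string();
            const auto mtime = entry.last_write_time();
            auto it = filetimes.find(path);
            if (it == filetimes.end())
            {
                // EmitEvent(FILE_CREATED, path); // hypothetical event hook
            }
            else if (it->second != mtime)
            {
                // EmitEvent(FILE_MODIFIED, path);
            }
            filetimes[path] = mtime;
        }
        // Entries left in 'filetimes' that no longer exist on disk were deleted (omitted here).
    }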
     
    So what's left to do? Well, here's my exact list:


    Lua Interpreter project for iOS and Android
    Improve flowgraph visual style
    Documentation
    Recursive rendering (like LE2 uses)
    Restructure asset class (I did something stupid here that can be improved)
    Brush rendering batches (to make editor rendering faster)
    Finish skinning in OpenGL ES renderer (copy and paste, mostly)
    Undo system

    And what we finished recently:
    Move project over to VS 2010 (Chris)
    FileSystemWatcher for Mac (today)

    I'm heading down to the hacker lab tonight to talk to the co-founders there. Catch me in Left 4 Dead 2 past about 9 P.M. PST if you want to kill zombies for a round.
  17. Josh
    We decided to make one last minute change to the material editor. The texture list is a bit easier to work with now:

     
    Looking pretty nice on Mac, too:

     
    We're also working on our character models for Darkness Awaits, as well as a super secret special surprise...
  18. Josh
    After a much-needed vacation I'm back, and ready to rock.

    IGDA Summit
    Everything at the IGDA Summit was mobile, mobile, mobile. Based on this kind of behavior in the industry, you'll have to forgive me for thinking at one time that mobile was the next big thing. There are two problems. Perhaps half the companies there were support services to help you monetize your app. I haven't heard the word "monetize" so much since the height of the Social boom, when people had huge numbers of free users and were looking for a way to convert them into paying customers (which didn't happen). I take it as a bad sign. If you are a corn farmer, and a bunch of door-to-door salesmen come to your farm offering to help you monetize your corn selling business, you probably have a problem.
    At the same time, all the indie developers unanimously said they were having trouble making money, and that the market was very crowded. The general consensus was that it wasn't enough just to make a good mobile game. You had to have a way to promote it. One developer told me his marketing plan was to create a Facebook page. I guess he had never gone that far before, and was confident it would be the answer. Another guy complained he couldn't even give away his totally free game, with no ads or in-app purchases.

     
    So to me right now mobile looks like one of those things where a bunch of companies operate at a loss for a few years in hopes they will get acquired by EA, and then 90% of them disappear. Does not feel like a promising place if you are actually trying to provide goods to customers in exchange for money. (Of course there will still be some major successes, but those seem like the exception.)
     
    I was the only Linux developer at the event, as far as I know. Based on the strength of our recent Kickstarter campaign, Linux is arguably our lead development platform now. Which I would not have guessed a year ago.

    Leadwerks 3.1
    Development of Leadwerks 3.1 for Linux and Steam begins in earnest now. I'm focusing on graphics and terrain first, since those tasks cannot be done by anyone but me. Linux and Steam integration can be performed by the other devs, and I will just contribute to those depending on how much time I have. I made a lot of progress on terrain recently, and my goal is to have the terrain editor implemented by the end of the week. This will be included in the 3.0 version and rolled out when it is available. 
    I will also be attending to any outstanding bug reports. Having the current version 3.0 in use before the graphics upgrade in 3.1 is a great way to evaluate the editor and make sure everything is solid.
     
    So I am looking forward to the development of 3.1 over the next few months. After all the recent excitement, some quiet coding time sounds great to me.
  19. Josh
    Since I am working with the Blender import pipeline and also getting artists onboard the Steam Workshop, it makes sense to work out any little details in the art pipeline that can be improved. One of these is the texture import process.
     
    Leadwerks 3.0 was designed to function across PC and mobile, and one of the compromises that had to be made was texture compression. There is no universally supported texture compression format on mobile, while the PC has several variations of DXT compression to choose from. It is possible to save textures with DXT compression and then uncompress them on mobile devices, but then you end up increasing your load times. OpenGL 4.4 finally has a universal compression format, but none of the hardware out today supports it. So there's not really a good solution here and this is an example of how mobile was holding back what we could do with the engine.
     
    I've just added controls in the texture editor to select the DXT compression level used. The default is DXT1. The Leadwerks 2 DXT1 compressor had some errors in it that would cause visual artifacts, and the format gets a bad rap overall, but where else can you get 80% of the quality in 12% of the video memory? Nowhere, that's where! I fixed the compression errors in my code, and I can't tell the difference between most uncompressed textures and the DXT1 version in Leadwerks 3.1. You can also select the compression format, save the texture, and see for yourself what you want to use. DXT3, DXT5, and DXT5n are also available.
     
    DXT5n is an interesting format, because it's just DXT5 with the red and alpha channels swapped. In DXT5 compression, the alpha channel has a higher resolution than the red channel, so this format can compress your normal maps while avoiding the visual artifacts texture compression usually causes with normal maps. If your texture is checked as a normal map and any compressed format is selected, the texture converter will automatically override that selection with DXT5n, in order to prevent mistakes. OpenGL 4 allows a texture swizzle setting, so I can make a shader automatically swap the red and alpha channels during a texture lookup. This is another reason I want to focus on modern OpenGL; a fallback renderer would not be able to read these textures correctly, which means normal maps could never use compression.
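    A sketch of that swizzle setup, using standard OpenGL texture swizzle parameters (core since 3.3; assumes a header/loader that exposes them):

    #include <GL/gl.h> // assumes a GL 3.3+ header/loader exposing the swizzle constants

    // Make ordinary texture lookups return the normal map with channels restored:
    // DXT5n stores red in the alpha channel, so swap them back at sample time.
    void SetupDXT5nSwizzle(GLuint texture)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_ALPHA);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);
    }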
     

     
    Finally, an unsharp mask filter has been added. Textures work by storing many downsampled copies of the texture all the way down to a 1x1 image. This is why textures ideally need to be power-of-two sized. However, that downsampling process can make images appear blurry at a distance. The new unsharp mask filter will perform a sharpen operation on each level of the mipmap chain. As you can see in the image below, this can retain details that would otherwise be lost. In the shots below I am zooming out of the texture so you can see the lower-resolution mipmaps.
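    Here is a minimal sketch of the unsharp mask idea applied to one mipmap level, assuming a simple grayscale image: sharpened = original + amount * (original - blurred).

    #include <algorithm>
    #include <vector>

    // 'level' holds a mip level's pixels and 'blurred' is a blurred copy of the same level.
    std::vector<float> UnsharpMask(const std::vector<float>& level, const std::vector<float>& blurred, float amount)
    {
        std::vector<float> result(level.size());
        for (size_t i = 0; i < level.size(); ++i)
        {
            float v = level[i] + amount * (level[i] - blurred[i]);
            result[i] = std::clamp(v, 0.0f, 1.0f); // keep values in displayable range
        }
        return result;
    }
    // Apply this to each level of the mipmap chain after downsampling.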
     
    Here is the image without sharpening:

     
    And with:

     
    These additions can be gotten first by opting into the beta branch on Steam. Once a stable build is reached, a full update will be performed on the standalone build and on the default branch on Steam.
  20. Josh
    Wow, two updates in two days! This one adds new features and bug fixes.
     
    Fixes
    http://www.leadwerks.com/werkspace/topic/10019-directional-lights-dont-cast-shadows/
    http://www.leadwerks.com/werkspace/topic/10030-values-in-appearance-tab-ignored/
    http://www.leadwerks.com/werkspace/topic/10020-values-in-entities-general-tab-stuck/
    http://www.leadwerks.com/werkspace/topic/8138-pointlight-viewrange-bug
    http://www.leadwerks.com/werkspace/topic/9919-wobbly-fbx-imports/
     
    New Features
    Global skybox setting! Select the scene root, and choose the skybox you want from the properties editor. You can select either a texture or a material. The new command Camera::SetSkybox will do this in code.
    Export VMF files! You can now use Leadwerks Editor to make maps for Left 4 Dead or other games, if you are so inclined. Texture mapping planes are perfectly preserved in the export:

     

     
    I hope to get a few more bugs fixed and get the Linux build running with Steam before the release of the Leadwerks Workshop.
  21. Josh
    My goal with Leadwerks 3 has always been to give the people what they need to make playable games. What they really need. Creative projects are interesting because it's easy to get derailed. You can lose interest in an idea, or you might avoid a critical problem you don't know how to solve, and instead spend time doing fun little things that make you feel like you're making progress when you're not. For example, let's say I am making a dungeon crawler. I have some basic gameplay but I can't figure out how to make my AI do what I want, so I decide I will spend my time painting flags in Photoshop for the 112 factions my game will have. Avoidance kills projects.
     
    It's easy to see this from the outside, but when it's your own work it's much harder to separate and see the progress from an objective perspective. It requires a sense of balance so you can manage the different aspects of a game in tandem and allow them to grow together, never letting one focus become too disproportionately large so that it consumes other critical aspects. Project management.
     
    From my perspective, I try to focus on the most critical problem that holds back games. One huge obstacle we overcame was AI. AI programming used to be an advanced topic only a few people could get to. Now it is easy and everyone can tinker around with it, because we have a standard system it's built on. Same thing goes with level interactions. Implementing simple button-door interactions used to be the stuff only expert programmers could do, but now we have a standard system for it. This opens up the doors for progress and makes my goal of turning players into makers more achievable.
     
    So at the point we're at now, what is needed most? More features (and bug fixes) are very important, but do these things move the ball forward? Do these things improve our ability to get playable games produced? Not really. I will continue to add new stuff, but we should recognize more features aren't really the bottleneck in game production.
     
    If you look through the recent screenshots on Steam, it's pretty obvious what's missing. You have all these cool screenshots of level design, which I love, but the scenes are lifeless and empty. The AI and flowgraph systems are innovative and make gameplay easy to implement. But something is missing, and I think that is animated characters.
     
    Animated characters are very difficult to produce from scratch. There are large collections of these you can buy online, but they still require hitboxes, scripting, sounds, scaling, and various setup procedures. Although Leadwerks makes the import process as easy as possible, there's some stuff you just can't automate.
     
    Another issue is there actually isn't a lot of usable stuff out there, even the paid items. You have problems with consistency from different artists, target polygon counts, animation quality, etc. It's easy to find stuff that looks good at a glance, but then you find out for some reason or another, it just won't work or is incomplete.
     
    So it seems clear that animated characters are a huge unsolved problem: The demands of producing these are far beyond most indies, and there isn't a real usable source of content they can purchase, when you get into the details of it. I even looked at Mixamo Fuse, and although they try to solve the problem, their characters look cartoonish and their animations are ridiculously expensive. I can get custom animations made for much less than they charge just to use an animation on one single model.
     
    I think providing art content is part of what Leadwerks should provide, though I traditionally have not done this in the past. The prospect of creating every kind of character you could need is daunting. Where to even begin? I think The Game Creators have done a decent job of this with their FPSC model packs, but those took years to build up. This needs to be pursued intelligently, with a plan.
     
    Several years ago I wrote a simple program called Character Shop. I had a set of animations produced, and the program allowed the user to weight meshes to a skeleton and export an animated character. I have no plans at this time to bring this back, but the same basic idea is still valid: build your animations once and use them for all characters:

     
    I've commissioned a set of animations to get started with. These will use a standard human skeleton and have animations that can be used and reused. This can be used by both my own artists working on official content and artists in the Leadwerks community who want to just weight their mesh on a standard skeleton and import the animations in the Leadwerks model editor. Once the animations library is built, the cost of creating additional characters is fairly low.
     
    So the first animated character pack is on the way. The other big need is first-person view weapons, and I will see if these can be handled in the same manner. Fortunately, I don't have to do anything to make these happen, so it's back to coding for me!
  22. Josh
    The Leadwerks community tends to be a pretty mature, technically astute one. This is great because it's easy to get good feedback and help, but we do tend to suffer from a tendency to focus on low-level details. This can cause problems when it delays projects, sometimes indefinitely. If all we do is keep reinventing the wheel, we'll never get to the automobile.
     
    Preparing the FPS weapons pack is giving me a chance to work on some high-level game behavior I don't get to do much. Sometimes we tend to ignore that stuff as trivial or secondary, but I am discovering the high-level behavior can be just as tricky as low-level features. Just because you have a 3D model doesn't mean that item is game-ready. You still have to add sounds, scripted behavior, and sometimes physics geometry. When you get to scripting, there are quite a few issues to deal with:
    Does my gun look and feel like it's really firing?
    How does the player acquire weapons?
    How can weapons be easily placed in a map?
    Does my sound match the model animation, so it feels like they are really reloading the gun?

     
    These issues will also make or break your player's experience, more so than all the low-level goodies you can add. The player won't care about indirect global caustic transparency if your gun handling sucks. The high-level gameplay issues that affect the player immediately are going to be much more relevant to them than intricate graphical details you have to squint to see, or a neural-net AI system, or a procedural dust-settling algorithm. I exaggerate, but you get the idea.
     
    I've talked about these ideas a bit before, and I am trying to move us in a direction more focused on gameplay. My metric for success is the number of games published in the Workshop, or elsewhere. Right now, I think ready-made game content is the most effective way to increase this. The game tournaments are also great (thanks Aggror!) because they force you to release something in a fixed amount of time.
     
    So my thoughts for the day are: be practical and make fun playable games. Good gameplay is not as easy as you think.
     

  23. Josh
    A new update is available on the beta branch which fixes two bugs.
     
    Undo not working correctly with copied objects:
    http://www.leadwerks.com/werkspace/topic/11888-undo-after-ctrl-duplicate-is-messed-up/
     
    Water plane movement and culling problem:
    http://www.leadwerks.com/werkspace/topic/11895-water-plane-doesnt-cover-all-terrain/
  24. Josh
    The creation of our complete tutorial series, written with the help of Aggror, has had a really good effect because it gives us a specification saying "this is how to use Leadwerks". Instead of having different approaches and opinions scattered around, we can all discuss the user experience and be on the same page. This has led to improved user feedback, as we are able to more easily identify any small hangups or inefficiencies in the workflow. I'd especially like to thank the user Malachi for his thorough reporting as he has gone through the lessons.
     
    An update is now available on the beta branch that includes various fixes and improvements.
     
    The file requester on Linux sometimes did not start in the correct directory; this behavior has been improved.
     
    Right-click zoom in the texture editor is fixed.
     
    The .map file extension will automatically be added in Linux when a file is saved, if it is not specified (although technically this was not a bug, just the way the GTK file requester works).
     
    Some missing files have been added to the tutorials project and the documentation has been brought in sync with the samples.
     
    A bug that could cause the model editor to crash when closing the window and then saving the model has been fixed.
     
    You can now load any image file as a terrain heightmap.
     
    These changes are available now on the beta branch on Steam.