Blog Entries posted by Josh

  1. Josh
    In Turbo (Leadwerks 5) all asset types have a list of asset loader objects for loading different file formats. There are a number of built-in loaders for different file formats, but you can add your own by deriving the AssetLoader class or creating a script-based loader. Another new feature is that any scripts in the "Scripts/Start" folder get run when your game starts. Put those together, and you can add support for a new model or texture file format just by dropping a script in your project.
    The following script can be used to add support for loading RAW image files as a model heightmap.
    function LoadModelRaw(stream, asset, flags)
        --Calculate and verify heightmap size - expects 2 bytes per terrain point, power-of-two sized
        local datasize = stream:GetSize()
        local pointsize = 2
        local points = datasize / pointsize
        if points * pointsize ~= datasize then return nil end
        local size = math.sqrt(points)
        if size * size ~= points then return nil end
        local levels = math.log(size) / math.log(2)
        if levels ~= math.floor(levels) then return nil end

        --Create model
        local modelbase = ModelBase(asset)
        modelbase.model = CreateModel(nil)
        local mesh = modelbase.model:AddMesh()

        --Build mesh from height data
        local height, v
        local textureScale = 4
        local terrainHeight = 100
        for x = 1, size do
            for y = 1, size do
                height = stream:ReadUShort() / 65536
                v = mesh:AddVertex(x, height * terrainHeight, y, 0,1,0, x/textureScale, y/textureScale, 0,0, 1,1,1,1)
                if x > 1 and y > 1 then
                    mesh:AddTriangle(v, v - size - 1, v - size)
                    mesh:AddTriangle(v, v - 1, v - size - 1)
                end
            end
        end

        --Finalize the mesh
        mesh:UpdateBounds()
        mesh:UpdateNormals()
        mesh:UpdateTangents()
        mesh:Lock()

        --Finalize the model
        modelbase.model:UpdateBounds()
        modelbase.model:SetShape(CreateShape(mesh))
        return true
    end

    AddModelLoader(LoadModelRaw)

    Loading a heightmap is just like loading any other model file:

    auto model = LoadModel(world, "Models/Terrain/island.r16");

    This will provide a temporary solution for terrain until the full system is finished.
  2. Josh
    Having completed a hard-coded rendering pipeline for one single shader, I am now working to create a more flexible system that can handle multiple material and shader definitions. If there's one way I can describe Vulkan, it's "take every single possible OpenGL setting, put it into a structure, and create an immutable cached object based on those settings that you can then use and reuse". This design is pretty rigid, but it's one of the reasons Vulkan is giving us an 80% performance increase over OpenGL. Something as simple as disabling backface culling requires recreation of the entire graphics pipeline, and I think this option is going away. The only thing we use it for is the underside of tree branches and fronds, so that light appears to shine through them, but that is not really correct lighting. If you shine a flashlight on the underside of the palm frond it won't brighten the surface if we are just showing the result of the backface lighting.

    A more correct way to do this would be to calculate the lighting for the surface normal, and for the reverse vector, and then add the results together for the final color. In order to give the geometry faces for both directions, a plugin could be added that adds reverse triangles for all the faces of a selected part of the model in the model editor. At first the design of Vulkan feels restrictive, but I also appreciate the fact that it has a design goal other than "let's just do what feels good".
    Using indirect drawing in Vulkan, we can create batches of batches, sorted by shader. This feature is also available in OpenGL, and in fact is used in our vegetation rendering system. Of course the code for all this is quite complex. Draw commands, instance IDs, material IDs, entity 4x4 matrices, and material data all have to be uploaded to the GPU in memory buffers, some of which are more or less static, some of which are updated each frame, and some for each new visibility set. It is complicated stuff, but after some time I was able to get it working. The screenshot below shows a scene with five unique objects being drawn in one single draw call, accessing two different materials with different diffuse colors. That means an entire complex scene like The Zone will be rendered in one or just a few passes, with the GPU treating all geometry as if it was a single collapsed object, even as different objects are hidden and shown. Everyone knows that instanced rendering is faster than unique objects, but at some point the number of batches can get high enough to be a bottleneck. Indirect rendering batches the batches to eliminate this slowdown.

    This is one of the features that will help our new renderer run an order of magnitude faster, for high-performance VR and regular 3D games.
  3. Josh
    I finally got a textured surface rendering in Vulkan so we now have officially surpassed StarFox (SNES) graphics:

    Although StarFox did have distance fog.

    Vulkan uses a sort of "baked" graphics pipeline. Each surface you want to render uses an object you have to create in code that contains all material, texture, shader, and other settings. There is no concept of "just change this one setting" like in OpenGL. Consequently, the new renderer may be a bit more rigid than what Leadwerks 4 uses, in the interest of speed. For example, the idea of 2D drawing commands you call each frame is absolutely a no-go. (This was likely going away anyway, due to the multithreaded design.) A better approach for that would be to use persistent 2D primitive objects you create and destroy. I won't lose any sleep over this because our overarching design goal is performance.
    Right now I have everything hard-coded and am using only one shader and one texture, in a single graphics pipeline object. Next I need to make this more dynamic so that a new graphics pipeline can be created whenever a new combination of settings is needed. A graphics pipeline object corresponds pretty closely to a material. I am leaning towards storing a lot of settings we presently store in texture files in material files instead. This also resolves the problem of storing these extra settings in a DDS file. Textures become more of a dumb image format while material settings are used to control them. Vulkan is a "closer to the metal" API and that may pull the engine in that direction a bit. That's not bad.
    I like using JSON data for file formats, so the new material files might look something like this:
    {
        "material":
        {
            "color": "1.0, 1.0, 1.0, 1.0",
            "albedoMap":
            {
                "file": "brick01_albedo.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            },
            "normalMap":
            {
                "file": "brick01_normal.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            },
            "metalRoughnessMap":
            {
                "file": "brick01_metalRoughness.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            },
            "emissiveMap":
            {
                "file": "brick01_emissive.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            }
        }
    }

    Of course getting this to work in Vulkan required another mountain of code, but I am starting to get the hang of it.
  4. Josh
    I was going to write about my thoughts on Vulkan, about what I like and don't like, what could be improved, and what ramifications this has for developers and the industry. But it doesn't matter what I think. This is the way things are going, and I have no say in that. I can only respond to these big industry-wide changes and make it work to my advantage. Overall, Vulkan does help us, in both a technical and business sense. That's as much as I feel like explaining.

    Beta subscribers can try the demo out here:
    This is the code it takes to add a depth buffer to the swap chain:
    //----------------------------------------------------------------
    // Depth attachment
    //----------------------------------------------------------------
    auto depthformat = VK_FORMAT_D24_UNORM_S8_UINT;

    VkImage depthimage = nullptr;
    VkImageCreateInfo image_info = {};
    image_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    image_info.pNext = NULL;
    image_info.imageType = VK_IMAGE_TYPE_2D;
    image_info.format = depthformat;
    image_info.extent.width = chaininfo.imageExtent.width;
    image_info.extent.height = chaininfo.imageExtent.height;
    image_info.extent.depth = 1;
    image_info.mipLevels = 1;
    image_info.arrayLayers = 1;
    image_info.samples = VK_SAMPLE_COUNT_1_BIT;
    image_info.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    image_info.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
    image_info.queueFamilyIndexCount = 0;
    image_info.pQueueFamilyIndices = NULL;
    image_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    image_info.flags = 0;
    vkCreateImage(device->device, &image_info, nullptr, &depthimage);

    VkMemoryRequirements memRequirements;
    vkGetImageMemoryRequirements(device->device, depthimage, &memRequirements);

    VmaAllocation allocation = {};
    VmaAllocationInfo allocinfo = {};
    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
    VkAssert(vmaAllocateMemory(GameEngine::Get()->renderingthreadmanager->instance->allocator, &memRequirements, &allocCreateInfo, &allocation, &allocinfo));
    VkAssert(vkBindImageMemory(device->device, depthimage, allocinfo.deviceMemory, allocinfo.offset));

    VkImageView depthImageView;
    VkImageViewCreateInfo view_info = {};
    view_info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    view_info.pNext = NULL;
    view_info.image = depthimage;
    view_info.format = depthformat;
    view_info.components.r = VK_COMPONENT_SWIZZLE_R;
    view_info.components.g = VK_COMPONENT_SWIZZLE_G;
    view_info.components.b = VK_COMPONENT_SWIZZLE_B;
    view_info.components.a = VK_COMPONENT_SWIZZLE_A;
    view_info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
    view_info.subresourceRange.baseMipLevel = 0;
    view_info.subresourceRange.levelCount = 1;
    view_info.subresourceRange.baseArrayLayer = 0;
    view_info.subresourceRange.layerCount = 1;
    view_info.viewType = VK_IMAGE_VIEW_TYPE_2D;
    view_info.flags = 0;
    VkAssert(vkCreateImageView(device->device, &view_info, NULL, &depthImageView));

    VkAttachmentDescription depthAttachment = {};
    depthAttachment.format = depthformat;
    depthAttachment.samples = VK_SAMPLE_COUNT_1_BIT;
    depthAttachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
    depthAttachment.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    depthAttachment.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    depthAttachment.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    depthAttachment.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    depthAttachment.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

    VkAttachmentReference depthAttachmentRef = {};
    depthAttachmentRef.attachment = 1;
    depthAttachmentRef.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

    VkPipelineDepthStencilStateCreateInfo depthStencil = {};
    depthStencil.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
    depthStencil.depthTestEnable = VK_TRUE;
    depthStencil.depthWriteEnable = VK_TRUE;
    depthStencil.depthCompareOp = VK_COMPARE_OP_LESS;
    depthStencil.depthBoundsTestEnable = VK_FALSE;
    depthStencil.minDepthBounds = 0.0f;
    depthStencil.maxDepthBounds = 1.0f;
    depthStencil.stencilTestEnable = VK_FALSE;
    depthStencil.front = {};
    depthStencil.back = {};

    I was hoping I would put a month into it and be up to speed with where we were with OpenGL, but it is much more complicated than that. Using Vulkan is going to be tough but we will get through it and I think the benefits will be worthwhile:
    • Vulkan makes our new renderer 80% faster.
    • Better compatibility (Mac, Intel on Linux).
    • There's a lot of demand for Vulkan products thanks to Khronos and Valve's promotion.
  5. Josh
    I've now got the Vulkan renderer drawing multiple different models in one single pass. This is done by merging all mesh geometry into one single vertex and index buffer and using indirect drawing. I implemented this originally in OpenGL and was able to translate the technique over to Vulkan. This can allow an entire scene to be drawn in just one or a few draw calls. This will make a tremendous improvement in performance in complex scenes like The Zone. In that scene in Leadwerks the slow step is the rendering routine on the CPU churning through thousands of OpenGL commands, and this design effectively eliminates that entire bottleneck.

    There is no depth buffer in use in the above image, so some triangles appear on top of others that they are behind.
    Vulkan provides a lot of control when transferring memory into VRAM, and as a result we saw an 80% performance improvement over OpenGL in our first performance comparison. I have set up a system that uses staging buffers to transfer bits of memory from the CPU into shared memory buffers on the GPU. Another interesting capability is the ability to transfer multiple chunks of data between buffers in just one command.
    However, that control comes at a cost of complexity. At the moment, the above code works fine on Intel graphics but crashes on my discrete Nvidia card. This makes sense because of the way Vulkan handles memory. You have to explicitly synchronize memory yourself using a pipeline barrier. Since Intel graphics just uses system memory I don't think it will have any problems with memory synchronization like a discrete card will.
    That will be the next step, and it is really a complex topic, but my usage of it will be limited, so I think in the end my own code will turn out to be pretty simple. I expect Vulkan 2.0 will probably introduce a lot of simplified paths that will become the default, because this stuff is really just too hard for both beginners and experts. There’s no reason for memory to not be synced automatically and you’re just playing with fire otherwise.
  6. Josh
    Using my box test of over 100,000 boxes, I can compare performance in the new engine using OpenGL and Vulkan side by side. The results are astounding.
    Our new engine uses extensive multithreading to perform culling and rendering on separate threads, bringing down the time the GPU sits around waiting for the CPU to nearly zero.
    Hardware: Nvidia GeForce GTX 1070 (notebook)
    OpenGL: ~380 FPS

    Vulkan: 700+ FPS. FRAPS does not work with Vulkan, so the only FPS counter I have is the Steam one, and the text is very small.

    Vulkan clearly alleviates the data transfer bottleneck the OpenGL version experiences. I am not using a depth buffer in the Vulkan renderer yet, and I expect that will further increase the speed. I'm very happy with these results and I think exclusively relying on Vulkan in the future, together with our new engine designed for modern graphics hardware, will give us great outcomes. 
     
  7. Josh
    The Vulkan graphics API is unbelievably complex. To create a render context, you must create a series of images for the front and back buffers (you can create three for triple-buffering). This is called a swap chain. Now, Vulkan operates on the principle of command buffers, which are lists of commands that get sent to the GPU. Guess what? The target image is part of the command buffer! So for each image in your swap chain, you need to maintain a separate command buffer. If anything changes in your program, like the camera clear-screen color, you have to recreate the command buffers...all of them! But some of them will still be in use at the time your frame begins, so you need to store a flag that says "recreate this command buffer when it is time to start rendering with this image / command buffer".
    The whole thing is really bad, but admitting that there is any practical limit to how complex an API should be opens a developer up to ridicule. I make complex technologies easy to use for a living, so I'm just calling it out: this is garbage design. Vulkan is actually good for me because it means fewer people can figure out how to make a game engine, but it's really ridiculous. Khronos has stated that they expect semi-standard open-source code to arise to address these complexities, but I don't see that happening. I'm not going to touch something like AMD's V-EZ because it's just another layer of code that might stop being supported at any time. As a result of this design, Vulkan is going to continue to struggle to gain adoption, and we are now entering an era where the platform holders are in a fight with the application developers over who is responsible for writing graphics drivers.
    I really like some aspects of Vulkan. SPIR-V shaders are great, and I am very glad to be rid of OpenGL's implicit global states, FBO creation, strange resource sharing, and so on. But nobody needs detailed access to the swap chain. Nobody needs to manage their own synchronization. That's what we have graphics drivers for.
    Anyways, here is my test application. The screen will change color when you press the space key, which involves re-creation of the command buffers. The Vulkan stuff is 1300 lines of code.
    vktest3.zip
    The good thing is that although the initial setup is prohibitive, this stuff tends to get compartmentalized away as I add more capabilities, so it gets easier as time goes on. This is very difficult stuff but we will be better off once I get through this.
  8. Josh
    One of the best points of Vulkan is how shaders are loaded from precompiled SPIR-V files. This means GLSL shaders either work or they don't. Unlike OpenGL, there is no different outcome on Intel, AMD, or Nvidia hardware. SPIR-V files can be compiled using a couple of different utilities. I favor LunarG's compiler because it supports #include directives.
    Shader.vert:
    #version 450
    #extension GL_ARB_separate_shader_objects : enable

    #include "VertexLayout.glsl"

    layout(push_constant) uniform pushBlock
    {
        vec4 materialcolor;
    } pushConstantsBlock;

    layout(location = 0) out vec3 fragColor;

    void main()
    {
        gl_Position = vec4(inPosition.xy, 0.0, 1.0);
        fragColor = inColor.rgb * pushConstantsBlock.materialcolor.rgb;
    }

    VertexLayout.glsl:
    layout(location = 0) in vec3 inPosition;
    layout(location = 1) in vec3 inNormal;
    layout(location = 2) in vec2 inTexCoords0;
    layout(location = 3) in vec2 inTexCoords1;
    layout(location = 4) in vec3 inTangent;
    layout(location = 5) in vec4 inColor;
    layout(location = 6) in vec4 inBoneWeights;
    layout(location = 7) in uvec4 inBoneIndices;

    If the shader compiles successfully, then you don't have to worry about whether it works on different manufacturers' hardware. It just works. So if someone writes a new post-processing effect they don't need to test on other hardware or worry about people asking for help when it doesn't work. Because it always works the same.
    You can try it yourself with these files:
    Shaders.zip
  9. Josh
    I am surprised at how quickly Vulkan development is coming together. The API is ridiculously verbose, but at the same time it eliminates a lot of hidden states and implicit behavior that made OpenGL difficult to work with. I have vertex buffers working now. Vertices in the new engine will always use this layout:
    struct VkVertex
    {
        float position[3];
        float normal[3];
        float texcoords0[2];
        float texcoords1[2];
        float tangent[3];
        unsigned char color[4];
        unsigned char boneweights[4];
        unsigned char boneindices[4];
    };

    Note there are no longer vertex binormals, as these are calculated in the vertex shader, with the assumption that the texture coordinates have no shearing. There are two sets of UV coordinates available to use. Up to 256 bones per mesh are supported.
    I am creating a few internal classes for Vulkan, out of necessity, and the structure of the new renderer is forming. It's very interesting stuff:
    class VkMesh
    {
    public:
        Vk* environment;
        VkBuffer vertexBuffer;
        VmaAllocation allocation;
        VkBuffer indexBuffer;
        VmaAllocation indexallocation;

        VkMesh();
        ~VkMesh();
    };

    I have hit the memory management part of Vulkan. Something that used to be neatly done for you is now pushed onto the developer for no apparent reason. I think this is really pointless because we're all going to end up using a bunch of open-source helper libraries anyways. It's like they are partially open-sourcing the driver.

    You can't just allocate memory buffers as you wish. From vulkan-tutorial.com:
    Nvidia explains it visually. It is better to allocate a smaller number of memory blocks and buffers and split them up:

    I added the Vulkan Memory Allocator library and it works. I honestly have no idea what it is doing, but I am able to delete the Vulkan instance with no errors so that's good.
    Shared contexts are also working so we can have multiple Windows, just like in the OpenGL renderer:

  10. Josh
    When a window in Vulkan resizes you have to manually delete about a dozen objects and then recreate them with the new size. It's unbelievably complicated. They've pushed all the driver complexity onto the application, in an effort to simplify the job of writing drivers. I can see the advantage to this, because OpenGL drivers in the past were always inconsistent, but it is still shocking how many little details they expose in Vulkan. Just resizing a window and swapping the screen buffer involves fences, semaphores, and all kinds of issues.
    So it is with great pride that I present my RESIZABLE VULKAN WINDOW OF AWESOME POWER!
    vktest2.zip
    Now I will drink CHOSEN ONE ENERGY DRINK!

    In spite of its issues, I kind of like Vulkan, and I don't think it will take long to get the renderer running completely on this new API. I guess Nvidia's raytracing extensions can be used with VK pretty easily too, so that will be interesting in the future.
    The next step is to get multiple shared contexts working.
  11. Josh
    Two days and 823 lines of code later, I present to you the Vulkan triangle of awesomeness, running in our engine:

    Here are my thoughts on Vulkan:
    It's ridiculously verbose. You have to specify every little detail of the rasterizer, there's a million classes to create, and every little variable has to be exactly right. There's really no reason for this because 90% of the code is just something you copy and paste.
    Shaders can use GLSL, which seems very weird, but it makes things easier for us. The GLSL validate tool is used to precompile shaders into SPIR-V code, which works across all hardware, which is very nice. This means that shader code can finally be made closed-source, if you want to. Shaders are compiled into binary .spv files.
    After walking through the steps of setting rendering up in Vulkan, it is more clear to me what is going on under the hood inside the graphics drivers for OpenGL.
    I tried running the app on both Nvidia and Intel graphics, and got the same exact results each time. Vulkan should provide more consistent results across different vendor hardware, and might not require cross-hardware testing. That's a big plus.
    Khronos could have easily supplied a middle layer of C++ code to standardize this. The fact you have to look up all this stuff on a bunch of github repositories and third-party tutorials is awful. It pretty much guarantees that everyone who uses Vulkan is going to do it through a commercial game engine like ours.
    You can download my test application here:
    vktest.zip
    The application will display a lot of errors when you close the window because resources are not being freed correctly yet. The program will probably crash if you resize or maximize the window because this has not been accounted for yet. I think it is still a big challenge to build a full Vulkan renderer that does everything our OpenGL renderer does, but the fact that we can reuse all our GLSL code helps a lot. Stay tuned!
  12. Josh
    The new game engine needs to roll out with some top-notch examples showing off what it can do. Here's what I want:
    • First-person shooter
    • Offroad racing game
    • Space shoot-em-up side-scroller
    • Side-scroller platformer similar to the Contra PlayStation game

    Now what I can use your help with is finding good example games on YouTube or Steam that I can start designing these samples around. Post your ideas below!
  13. Josh
    My last NASA project is complete. There's a physics bug in Leadwerks 4.6 that will get resolved this weekend. Starting Monday I am going to focus on the new engine again and move us forward so we can release in 2020. I am really looking forward to getting back in the game.
  14. Josh
    The Model class is being slightly restructured to add support for built-in LOD without the need for separate entities. Previously, a list of surfaces was included in the Model class itself:
    class Model
    {
        std::vector<shared_ptr<Surface> > surfaces;
    };

    This is being replaced with a new LOD class, which allows multiple lists of surfaces containing less detail to be stored in the same model:
    class LOD
    {
        std::vector<shared_ptr<Surface> > surfaces;
    };

    class Model
    {
        std::vector<LOD> lods;
    };

    To iterate through all surfaces in the first LOD, you do this:
    for (int i = 0; i < model->lods[0].surfaces.size(); ++i)
    {
        auto surf = model->lods[0].surfaces[i];
    }

    To iterate through all LODs and all surfaces, you do this:
    for (int n = 0; n < model->lods.size(); ++n)
    {
        for (int i = 0; i < model->lods[n].surfaces.size(); ++i)
        {
            auto surf = model->lods[n].surfaces[i];
        }
    }

    In the future editor, I plan to add a feature to automatically reduce the detail of a mesh, adding the simplified mesh as an additional LOD level so you can automatically generate these.
    How this will work with our super-efficient batching system, I am not sure of yet.
  15. Josh
    This is something I typed up for some colleagues and I thought it might be useful info for C++ programmers.
    To create an object:
    shared_ptr<TypeID> type = make_shared<TypeID>(constructor args…)

    This is pretty verbose, so I always do this:
    auto type = make_shared<TypeID>(constructor args…)

    When all references to the shared pointer are gone, the object is instantly deleted. There’s no garbage collection pauses, and deletion is always instant:
    auto thing = make_shared<Thing>();
    auto second_ref = thing;
    thing = NULL;
    second_ref = NULL; //poof!

    Shared pointers are fast and thread-safe. (Don’t ask me how.)
    To get a shared pointer within an object’s method, you need to derive the class from “enable_shared_from_this<SharedObject>”. (You can inherit a class from multiple types, remember):
    class SharedObject : public enable_shared_from_this<SharedObject> And you can implement a Self() method like so, if you want:
    shared_ptr<SharedObject> SharedObject::Self()
    {
        return shared_from_this();
    }

    Casting a type is done like this:
    auto bird = dynamic_pointer_cast<Bird>(animal);

    Dynamic pointer casts will return NULL if the animal is not a bird. Static pointer casts don’t have any checks and are a little faster I guess, but there’s no reason to ever use them.
    You cannot call shared_from_this() in the constructor, because the shared pointer does not exist yet, and you cannot call it in the destructor, because the shared pointer is already gone!
    Weak pointers can be used to store a value, but will not prevent the object from being deleted:
    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //creates a new shared pointer to “thing”

    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    thing = NULL;
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!

    If you want to set a weak pointer’s value to NULL without the object going out of scope, just call reset():
    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    thingwptr.reset();
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!

    Because no garbage collection is used, circular references can occur, but they are rare:
    auto son = make_shared<Child>();
    auto daughter = make_shared<Child>();
    son->sister = daughter;
    daughter->brother = son;
    son = NULL;
    daughter = NULL; //nothing is deleted!

    The problem above can be solved by making the sister and brother members weak pointers instead of shared pointers, thus removing the circular references.
    That’s all you need to know!
  16. Josh
    An update is available on the beta branch on Steam with a few bug fixes.
    I'm going to release 4.6 with the current features because a lot of bugs have been fixed since 4.5 and we're overdue for an official release. 4.7 will add a new vehicle system, character crouching physics, and some other things, and will be out later this year.
  17. Josh
    I have not gone in several years because everything we were doing revolved around Steam, and it just didn't seem very important. But this year I had some business to attend to so I spent the last three days in San Francisco.
    I still have a lot of friends in the game industry, and the reaction to my plans for the new engine was very positive. A few years ago people would have groaned at the idea of another engine, but it seems they are now bored with technology and very open to something new. The angle we are taking plays well to the audience. Basically all my predictions about how to sell a new game engine in 2020 were confirmed.
    We do need to have tip-top examples to show off next year, and that starts with good artwork. At this point I am probably only planning to show a first-person shooter, an offroad racing game, and then whatever the NASA team comes up with. And in order to get that done in time, I need to start planning, now!
    I feel very focused on what needs to happen, what is important, and what is not. My idea of what I want is so clear. It doesn't seem that hard to complete.
  18. Josh
    A new update is available on the beta branch on Steam. This adds numerous bug fixes. The Linux build of the editor is compiled with Ubuntu 16.04 and the engine libraries and executables are compiled with Ubuntu 18.04. Linux users, please let me know how this works for you.
  19. Josh
    A new update is available for Leadwerks Game Engine 4.6 beta. This fixes many bugs.
    • Slow kinematic joint rotation
    • Heightmap import flipped
    • Map switching crashes in VR
    • Project Manager cancel button bug
    • The Zone DLC map failing to load
    • Ball joints not working

    Take a look at the bug reports forum to see all the recent fixes in the engine and documentation.
    This is a full update, with new builds for Windows and Linux, for C++ and Lua games. You can opt into the beta branch on Steam to get the update.
  20. Josh
    I realized there are two main ways a plugin is going to be written, either as a Lua script or as a DLL. So I started experimenting with making a JSON file that holds the plugin info and tells the engine where to load it from:
    {
        "plugin":
        {
            "title": "Game Analytics",
            "description": "Add analytics to your game. Visit www.gameanalytics.com to create your free account.",
            "author": "© Leadwerks Software. All Rights Reserved.",
            "url": "https://www.turboengine.com",
            "library": "GameAnalytics.dll"
        }
    }

    {
        "plugin":
        {
            "title": "Flip Normals",
            "description": "Reverse faces of model in Model Editor.",
            "author": "© Leadwerks Software. All Rights Reserved.",
            "url": "https://www.turboengine.com",
            "scriptfile": "FlipNormals.lua"
        }
    }

    I like this because the plugin info can be loaded and displayed in the editor without actually loading the plugin.
    I also wanted to try using a JSON file to control script properties. For example, this file "SlidingDoor.json" goes in the same folder as the script and contains all the properties the engine will create when the script is attached to an entity:
    {
        "script":
        {
            "properties":
            {
                "enabled":
                {
                    "label": "Enabled",
                    "type": "boolean",
                    "value": true,
                    "description": "If disabled the door will not move until it is enabled."
                },
                "distance":
                {
                    "label": "Distance",
                    "type": "float",
                    "value": [1,0,0],
                    "description": "Distance the door should move, in global space."
                },
                "opensound":
                {
                    "label": "Open Sound",
                    "type": "sound",
                    "value": null,
                    "description": "Sound to play when door opens."
                },
                "closedelay":
                {
                    "label": "Close Delay",
                    "type": "integer",
                    "value": 2000,
                    "minvalue": 0,
                    "description": "The number of milliseconds a door will stay open before closing again. Set this to 0 to leave open."
                }
            }
        }
    }

    I like that it is absolutely explicit, and it allows for more information than comments allow in Leadwerks 4. There are actually official tools for validating the data, and the JSON data types map very closely to Lua. However, it is more typing than just quickly writing a line of Lua code.
    While we're at it, let's take a look at what a JSON-based scene file format might look like:
    { "scene": { "entities": [ { "type": "Model", "name": "main door", "id": "1c631222-0ec1-11e9-ab14-d663bd873d93", "path": "Models/Doors/door01.gltf", "position": [0,0,0], "euler": [90,0,0], "rotation": [0,0,0,1], "scale": [1,1,1], "matrix": [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]], "mass": 10, "color": [1,1,1,1], "static": false, "scripts": [ { "path": "Scripts/Objects/Doors/SlidingDoor.lua", "properties": { "distance": [1,0,0], "movespeed": 5 } }, { "path": "Scripts/Objects/Effects/Pulse.lua" } ] } ] } }  
  21. Josh
    Previously I talked about array textures acting as "bindless" textures, but there is an actual OpenGL extension that allows a shader to access any texture without the stupid texture binding / slot convention, which guarantees OpenGL 4.0 shaders as few as 16 textures. Implementation was surprisingly easy, although Mac hardware apparently does not support this extension. When combined with the multi-draw commands in OpenGL 4.3 and some other tricks, it is possible to render multiple sets of objects in a single draw call. Below you can see six instances of three different objects, with different materials applied to them, all rendered in one single command, for maximum performance.
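The fragment shader side of this can be sketched roughly as follows. This is not the engine's actual shader; the buffer layout and the materialID varying are made up for illustration, but the extension and the sampler-in-buffer usage are what GL_ARB_bindless_texture actually permits:

```glsl
#version 450
#extension GL_ARB_bindless_texture : require

// Every texture handle the frame needs, stored in one buffer. Handles come
// from glGetTextureHandleARB + glMakeTextureHandleResidentARB on the CPU side.
layout(std430, binding = 0) readonly buffer MaterialTextures
{
    sampler2D textures[];
};

in vec2 texcoords;
flat in int materialID; // selected per-instance from the draw data

out vec4 fragcolor;

void main()
{
    // No glBindTexture, no slot limit: just index into the handle buffer.
    fragcolor = texture(textures[materialID], texcoords);
}
```

Combined with glMultiDrawElementsIndirect, each sub-draw can supply its own materialID, which is how different materials end up in a single command.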
    This is basically the final goal of the whole crazy architecture I've been working on for over a year.
    I will test this a bit more, release an update for the new engine beta, and then at that point I think it will be time to move everything over to Vulkan / Metal.

  22. Josh
    I'm in DC this week helping the folks at NASA wrap up some projects. I'm going to move back to a supportive role and focus on development of Leadwerks 4.6 and the new engine, and I am helping them to hire some programmers to replace me. We found some very talented people who I am confident will do a fantastic job, and I can't wait to see what they create using Leadwerks Game Engine.
    I helped a team using Leadwerks at NASA get through some big milestones and expand. I hope that someday soon we will be able to tell the story of what happened, because it really has been an amazing experience with some really awesome people. I think it would make a nice movie.
    I will start working hard on Leadwerks 4.6 next Monday, and during March I will finish up my last projects so I can devote all my time to Leadwerks. I'm actually really looking forward to long days of doing nothing but engine coding. We're also going to hold a game tournament this summer, with prizes. It was worth spending time to help NASA, because we picked up a lot of new business customers and it helped focus development of the new engine in a direction everyone seems very happy about, but I need to get back to doing what I love best, which is building great software for you.
  23. Josh
    Leadwerks 5 / Turbo makes extensive use of multithreading. Consequently, the API is stateless and more explicit. There is no such thing as a "current" world or context. Instead, you explicitly pass these variables to the appropriate commands.
    One interesting aspect of this design is that code like the example below works perfectly fine. See if you can work through it and understand what's going on:
    int main(int argc, const char *argv[])
    {
        //Create a model ;)
        auto box = CreateBox(nullptr);

        //Create the world
        auto world = CreateWorld();

        //Create a camera
        auto camera = CreateCamera(world);
        camera->Move(0,0,-5);

        //Create an instance of the model in the new world
        auto model = box->Instance(world);

        //Create a window
        auto window = CreateWindow();

        //Create a rendering context
        auto context = CreateContext(window);

        while (not window->Closed())
        {
            if (window->KeyDown(KEY_ESCAPE)) window->Close();
            world->Update();
            world->Render(context);
        }
        return 0;
    }
 