Blog Entries posted by Josh

  1. Josh
    Here's a look at the new vehicle system that is being developed. The API has been simplified so you simply create a vehicle, add as many tires as you want, and start using it. The AddTire() command now accepts a model and the dimensions of the tire are calculated from the bounds of the mesh.
    class Vehicle
    {
        int AddTire(Model* model, bool steering=false, const float spring=500.0f, const float springrelaxation = 0.1f, const float springdamping = 10.0f);
        void SetGas(const float accel);
        void SetBrakes(const float brakes);
        void SetSteering(const float angle);
        static Vehicle* Create(Entity* entity);
    };
    A script will be provided which turns any vehicle model into a ready-to-use playable vehicle. The script searches for limb names that start with "wheel" or "tire" and turns those limbs into wheels. If they are positioned towards the front of the car, the script assumes the wheels can be turned with steering. You can also reverse the orientation of the vehicle if the model was designed backwards.
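    For illustration, here is a rough usage sketch based on the class declaration above. The entity and tire model variables are hypothetical, and the exact values are just placeholders:
    // Build a vehicle from a chassis entity and four tire models (all loaded elsewhere)
    Vehicle* vehicle = Vehicle::Create(carbody);
    vehicle->AddTire(tirefrontleft, true);   // front tires can steer
    vehicle->AddTire(tirefrontright, true);
    vehicle->AddTire(tirerearleft);
    vehicle->AddTire(tirerearright);

    // In the game loop
    vehicle->SetGas(0.5);
    vehicle->SetSteering(15.0);
    vehicle->SetBrakes(0.0);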
    There is one big issue to solve still. When a vehicle drives on flat terrain the tires catch on the edges of the terrain triangles and the whole vehicle bounces around badly. I'm looking into how this can be solved with custom collision overrides. I do not know how long this will take, so it might or might not be ready by Christmas.
  2. Josh
    I borrowed Shadmar's terrain splatting shader. This is the result after fiddling around with it for a few minutes in Leadwerks 3. (No normal mapping yet, but that's easy to add.)
     
    Physics can be added by generating a physics shape from the terrain model. It doesn't allow heightmap editing in the editor, but I think this will serve as a good solution until our super uber mega streaming terrain system is built.
     
    Thanks to Shadmar for the assets.
     
    Klepto is also looking into a few OpenGL ES emulators to see if there is a way to test mobile shaders on a PC.
  3. Josh
    Let's start with some code for making instances and unique copies of a material:

    Material* mat1 = new Material;
    mat1->SetColor(0,0,1,1);
    Material* mat2 = mat1->Copy(true);
    Material* mat3 = mat1->Copy(false);
    mat1->SetColor(1,0,0,1);
    mat1 and mat2 will be red. mat3 will be blue. Shaders, textures, and entities work the same way.
     
    Drawing commands are in. I like how OpenGL3 gets rid of all the built-in matrix stuff and just lets you deal with pure matrix multiplication. It would probably be pretty difficult for a beginner to get into, but it's much cleaner, and it forced me to learn a little more about matrices. I added a mat4 orthogonal projection class function, if you're interested in that sort of thing.
     
    I don't have DrawImage(), SetBlend(), SetColor(), etc. commands in, because the material system can handle all of that, and it's much more powerful. Here's some sample code:

    Material* mat = LoadMaterial("myimage.mat");
    mat->SetColor(1,0,1,0.5);
    mat->SetBlend(BLEND_ALPHA);
    mat->Enable();
    DrawRect(2,2,10,20);
    mat->Disable();
    You can also draw polygons onscreen if you want. Vertex positions will correspond to screen coordinates:

    Material* mat = LoadMaterial("myimage.mat");
    Surface* surf = CreateSurface();
    surf->AddVertex(0,0,0);
    surf->AddVertex(0,1,0);
    surf->AddVertex(0,1,1);
    surf->AddTriangle(0,1,2);
    mat->Enable();
    surf->Draw();
    mat->Disable();
    There are two types of multisampling in OpenGL. The older technique uses the graphics window pixel format. The newer technique involves an FBO with a multisample format. I am going to disable the first technique, because normally rendering is performed on an FBO (a Leadwerks "buffer") and you don't want multisampling to mess up your 2D drawing that is typically done after 3D rendering. It also prevents the user from having to recreate a graphics window to toggle antialiasing on and off. So to sum that all up, antialiasing should be as simple as just defining a multisample format when you create a buffer, which can be 1,2,4,8, or 16.
     
    I also hired an outside developer to research fluid simulations for ocean water. Here is an early prototype. There's still some improvement to make, but the technique is promising:


     
    On to the editor. Here's the prototype. You can see the asset tree on the right, which functions pretty much like Windows Explorer:

    You can enter a word in the search box, press enter, and the results are instantly filtered. I was surprised at the speed, even with thousands of files:

    You can right-click on a source art asset and convert it to a final game-ready file format. Here we have a png file you can convert to DDS:

    And the familiar DDS convert dialog will appear:

    If you choose the "Reconvert" option, the converter will be run with the last options used to convert that file, without pulling up the options dialog. These settings are stored in a file with the image file, and will be remembered across editor sessions.
     
    One of the coolest features is that the editor automatically detects file changes and will ask you to reconvert a file. Or if you prefer, you can set the editor to always perform conversions automatically.

    Overall, I am surprised at the speed with which Leadwerks Engine 3 is taking shape. It's still hard to say exactly when it will be ready, because little details can pop up and take more time, but it's going well.
  4. Josh
    Since you guys are my boss, in a way, I wanted to report to you what I spent the last few days doing.
     
    Early on in the development of Leadwerks 3, I had to figure out a way to handle the management of assets that are shared across many objects. Materials, textures, shaders, and a few other things can be used by many different objects, but they consume resources, so it's important for the engine to clean them up when they are no longer needed.
     
    I decided to implement an "Asset" and "AssetReference" class. The AssetReference object contains the actual data, and the Asset object is just a handle to the AssetReference. ("AssetBase" probably would have been a more appropriate name). Each AssetReference can have multiple instances (Assets). When a new Asset is created, the AssetReference's instance count is incremented. When an Asset is deleted, its AssetReference's instance counter is decremented. When the AssetReference instance counter reaches zero, the AssetReference object itself is deleted.
     
    Each different kind of Asset had a class that extended each of these classes. For example, there were the Texture and TextureReference classes. The real OpenGL texture handle was stored in the TextureReference class.
     
    Normal usage would involve deleting extra instances of the object, as follows:

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    material->SetTexture(texture); // This will create a new instance of the texture
    delete texture;
     
    This isn't a bad setup, but it creates a lot of extra classes. Remember, each of the AssetReference classes actually gets extended for the graphics module, so we have the following classes:

    Asset
    Texture
    AssetReference
    TextureReference
    OpenGL2TextureReference
    OpenGLES2TextureReference
     
    The "Texture" class is the only class the programmer (you) needs to see or deal with, though.
     
    Anyways, this struck me as a bad design. Several months ago I decided I would redo this before the final release. I got rid of all the weird "Reference" classes and instead used reference counting built into the base Object class, from which these are all derived.
     
    The usage for you, the end user, isn't drastically different:
     

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    material->SetTexture(texture);
    texture->Release(); // Decrements the reference counter and deletes the object when it reaches zero
     
    In the Material's destructor, it will actually decrement each of its texture objects, so they will automatically get cleaned up if they are no longer needed. In the example below, the texture will be deleted from memory at the end of the code:

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    material->SetTexture(texture);
    texture->Release();
    material->Release(); // texture's ref count will be decremented to zero and it will be deleted
     
    If you want to make an extra "instance" (not really) of the Asset, just increment the reference counter:
     

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    texture->IncRefCount(); // increments ref count from 1 to 2
    Texture* tex2 = texture;
    texture->Release(); // decrements ref count from 2 to 1
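    For reference, here is a minimal sketch of what an intrusive reference-counted base class generally looks like. This is just an illustration of the concept, not the actual Leadwerks Object implementation, and the member names are made up:
    class Object
    {
    public:
        Object() : refcount(1) {}
        virtual ~Object() {}

        void IncRefCount() { ++refcount; }

        void Release()
        {
            --refcount;
            if (refcount == 0) delete this; // the object frees itself when no references remain (assumes heap allocation)
        }

    private:
        int refcount;
    };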
     
    This makes the code smaller, gets rid of a lot of classes, and I think it will be easier to explain to game studios when I am trying to sell them a source code license.
     
    Naturally any time you make systemic changes to the engine there will be some bugs here and there, but I have the engine and editor running, and mistakes are easy to find when I debug the static library.
     
    I also learned that operator overloading doesn't work with pointers, which I did not know, but had never had a need to try before. Originally, I was going to use operator overloads for the inc/dec ref count commands:

    Texture* tex = Texture::Create(); //refcount=1
    tex++; //refcount=2
    tex--; //refcount=1
    tex--; // texture would get deleted here
     
    But that just messes with the pointer! So don't do things like that.
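    The underlying rule is that C++ only lets you overload operators for class (or enum) types; applied to a raw pointer, ++ and -- always perform pointer arithmetic. A small illustration (the variable names here are just for demonstration):
    Texture obj;
    Texture* tex = &obj;

    obj++; // if Texture defines operator++(int), this calls it and could adjust a ref count
    tex++; // never calls the overload; it just advances the pointer to the next Texture-sized address in memory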
  5. Josh
    The Turbo Game Engine beta is updated! This will allow you to load your own maps in the new engine and see the speed difference the new renderer makes.

    LoadScene() has replaced the LoadMap() function, but it still loads your existing map files.
    To create a PBR material, insert a line into the material file that says "lightingmodel=1". Blinn-Phong is the default lighting model.
    The red and green channels on texture unit 2 represent metalness and roughness.
    You generally don't need to assign shaders to your materials. The engine will automatically select one based on what textures you have.
    Point and spot lights work. Directional lights do not.
    Setting the world skybox only affects PBR reflections and Blinn-Phong ambient lighting. No sky will be visible.
    Physics, scripting, particles, and terrain do not work.
    Variance shadow maps are in use. There are currently some problems with lines appearing at cubemap seams and some flickering pixels. Objects should always cast a shadow or they won't appear correctly with VSMs.
    I had to include glew.c in the project because the functions weren't being detected from the static lib. I'm not sure why.
    The static libraries are huge. The release build is nearly one gigabyte. But when compiled, your executable is small.
    #include "Leadwerks.h"
    using namespace Leadwerks;

    int main(int argc, const char *argv[])
    {
        //Create a window
        auto window = CreateWindow("MyGame", 0, 0, 1280, 720);

        //Create a rendering context
        auto context = CreateContext(window);

        //Create the world
        auto world = CreateWorld();

        //This only affects reflections at this time
        world->SetSkybox("Models/Damaged Helmet/papermill.tex");

        shared_ptr<Camera> camera;
        auto scene = LoadScene(world, "Maps/turbotest.map");
        for (auto entity : scene->entities)
        {
            if (dynamic_pointer_cast<Camera>(entity))
            {
                camera = dynamic_pointer_cast<Camera>(entity);
            }
        }

        auto model = LoadModel(world, "Models/Damaged Helmet/DamagedHelmet.mdl");
        model->Move(0, 1, 0);
        model->SetShadowMode(LIGHT_DYNAMIC, true);

        //Create a camera if one was not found
        if (camera == nullptr)
        {
            camera = CreateCamera(world);
            camera->Move(0, 1, -3);
        }

        //Set background color
        camera->SetClearColor(0.15);

        //Enable camera free look and hide mouse
        camera->SetFreeLookMode(true);
        window->HideMouse();

        //Create a light
        auto light = CreateLight(world, LIGHT_POINT);
        light->SetShadowMode(LIGHT_DYNAMIC | LIGHT_STATIC | LIGHT_CACHED);
        light->SetPosition(0, 4, -4);
        light->SetRange(15);

        while (window->KeyHit(KEY_ESCAPE) == false and window->Closed() == false)
        {
            //Rotate model
            model->Turn(0, 0.5, 0);

            //Camera movement
            if (window->KeyDown(Key::A)) camera->Move(-0.1, 0, 0);
            if (window->KeyDown(Key::D)) camera->Move(0.1, 0, 0);
            if (window->KeyDown(Key::W)) camera->Move(0, 0, 0.1);
            if (window->KeyDown(Key::S)) camera->Move(0, 0, -0.1);

            //Update the world
            world->Update();

            //Render the world
            world->Render(context);
        }
        Shutdown();
        return 0;
    }
  6. Josh
    The Leadwerks Merc character, who I think will go down in history over the next few years second only to the infamous "Crawler", is an experiment. First of all, I wanted a completely custom-made character to develop AI with. This ensured that I was able to get the model made exactly to my specs so that we would have a template for other characters to follow. Fortunately, the script I wrote can easily be used with other models like Arteria's Strike Troop.
     
    The quality of this model is really high and I am really happy to be able to provide high-end content like this for use with Leadwerks. The cost of production was $2965, plus $125 for the weapon model, for a total of $3090. Additional characters can be made at a slightly lower cost, if I choose to go forward with production. At $9.99, we need to sell 309 copies to break even.
     
    Leadwerks model packs sell quite well as DLCs, but so far Workshop Store sales have been slower. It seems that getting people to use the new store is a task itself, because people are unfamiliar with the process.
     
    In the first two weeks, the Merc character has sold 13 copies for a total of $129 in sales (times 70% after the Steam tax). Assuming a flat rate of sales, this means that the character will take one year and four months before sales have covered costs.
     
    This is not an acceptable rate of recovering costs. I am not going forward with additional custom characters based on these results. Not sure how this will go forward, but I am halting all production for now.
  7. Josh
    In the next beta update, the way entity fields work in the script properties will be a little different. Instead of dragging an object from the scene tree onto the field, you will type in the name of the object (or copy and paste the name from the object's name field). This is being done to make the behavior more intuitive. After working with both approaches, I find the new way much easier to use.
     

     
    This does however mean that an object that is used in this manner must have a unique name, since that is what is used to make the linkage. Additionally, prefab limbs can now have their names changed without breaking the prefab, in order to accommodate this design.
     
    Your existing maps will be unaffected until they are resaved in the editor, but if you have multiple target objects with the same name, your map will need to be updated.
     
    Also, the texture list in the material editor is being reverted back to its original design, which was a bit more obvious in its presentation and usage.
     

  8. Josh
    Diving into the innermost workings of the Linux operating system is a pretty interesting challenge. Pretty much nobody implements anything with raw X11 nowadays. They use Qt, SDL, or Cairo for drawing, with Pango for text rendering. The thing is, all of these libraries use X11 as the backend. I need full control and understanding over what my code is doing, so I've opted to cut out the middleman and go directly to the innermost core of Linux.
     
    Today I improved our text rendering by implementing the Xft extension for X11. This library uses FreeType to make TrueType fonts renderable in a Linux window. Although the documentation looks intimidating, usage is actually very simple if you already have a renderable X11 window working.
     
    First, you create an XFT drawable:

    XftDraw* xftdraw = XftDrawCreate(display,drawable,DefaultVisual(display,0),DefaultColormap(display,0));
     
    Now, load your font:

    XftFont* xfont = XftFontOpen(Window::display, DefaultScreen(Window::display), XFT_FAMILY, XftTypeString, "ubuntu", XFT_SIZE, XftTypeDouble, 10.0, NULL);
     
    Drawing is fairly straightforward:

    XRenderColor xrcolor;
    XftColor xftcolor;
    xrcolor.red = 65535;
    xrcolor.green = 65535;
    xrcolor.blue = 65535;
    xrcolor.alpha = 65535;
    XftColorAllocValue(display, DefaultVisual(display,0), DefaultColormap(display,0), &xrcolor, &xftcolor);
    XftDrawString8(xftdraw, &xftcolor, xfont, x, y + GetFontHeight(), (unsigned char*)text.c_str(), text.size());
    XftColorFree(display, DefaultVisual(display,0), DefaultColormap(display,0), &xftcolor);
     
    And if you need to set clipping it's easy:

    XftDrawSetClipRectangles(xftdraw, 0, 0, &xrect, 1); // xrect is an XRectangle describing the clip region
     
    Here is the result, rendered with 11 point Arial font:

     
    As you can see below, sub-pixel antialiasing is used. Character spacing also seems correct:

     
    By using Xft directly we can avoid the extra dependency on Pango, and the resulting text looks great. Next I will be looking at the XRender extension for alpha blending. This would allow us to forgo use of the Cairo graphics library, if it works.
  9. Josh
    Previously I talked about the technical details of hardware tessellation and what it took to make it truly useful. In this article I will talk about some of the implications of this feature and the more advanced ramifications of baking tessellation into Turbo Game Engine as a first-class feature in the new Vulkan renderer.
    Although hardware tessellation has been around for a few years, we don't see it used in games that often. There are two big problems that need to be overcome.
    We need a way to prevent cracks from appearing along edges.
    We need to display a consistent density of triangles on the screen. Too many polygons are a big problem.
    I think these issues are the reason you don't really see much use of tessellation in games, even today. However, I think my research this week has created new technology that will allow us to make use of tessellation as an every-day feature in our new Vulkan renderer.
    Per-Vertex Displacement Scale
    Because tessellation displaces vertices, any discrepancy in the distance or direction of the displacement, or any difference in the way neighboring polygons are subdivided, will result in cracks appearing in the mesh.

    To prevent unwanted cracks in mesh geometry I added a per-vertex displacement scale value. I packed this value into the w component of the vertex position, which was not being used. When the displacement strength is set to zero along the edges the cracks disappear:

    Segmented Primitives
    With the ability to control displacement on a per-vertex level, I set about implementing more advanced model primitives. The basic idea is to split up faces so that the edge vertices can have their displacement scale set to zero to eliminate cracks. I started with a segmented plane. This is a patch of triangles with a user-defined size and resolution. The outer-most vertices have a displacement value of 0 and the inner vertices have a displacement of 1. When tessellation is applied to the plane the effect fades out as it reaches the edges of the primitive:

    I then used this formula to create a more advanced box primitive. Along the seam where the edges of each face meet, the displacement smoothly fades out to prevent cracks from appearing.

    The same idea was applied to make segmented cylinders and cones, with displacement disabled along the seams.


    Finally, a new QuadSphere primitive was created using the box formula, and then normalizing each vertex position. This warps the vertices into a round shape, creating a sphere without the texture warping that spherical mapping creates.

    It's amazing how tessellation and displacement can make these simple shapes look amazing. Here is the full list of available commands:
    shared_ptr<Model> CreateBox(shared_ptr<World> world, const float width = 1.0);
    shared_ptr<Model> CreateBox(shared_ptr<World> world, const float width, const float height, const float depth, const int xsegs = 1, const int ysegs = 1);
    shared_ptr<Model> CreateSphere(shared_ptr<World> world, const float radius = 0.5, const int segments = 16);
    shared_ptr<Model> CreateCone(shared_ptr<World> world, const float radius = 0.5, const float height = 1.0, const int segments = 16, const int heightsegs = 1, const int capsegs = 1);
    shared_ptr<Model> CreateCylinder(shared_ptr<World> world, const float radius = 0.5, const float height = 1.0, const int sides = 16, const int heightsegs = 1, const int capsegs = 1);
    shared_ptr<Model> CreatePlane(shared_ptr<World> world, const float width = 1, const float height = 1, const int xsegs = 1, const int ysegs = 1);
    shared_ptr<Model> CreateQuadSphere(shared_ptr<World> world, const float radius = 0.5, const int segments = 8);
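    As a quick usage sketch based on the signatures above (the world variable is assumed to already exist, and the values are arbitrary):
    auto ground = CreatePlane(world, 10.0, 10.0, 32, 32); // dense patch so displacement can fade out toward the edges
    auto ball = CreateQuadSphere(world, 0.5, 8);          // round shape without spherical-mapping texture warping
    auto pillar = CreateCylinder(world, 0.25, 2.0, 16, 4, 1);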
    Edge Normals
    I experimented a bit with edges and got some interesting results. If you round the corner by setting the vertex normal to point diagonally, a rounded edge appears.

    If you extend the displacement scale beyond 1.0 you can get a harder extended edge.

    This is something I will experiment with more. I think CSG brush smooth groups could be used to make some really nice level geometry.
    Screen-space Tessellation LOD
    I created an LOD calculation formula that attempts to segment polygons into a target size in screen space. This provides a more uniform distribution of tessellated polygons, regardless of the original geometry. Below are two cylinders created with different segmentation settings, with tessellation disabled:

    And now here are the same meshes with tessellation applied. Although the less-segmented cylinder has more stretched triangles, they both are made up of triangles about the same size.

    Because the calculation works with screen-space coordinates, objects will automatically adjust resolution with distance. Here are two identical cylinders at different distances.

    You can see they have roughly the same distribution of polygons, which is what we want. The same amount of detail will be used to show off displaced edges at any distance.

    We can even set a threshold for the minimum vertex displacement in screen space and use that to eliminate tessellation inside an object and only display extra triangles along the edges.

    This allows you to simply set a target polygon size in screen space without adjusting any per-mesh properties. This method could have prevented the problems Crysis 2 had with polygon density. This also solves the problem that prevented me from using tessellation for terrain. The per-mesh tessellation settings I worked on a couple days ago will be removed since they are not needed.
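    To make the idea concrete, here is a rough, generic illustration (not the engine's actual formula) of how a per-edge tessellation factor can be derived from screen-space edge length. The function and parameter names are made up; it assumes the edge endpoints have already been projected to pixel coordinates:
    #include <cmath>

    // Returns how many segments an edge should be split into, given its two endpoints
    // in pixel coordinates and a target polygon size in pixels.
    float EdgeTessFactor(float ax, float ay, float bx, float by, float targetpixels)
    {
        float dx = bx - ax, dy = by - ay;
        float pixellength = sqrtf(dx * dx + dy * dy); // on-screen length of the edge, in pixels
        float factor = pixellength / targetpixels;    // one subdivision per targetpixels of screen length
        if (factor < 1.0f) factor = 1.0f;             // never less than one segment
        if (factor > 64.0f) factor = 64.0f;           // clamp to a typical hardware tessellation limit
        return factor;
    }
    Because the measurement happens in screen space, distant objects project to fewer pixels and automatically receive fewer subdivisions, which is exactly the distance-based LOD behavior described above.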
    Parallax Mapping Fallback
    Finally, I added a simple parallax mapping fallback that gets used when tessellation is disabled. This makes an inexpensive option for low-end machines that still conveys displacement.

    Next I am going to try processing some models that were not designed for tessellation and see if I can use tessellation to add geometric detail to low-poly models without any cracks or artifacts.
  10. Josh
    As I was implementing the collision commands for Leadwerks3D, I noticed a few things that illustrate the differences between the design philosophies of Leadwerks Engine and Leadwerks3D.
     
    You'll easily recognize the new collision commands, although they have been named in a more verbose but logical manner:

    void ClearCollisionResponses();
    void SetCollisionResponse(const int& collisiontype0, const int& collisiontype1, const int& response);
    int GetCollisionResponse(const int& collisiontype0, const int& collisiontype1);
    In Leadwerks Engine, the collisions were left to the end user to define however they wished. In Leadwerks3D, we have some built-in collision types that are declared in the engine source code:

    const int COLLISION_SCENE = 1;
    const int COLLISION_CHARACTER = 2;
    const int COLLISION_PROP = 3;
    const int COLLISION_DEBRIS = 4;
    By default, the following collision responses are created automatically:

    SetCollisionResponse(COLLISION_SCENE,COLLISION_CHARACTER,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_CHARACTER,COLLISION_CHARACTER,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_PROP,COLLISION_CHARACTER,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_DEBRIS,COLLISION_SCENE,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_PROP,COLLISION_PROP,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_DEBRIS,COLLISION_PROP,COLLISION_COLLIDE);
    Each entity's default collision type is COLLISION_PROP, which means that without specifying any collision responses, all entities collide with one another.
     
    Of course if you want to scrap all my suggested collision responses and define your own, you can just call ClearCollisionResponses() at the start of your program. The main difference in design is that Leadwerks3D assumes the user wants some default behavior already specified, instead of being a completely blank slate. That's a design decision that is being implemented across all aspects of the engine to make it easier to get started with.
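    For example, using the commands shown above, you could wipe the defaults and define a minimal scheme of your own. The COLLISION_PROJECTILE constant here is made up for illustration; only the built-in types 1 through 4 are declared by the engine:
    const int COLLISION_PROJECTILE = 5; // custom collision type

    ClearCollisionResponses();
    SetCollisionResponse(COLLISION_SCENE, COLLISION_CHARACTER, COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_PROJECTILE, COLLISION_SCENE, COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_PROJECTILE, COLLISION_CHARACTER, COLLISION_COLLIDE);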
  11. Josh
    CSG carving now works very nicely. The hardest part was making it work with the undo system, which was a fairly complicated problem. It gets particularly difficult when brushes are in the scene hierarchy as a parent of something or a child of something else. Carving involves the deletion of the original object and creation of new ones, so it is very hard to describe in a step-wise process for the undo system, especially when all manner of objects can be intermixed with the brush hierarchy.
     

     
    Eliminating the ability for brushes to be parented to another object, or another object to be parented to them, would solve this problem but create a new one. Compound brushes such as arches and tubes are created as children of a pivot so that when you select one part, the entire assembly gets selected. Rather than a hierarchy, the old "groups" method from 3D World Studio would be much better here. It becomes a problem when you carve a cylinder out of a brush and are left with a lot of fragments that must be individually selected.
     
    At the same time, the scene tree needs some love. There are too many objects in it. The primary offenders are brushes, which maps tend to have a lot of, and limbs in character models.
     

     
    Filters like in Visual Studio could help to eliminate some of the noise here. I would set it up to automatically add new brushes into a "Brushes" folder in the scene tree. What would be really nice is to eliminate model sub-hierarchies from the scene tree. This means the limbs of a model would be effectively invisible to the editor, but would still allow parenting of different objects to one another. In the screenshot below, the crawler skeletal hierarchy is not shown in the editor, but he has a particle emitter parented to him. Opening the model in the model editor will still show the entire sub-hierarchy of bones.
     
    Notice the character has a point light as a child, but its own limb hierarchy is tucked away out of sight.
     

     
    If the model sub-hierarchy were not shown in the scene tree, this would free me from having to treat hierarchies as groups, and implement a grouping feature like 3D World Studio had. Filters could be added as well, instead of relying on a pivot to organize the scene hierarchy.
     

     

     
    To sum up, this change would cause group selection to be based on user-defined groups rather than limb hierarchy. The sub-hierarchy of model limbs would no longer be visible in the editor. Objects could still be parented to one another, and they would be parented to the base model rather than a specific limb.
     
    What changes:
    Ability to adjust individual limbs in model hierarchy. For example, if you wanted to add a script to a tank's turret, the turret should be a separate model file you parent to the tank body in the editor. It would not be possible to set the color of one individual limb in a model file, unless you had a script that found the child and did this. Setting a color of the model would affect the entire model, including all limbs.
    Ability to parent a weapon to one specific bone in the editor. Scripts can still call FindChild() to do this in code, which is what I suspect most people do anyway, because diving into the mesh limbs to find one specific bone is pretty difficult.

     
    What we gain:
    One button to group / ungroup objects.
    No more confusing multi-selection of entire limb hierarchies.
    Compound brushes no longer need a pivot to hold them together, can be created as a new group.
    Carved brush fragments can be automatically grouped for easy selection.
    If model hierarchy changes, reloaded maps match the model file more accurately; no orientation of limbs in unexpected places.
    Faster, less bloated scene tree without thousands of unwanted objects.
    Better entity selection method to replace drag and drop.
    The scene tree in general feels a lot less weighted down and a lot more manageable.

     
    It is possible this may modify the way some maps appear in the editor, but it would be an "edge case". Improvements are needed to accommodate brush carving and address the present limitations of the scene tree. This does not involve any changes to the engine itself, except updating the map load routine to read the additional data.
  12. Josh

    The VK_KHR_dynamic_rendering extension has made its way into Vulkan 1.2.203 and I have implemented this in Ultra Engine. What does it do?
    Instead of creating renderpass objects ahead of time, dynamic rendering allows you to just specify the settings you need as you are filling in command buffers with rendering instructions. From the Khronos working group:
    In my experience, post-processing effects are where this hurt the most. The engine has a user-defined stack of post-processing effects, so there are many configurations possible. You had to store and cache a lot of renderpass objects for all possible combinations of settings. It's not impossible, but it made things very complicated. Basically, you have to know every little detail of how the renderpass object is going to be used in advance. I had several different functions like the code below, for initializing renderpasses that were meant to be used at various points in the rendering routine.
    bool RenderPass::InitializePostProcess(shared_ptr<GPUDevice> device, const VkFormat depthformat, const int colorComponents, const bool lastpass)
    {
        this->clearmode = clearmode;
        VkFormat colorformat = __FramebufferColorFormat;
        this->colorcomponents = colorComponents;
        if (depthformat != 0) this->depthcomponent = true;
        this->device = device;

        std::array< VkSubpassDependency, 2> dependencies;
        dependencies[0] = {};
        dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
        dependencies[0].dstSubpass = 0;
        dependencies[0].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dependencies[0].srcAccessMask = 0;
        dependencies[0].dstStageMask = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
        dependencies[0].dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
        dependencies[1] = {};
        dependencies[1].srcSubpass = VK_SUBPASS_EXTERNAL;
        dependencies[1].dstSubpass = 0;
        dependencies[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dependencies[1].srcAccessMask = 0;
        dependencies[1].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dependencies[1].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;

        renderPassInfo = {};
        renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
        renderPassInfo.attachmentCount = colorComponents;
        renderPassInfo.dependencyCount = colorComponents;
        if (depthformat == VK_FORMAT_UNDEFINED)
        {
            dependencies[0] = dependencies[1];
        }
        else
        {
            renderPassInfo.attachmentCount++;
            renderPassInfo.dependencyCount++;
        }
        renderPassInfo.pDependencies = dependencies.data();

        colorAttachment[0] = {};
        colorAttachment[0].format = colorformat;
        colorAttachment[0].samples = VK_SAMPLE_COUNT_1_BIT;
        colorAttachment[0].initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
        colorAttachment[0].loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        colorAttachment[0].storeOp = VK_ATTACHMENT_STORE_OP_STORE;
        colorAttachment[0].stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        colorAttachment[0].stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
        colorAttachment[0].finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        if (lastpass) colorAttachment[0].finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

        VkAttachmentReference colorAttachmentRef = {};
        colorAttachmentRef.attachment = 0;
        colorAttachmentRef.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

        depthAttachment = {};
        VkAttachmentReference depthAttachmentRef = {};
        if (depthformat != VK_FORMAT_UNDEFINED)
        {
            colorAttachmentRef.attachment = 1;
            depthAttachment.format = depthformat;
            depthAttachment.samples = VK_SAMPLE_COUNT_1_BIT;
            depthAttachment.loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
            depthAttachment.initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;// VK_IMAGE_LAYOUT_UNDEFINED;
            depthAttachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
            depthAttachment.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
            depthAttachment.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
            depthAttachment.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
            depthAttachmentRef.attachment = 0;
            depthAttachmentRef.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
        }
        colorAttachment[0].initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
        depthAttachment.initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;// VK_IMAGE_LAYOUT_UNDEFINED;

        subpasses.push_back( {} );
        subpasses[0].pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpasses[0].colorAttachmentCount = colorComponents;
        subpasses[0].pColorAttachments = &colorAttachmentRef;
        subpasses[0].pDepthStencilAttachment = NULL;
        if (depthformat != VK_FORMAT_UNDEFINED) subpasses[0].pDepthStencilAttachment = &depthAttachmentRef;

        VkAttachmentDescription attachments[2] = { colorAttachment[0], depthAttachment };
        renderPassInfo.subpassCount = subpasses.size();
        renderPassInfo.pAttachments = attachments;
        renderPassInfo.pSubpasses = subpasses.data();
        VkAssert(vkCreateRenderPass(device->device, &renderPassInfo, nullptr, &pass));
        return true;
    }
    This gives you an idea of just how many render passes I had to create in advance:
    // Initialize Render Passes
    shadowpass[0] = make_shared<RenderPass>();
    shadowpass[0]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true);//, CLEAR_DEPTH, -1);
    shadowpass[1] = make_shared<RenderPass>();
    shadowpass[1]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true, true, true, 0);
    if (MULTIPASS_CUBEMAP)
    {
        cubeshadowpass[0] = make_shared<RenderPass>();
        cubeshadowpass[0]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true, true, true, CLEAR_DEPTH, 6);
        cubeshadowpass[1] = make_shared<RenderPass>();
        cubeshadowpass[1]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 0, true, true, true, 0, 6);
    }
    //shaderStages[0] = TEMPSHADER->shaderStages[0];
    //shaderStages[4] = TEMPSHADER->shaderStages[4];
    posteffectspass = make_shared<RenderPass>();
    posteffectspass->InitializePostProcess(dynamic_pointer_cast<GPUDevice>(Self()), VK_FORMAT_UNDEFINED, 1, false);
    raytracingpass = make_shared<RenderPass>();
    raytracingpass->InitializeRaytrace(dynamic_pointer_cast<GPUDevice>(Self()));
    lastposteffectspass = make_shared<RenderPass>();
    lastposteffectspass->InitializeLastPostProcess(dynamic_pointer_cast<GPUDevice>(Self()), depthformat, 1, false);
    lastcameralastposteffectspass = make_shared<RenderPass>();
    lastcameralastposteffectspass->InitializeLastPostProcess(dynamic_pointer_cast<GPUDevice>(Self()), depthformat, 1, true);
    {
        std::vector<VkFormat> colorformats = { __FramebufferColorFormat ,__FramebufferColorFormat, VK_FORMAT_R8G8B8A8_SNORM, VK_FORMAT_R32_SFLOAT };
        for (int earlyZPass = 0; earlyZPass < 2; ++earlyZPass)
        {
            for (int clearflags = 0; clearflags < 4; ++clearflags)
            {
                renderpass[clearflags][earlyZPass] = make_shared<RenderPass>();
                renderpass[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 1, false, false, false, clearflags, 1, earlyZPass);
                renderpassRGBA16[clearflags][earlyZPass] = make_shared<RenderPass>();
                renderpassRGBA16[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), colorformats, depthformat, 4, false, false, false, clearflags, 1, earlyZPass);
                firstrenderpass[clearflags][earlyZPass] = make_shared<RenderPass>();
                firstrenderpass[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 1, false, true, false, clearflags, 1, earlyZPass);
                lastrenderpass[clearflags][earlyZPass] = make_shared<RenderPass>();
                lastrenderpass[clearflags][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), { VK_FORMAT_UNDEFINED }, depthformat, 1, false, false, true, clearflags, 1, earlyZPass);
                //for (int d = 0; d < 2; ++d)
                {
                    for (int n = 0; n < 5; ++n)
                    {
                        if (n == 2 or n == 3) continue;
                        rendertotexturepass[clearflags][n][earlyZPass] = make_shared<RenderPass>();
                        rendertotexturepass[clearflags][n][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), colorformats, depthformat, n, true, false, false, clearflags, 1, earlyZPass);
                        firstrendertotexturepass[clearflags][n][earlyZPass] = make_shared<RenderPass>();
                        firstrendertotexturepass[clearflags][n][earlyZPass]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), colorformats, depthformat, n, true, true, false, clearflags, 1, earlyZPass);
                        // lastrendertotexturepass[clearflags][n] = make_shared<RenderPass>();
                        // lastrendertotexturepass[clearflags][n]->Initialize(dynamic_pointer_cast<GPUDevice>(Self()), depthformat, n, true, false, true, clearflags);
                    }
                }
            }
        }
    }
    With dynamic rendering, you still have to fill in most of the same information, but you can just do it based on whatever the current state of things is, instead of looking for an object that hopefully matches the exact settings you want:
    VkRenderingInfoKHR renderinfo = {};
    renderinfo.sType = VK_STRUCTURE_TYPE_RENDERING_INFO_KHR;
    renderinfo.renderArea = scissor;
    renderinfo.layerCount = 1;
    renderinfo.viewMask = 0;
    renderinfo.colorAttachmentCount = 1;
    targetbuffer->colorAttachmentInfo[0].imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[0] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[1] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[2] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].clearValue.color.float32[3] = 0.0f;
    targetbuffer->colorAttachmentInfo[0].imageView = targetbuffer->imageviews[0];
    renderinfo.pColorAttachments = targetbuffer->colorAttachmentInfo.data();
    targetbuffer->depthAttachmentInfo.clearValue.depthStencil.depth = 1.0f;
    targetbuffer->depthAttachmentInfo.clearValue.depthStencil.stencil = 0;
    targetbuffer->depthAttachmentInfo.imageLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL;
    renderinfo.pDepthAttachment = &targetbuffer->depthAttachmentInfo;
    device->vkCmdBeginRenderingKHR(cb->commandbuffer, &renderinfo);
    Then there is the way render passes affect the image layout state. With the TransitionImageLayout command, it is fairly easy to track the current state of the image layout, but render passes automatically switch the image layout after completion to a predefined state. Again, not impossible to handle in and of itself, but when you add these things into the complexity of designing a full engine, things start to get ugly.
    void GPUCommandBuffer::EndRenderPass()
    {
        vkCmdEndRenderPass(commandbuffer);
        for (int k = 0; k < currentrenderpass->layers; ++k)
        {
            for (int n = 0; n < currentrenderpass->colorcomponents; ++n)
            {
                if (currentdrawbuffer->colortexture[n]) currentdrawbuffer->colortexture[n]->imagelayout[0][currentdrawbuffer->baseface + k] = currentrenderpass->colorAttachment[n].finalLayout;
            }
            if (currentdrawbuffer->depthtexture != NULL and currentrenderpass->depthcomponent == true) currentdrawbuffer->depthtexture->imagelayout[0][currentdrawbuffer->baseface + k] = currentrenderpass->depthAttachment.finalLayout;
        }
        currentdrawbuffer = NULL;
        currentrenderpass = NULL;
    }
    Another example where this was causing problems was with user-defined texture buffers. One beta tester wanted to implement some interesting effects that required rendering to some HDR color textures, but the system was so static it couldn't handle a user-defined color format in a texture buffer. Again, this is not impossible to overcome, but the practical outcome is I just didn't have enough time because resources are finite.
    It's interesting that this extension also removes the need to create a Vulkan framebuffer object. I guess that means you can just start rendering to any combination of textures you want, so long as they use a format that is renderable by the hardware. Vulkan certainly changes a lot of conceptions we had in OpenGL.
    So this extension does eliminate a significant source of problems for me, and I am happy it was implemented.
  13. Josh
    Leadwerks 5 is going to be developed alongside 4.5 with an extended period of beta testing and feedback.  My goal is to build the world's most advanced game design software, tailored to the needs of our community and clients.  Development will first focus on a new programming API updated for C++11 and then a completely new editor using Leadwerks GUI and based on the workflow developed with our current editor.
    The first beta will support the following features right away:
    Shared Pointers
    Unicode support
    64-bit
    These features are already working.  Here is a working build you can try right now:
    Leadwerks5.zip
    This is the actual source code used to make this example.  Note that C++11 shared pointers have been implemented for all objects and the API has been simplified to make your code more readable:
    #include "Leadwerks.h" using namespace Leadwerks; int main(int argc, const char *argv[]) { auto window = CreateWindow(); auto context = CreateContext(window); auto world = CreateWorld(); auto camera = CreateCamera(world); camera->Move(0, 0, -3); auto light = CreateDirectionalLight(world); light->SetRotation(35, 35, 0); light->SetShadowMode(0); auto model = CreateBox(world); model->SetColor(0.0, 1.0, 0.92); while (not window->Closed()) { if (window->KeyHit(EscapeKey)) break; if (model) model->Turn(0, 1, 0); if (window->KeyHit(SpaceKey)) model = nullptr; world->Update(); world->Render(context); } return 0; } The beta will be available to developers who want to try the very latest game development technology, with a subscription plan of just $4.99 a month, available through our website.
  14. Josh
    Previously I described how multiple cameras can be combined in the new renderer to create an unlimited depth buffer. That discussion lead into multi-world rendering and 2D drawing. Surprisingly, there is a lot of overlap in these features, and it makes sense to solve all of it at one time.
    Old 2D rendering systems are designed around the idea of storing a hierarchy of state changes. The renderer would crawl through the hierarchy and perform commands as it went along, rendering all 2D elements in the order they should appear. It made sense for the design of the first graphics cards, but this style of rendering is really inefficient on modern graphics hardware. Today's hardware works best with batches of objects, using the depth buffer to handle which object appears on top. We don't sort 3D objects back-to-front because it would be monstrously inefficient, so why should 2D graphics be any different?
    We can get much better results if we use the same fast rendering techniques we use for 3D graphics and apply it to 2D shapes. After all, the only difference between 3D and 2D rendering is the shape of the camera projection matrix. For this reason, Turbo Engine will use 2D-in-3D rendering for all 2D drawing. You can render a pure 2D scene by setting the camera projection mode to orthographic, or you can create a second orthographic camera and render it on top of your 3D scene. This has two big implications:
    Performance will be incredibly fast. I predict 100,000 uniquely textured sprites will render pretty much instantaneously. In fact anyone making a 2D PC game who is having trouble with performance will be interested in using Turbo Engine.
    Advanced 3D effects will be possible that we aren't used to seeing in 2D. For example, lighting works with 2D rendering with no problems, as you can see below. Mixing of 3D and 2D elements will be possible to make inventory systems and other UI items. Particles and other objects can be incorporated into the 2D display.
    The big difference you will need to adjust to is there are no 2D drawing commands. Instead you have persistent objects that use the same system as the 3D rendering.
    Sprites
    The primary 2D element you will work with is the Sprite entity, which works the same as the 3D sprites in Leadwerks 4. Instead of drawing rectangles in the order you want them to appear, you will use the Z position of each entity and let the depth buffer take care of the rest, just like we do with 3D rendering. I also am adding support for animation frames and other features, and these can be used with 2D or 3D rendering.

    Rotation and scaling of sprites is of course trivial. You could even use effects like distance fog! Add a vector joint to each entity to lock the Z axis in the same direction and Newton will transform into a nice 2D physics system.
    Camera Setup
    By default, with a zoom value of 1.0 an orthographic camera maps so that one meter in the world equals one screen pixel. We can position the camera so that world coordinates match screen coordinates, as shown in the image below.
    auto camera = CreateCamera(world);
    camera->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);
    camera->SetRange(-1,1);
    iVec2 screensize = framebuffer->GetSize();
    camera->SetPosition(screensize.x * 0.5, -screensize.y * 0.5);
    Note that unlike screen coordinates in Leadwerks 4, world coordinates point up in the positive direction.

    We can create a sprite and reset its center point to the upper left hand corner of the square like so:
    auto sprite = CreateSprite(world);
    sprite->mesh->Translate(0.5,-0.5,0);
    sprite->mesh->Finalize();
    sprite->UpdateBounds();
    And then we can position the sprite in the upper left-hand corner of the screen and scale it:
    sprite->SetColor(1,0,0);
    sprite->SetScale(200,50);
    sprite->SetPosition(10,-10,0);
    This would result in an image something like this, with precise alignment of screen pixels:

    Here's an idea: Remember the opening sequence in Super Metroid on SNES, when the entire world starts tilting back and forth? You could easily do that just by rotating the camera a bit.
    Displaying Text
    Instead of drawing text with a command, you will create a text model. This is a series of rectangles of the correct size with their texture coordinates set to display a letter for each rectangle. You can include a line return character in the text, and it will create a block of multiple lines of text in one object. (I may add support for text made out of polygons at a later time, but it's not a priority right now.)
    shared_ptr<Model> CreateText(shared_ptr<World> world, shared_ptr<Font> font, const std::wstring& text, const int size)
    The resulting model will have a material with the rasterized text applied to it, shown below with alpha blending disabled so you can see the mesh background. Texture coordinates are used to select each letter, so the font only has to be rasterized once for each size it is used at:

    Every piece of text you display needs to have a model created for it. If you are displaying the framerate or something else that changes frequently, then it makes sense to create a cache of models you use so your game isn't constantly creating new objects. If you wanted, you could modify the vertex colors of a text model to highlight a single word.
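    A minimal sketch of such a cache, assuming the CreateText() command above and a foreground world and font that already exist (the cache structure and function name are hypothetical):
    #include <map>
    #include <utility>

    std::map<std::pair<std::wstring, int>, shared_ptr<Model> > textcache;

    shared_ptr<Model> GetText(const std::wstring& text, const int size)
    {
        auto key = std::make_pair(text, size);
        auto it = textcache.find(key);
        if (it != textcache.end()) return it->second; // reuse the existing text model
        auto model = CreateText(foreground, font, text, size);
        textcache[key] = model;
        return model;
    }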

    And of course all kinds of spatial transformations are easily achieved.

    Because the text is just a single textured mesh, it will render very fast. This is a big improvement over the DrawText() command in Leadwerks 4, which performs one draw call for each character.
    The font loading command no longer accepts a size. You load the font once and a new image will be rasterized for each text size the engine requests internally:
    auto font = LoadFont("arial.ttf");
    auto text = CreateText(foreground, font, "Hello, how are you today?", 18);
    Combining 2D and 3D
    By using two separate worlds we can control which items the 3D camera draws and which items the 2D camera draws. (The foreground camera will be rendered on top of the perspective camera, since it is created after it.) We need to use a second camera so that 2D elements are rendered in a second pass with a fresh new depth buffer.
    //Create main world and camera
    auto world = CreateWorld();
    auto camera = CreateCamera(world);
    auto scene = LoadScene(world,"start.map");

    //Create world for 2D rendering
    auto foreground = CreateWorld();
    auto fgcam = CreateCamera(foreground);
    fgcam->SetProjection(PROJECTION_ORTHOGRAPHIC);
    fgcam->SetClearMode(CLEAR_DEPTH);
    fgcam->SetRange(-1,1);
    auto UI = LoadScene(foreground,"UI.map");

    //Combine rendering
    world->Combine(foreground);

    while (true)
    {
        world->Update();
        world->Render(framebuffer);
    }
    Overall, this will take more work to set up and get started with than the simple 2D drawing in Leadwerks 4, but the performance and additional control you get are well worth it. This whole approach makes so much sense to me, and I think it will lead to some really cool possibilities.
    As I have explained elsewhere, performance has replaced ease of use as my primary design goal. I like the results I get with this approach because I feel the design decisions are less subjective.
  15. Josh
    My productivity has not been good lately because I am agonizing over where to live. I don't like where I am right now. Decisions like this were easy in college because I would just go where the best program was for what I was interested in. It's more difficult now to know if I am making the right choice.
     
    I can "go big" and move to San Francisco, which weirdly has become the capitol of tech. I lived there before and know all the craziness to expect. I used to work at 24th and Mission (before Leadwerks existed) which was and still is a dangerous neighborhood, but if you go one block north there's now an Alamo Drafthouse where drug dealers used to offer me crack cocaine. (I was gentrifying the Mission before it was cool.) San Francisco is a stressful place full of insecure nasty people but there's also a ton of stuff to do and it's full of world-class tech talent and a lot of interesting cool people from all walks of life.
     
    The other option is to move out to the midwest somewhere and buy a house for cheap, and just chill out and work. I won't have to spend time fighting with insane traffic, trying to get groceries, and dodging hobo poop on the sidewalk.
     
    At the same time, Leadwerks has reached a level of success where I can easily get the attention of people in the game industry I previously did not have access to. The 10,000 user announcement (set to hit 20,000 this summer) was a big deal.
     
    Basically, I need to get relocated in the next two weeks, open a new office, and get back to work. I'm having a hard time deciding this.
  16. Josh
    I've been following Eric's progress with this idea since last spring, and it's really great to see what started as a fuzzy concept turn into reality.
     

    http://www.youtube.com/watch?v=pK_yjZpOs6w
     
    About Hacker Lab
    We believe that technology can change the world and the starting point is education. Hacker Lab aims to educate folks and seed startups with community driven resources. Collectively we can build a brighter future using lean methods in both education and business.
     
    We Provide
    Community Mentorship For Startups
    Educational Courses For Both Adults & Youth
    Networking Events
    Industry Events
    Workspace for Students and Professionals in the Software and Hardware space including Design & Business Development.

     
    Here's the local media coverage of the event:
    http://gooddaysacram...740-hacker-lab/
     
    I'll be teaching a class on OpenGL on October 24th. Check the site for details.
     
    www.hackerlab.org
     

     

  17. Josh
    Previously, I laid out a pretty complete design of the racing game template I want to build. There's definitely enough there to build out the concept with very little left unanswered.
     
    We are going to have a marble game template, because of its simplicity and ease of learning. However, unless the template really looks like a game people would want to play, it doesn't offer enough "carrot" to inspire people. This idea is less well-defined than the racing game, so I am only in the idea stage right now. There's two big aspects of this I want to figure out: level design and game mechanics.
    Level Design
    If you look at a lot of this type of game on Steam, they pretty much all just use some random blocks or generic levels and place a ball in it. This is what I do not want:
     

     
    A game of this type should have a highly ordered repeating geometry.
     

     

     
    These examples are more orthogonal than I would like, but it's a start. I definitely want the base of it to be a checkered pattern. I do not want random blocks. See how the Sonic level design uses some trim along the edges to make it look nicer?
     
    Here's an example of some more organic elements mixed into the design:
     

     
    Here's a nice curve. Not exactly what I want, but it provides some ideas:
     

     
    I like the curving plastic pieces here:
     

     
    It should look like a toy kit you put together. The wooden elements are also pretty cool.
     
    So here's my quick summary of what I want in the level design:
    Mostly orthogonal and diagonal brushes with a checkerboard pattern.
    Lush vegetation to break up edges and make the world less boxy.
    Curved pre-formed pieces that look like plastic and wooden toys.
    Overall I want the world to feel like an old Nintendo fantasy world with an adventurous feel. The music might not be 8-bit, but should have some of that style:
    http://www.playonloop.com/2017-music-loops/doctor-gadget/
     
    The marble itself should have a distinctive look. Maybe a reflective silver ball-bearing, using the SSG effect to reflect the world around it?
    Here are some concept textures.




    Mechanics
    I am pretty foggy on this because I don't play this type of game. Fans? Moving platforms? Loop-de-loops? Got any suggestions or examples from other games you would like to see me implement? If you have a video of gameplay, that's even better.
    The idea is to implement a few reusable mechanics that can be recombined in interesting ways, so that someone can make a finished game just by recombining them and making additional levels. I also favor mechanics that are associated with a specific model so that they are easily recognizable, like fans, magnets, saws, etc.
    Tell me your ideas in the comments below.
  18. Josh
    Leadwerks 4.x will see a few more releases, each remaining backwards compatible with previous versions.  Leadwerks 5.0 is the point where we will introduce changes that break compatibility with previous versions.  The changes will support a faster design that relies more on multi-threading, updates to take advantage of C++11 features, and better support for non-English speaking users worldwide.  Changes are limited to code; all asset files including models, maps, shaders, textures, and materials will continue to be backwards-compatible.
    Let's take a look at some of the new stuff.
    Shared Pointers
    C++11 shared pointers eliminate the need for manual reference counting.  Using the auto keyword will make it easier to update Leadwerks 4 code when version 5 arrives.  You can read more about the use of shared pointers in Leadwerks 5 here.
    Unicode Support
To support the entire world's languages, Leadwerks 5 will make use of Unicode everywhere.  All std::string variables will be replaced by std::wstring.  Lua will be updated to the latest version, 5.3.  This is not compatible with LuaJIT, but the main developer has left the LuaJIT project and it is time to move on.  Script execution time is not a bottleneck, Leadwerks 5 gives your game code a much longer window of time to run in, and I don't recommend building complex VR games in Lua anyway.  So I think it is time to update.
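As a small, hypothetical example of what this looks like in practice (the exact Leadwerks 5 signatures are not final), any function that takes a path or text would accept a wide string, so non-English file names just work:
std::wstring path = L"Modèles/boîte.mdl"; //a non-ASCII path
auto model = Model::Load(path);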
    Elimination of Bound Globals
    To assist with multithreaded programming, I am leaning towards a stateless design with all commands like World::GetCurrent() removed.  An entity needs to be explicitly told which world it belongs to upon creation, or it must be created as a child of another entity:
auto entity = Pivot::Create(world);
I am considering encapsulating all global variables into a GameEngine object:
class GameEngine
{
public:
    std::map<std::string, std::weak_ptr<Asset> > loadedassets;
    shared_ptr<GraphicsEngine> graphicsengine;
    shared_ptr<PhysicsEngine> physicsengine;
    shared_ptr<NetworkEngine> networkengine;
    shared_ptr<SoundEngine> soundengine;
    shared_ptr<ScriptEngine> scriptengine; //replaces Interpreter class
};
A world would need the GameEngine object supplied upon creation:
auto gameengine = GameEngine::Create();
auto world = World::Create(gameengine);
When the GameEngine object goes out of scope, the entire game gets cleaned up and everything is deleted, leaving nothing in memory.
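Here is a sketch of what that cleanup behavior implies, using the classes shown above; the braces are just an ordinary C++ scope:
{
    auto gameengine = GameEngine::Create();
    auto world = World::Create(gameengine);
    auto entity = Pivot::Create(world);
    //...run the game...
} //gameengine, world, and entity all go out of scope here, and everything is freed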
    New SurfaceBuilder Class
    To improve efficiency in Leadwerks 5, surfaces will no longer be stored in system memory, and surfaces cannot be modified once they are created.  If you need a modifiable surface, you can create a SurfaceBuilder object.
auto builder = SurfaceBuilder::Create(gameengine);
builder->AddVertex(0,0,0);
builder->AddVertex(0,0,1);
builder->AddVertex(0,1,1);
builder->AddTriangle(0,1,2);
auto surface = model->AddSurface(builder);
When a model is first loaded, before it is sent to the rendering thread for drawing, you can access the builder object that is loaded for each surface:
auto model = Model::Load("Models/box.mdl", Asset::Unique);
for (int i = 0; i < model->CountSurfaces(); ++i)
{
    auto surface = model->GetSurface(i);
    shared_ptr<SurfaceBuilder> builder = surface->GetBuilder();
    if (builder != nullptr)
    {
        for (int v = 0; v < builder->CountVertices(); ++v)
        {
            Vec3 position = builder->GetVertexPosition(v);
        }
    }
}
98% of the time there is no reason to keep vertex and triangle data in system memory.  For special cases, the SurfaceBuilder class does the job, and includes functions that were previously found in the Surface class like UpdateNormals().  This will prevent surfaces from being modified by the user when they are in use in the rendering thread.
    A TextureBuilder class will be used internally when loading textures and will operate in a similar manner.  Pixel data will be retained in system memory until the first render.  These classes have the effect of keeping all OpenGL (or other graphics API) code contained inside the rendering thread, which leads to our next new feature...
    Asynchronous Loading
    Because surfaces and textures defer all GPU calls to the rendering thread, there is no reason we can't safely load these assets on another thread.  The LoadASync function will simply return true or false depending on whether the file was able to be opened:
bool result = Model::LoadASync("Models/box.mdl");
The result of the load will be given in an event:
while (gameengine->eventqueue->Peek())
{
    auto event = gameengine->eventqueue->Wait();
    if (event.id == Event::AsyncLoadResult)
    {
        if (event.source->GetClass() == Object::ModelClass)
        {
            auto model = static_cast<Model*>(event.source.get());
        }
    }
}
Thank goodness for shared pointers, or this would be very difficult to keep track of!
    Asynchronous loading of maps is a little more complicated, but with proper encapsulation I think we can do it.  The script interpreter will get a mutex that is locked whenever a Lua script executes so scripts can be run from separate threads:
gameengine->scriptengine->execmutex->Lock();
//Do Lua stuff
gameengine->scriptengine->execmutex->Unlock();
This allows you to easily do things like make an animated loading screen.  The code for this would look something like below:
Map::LoadASync("Maps/start.map", world);
bool loaded = false;
while (!loaded)
{
    while (EventQueue::Peek())
    {
        auto event = EventQueue::Wait();
        if (event.id == Event::AsyncLoadResult)
        {
            loaded = true;
            break;
        }
    }
    loadsprite->Turn(0,0,1);
    world->Render(); //Context::Sync() might be done automatically here, not sure yet...
}
All of this will look pretty familiar to you, but with the features of C++11 and the new design of Leadwerks 5 it opens up a lot of exciting possibilities.
  19. Josh
    Previously, I described the goals and philosophy that were guiding my design of our implementation of the Leadwerks Workshop on Steam. To review, the goals were:
    1. Frictionless sharing of items within the community.
    2. Protection of intellectual property rights.
    3. Tracking of the chain-of-authorship and support for derivative works.
     
    In this update I will talk more specifically about how our implementation meets these goals.
     
Our implementation of the Steam Workshop allows Leadwerks developers to publish game assets directly to Steam. A Workshop item is typically a pack of similar files, like a model or texture pack, rather than a single file:

     
    To add an item to Leadwerks, simply hit the "Subscribe" button in Steam and the item will become available in a new section of the asset browser:

     
You can drag Workshop files into your scene and use them, just like regular files. However, you never need to worry about managing these files; all subscribed items are available in the editor, no matter what project you are working on. When a file is used in a map or applied to a model, a unique global ID for that file is saved, rather than a file path. This allows the item author to continue updating and improving the file without you ever having to re-download files, extract zip archives, or deal with any other mess. Effectively, we are bringing the convenience of Steam's updating system to our community, so that you can work together more effectively. Here's one of the tutorial maps using materials from a sci-fi texture pack from the Workshop. When the map is saved, the unique file IDs are stored so I can share the map with others.

     
    Publishing your own Workshop packages is easy. A built-in dialog allows you to set a title, description, and a preview image. You can add additional images and even videos to your item in Steam:

     
Leadwerks even has support for derivative works. You can create a model, prefab, or map that uses another Workshop file and publish it to Steam. Since Leadwerks tracks the original ID of any Workshop items you used, they will always be pulled from the original source. This allows an entirely new level of content authors to add value to items downstream from their origin, in a way similar to how Linux distributions have grown and evolved. For example, maybe you don't have the artistic skill to make every single texture you need for a house, but you can put together a pretty nice house model and paint it with another user's textures. You can then upload that model right back to the Workshop, without "ripping off" the texture artist; their original package will still be needed to load the textures. It's perfectly fine to change the name of your Workshop package at any time, and you never need to worry about your file names conflicting with files in other packages. (If you decide you want to change a lot of file names, it's best to just create a new package so that you don't interrupt the work of users "downstream" from you.)
     
    Uninstalling a Workshop package just requires you to hit the "unsubscribe" button on the item's page in the Steam Workshop. No more hunting around for stray zip files! You can easily check out other users' work, use whatever you like, and unsubscribe from the packages you don't like, with no mess at all.
     
    How Do I Get It?
    The Leadwerks Workshop beta begins today. You must be a member of the Leadwerks Developer group on Steam to access the Workshop. A limited number of beta invites are being sent out. Once the system is completely polished, we will make it available to the entire Leadwerks community.
  20. Josh
    I'm really happy to see all the great and scary games coming out for the Halloween Game Tournament. I have something of my own to share with you.
     
The Leadwerks 3 vegetation system is a groundbreaking new technology. All vegetation instances are procedural and dynamically placed, so that physics and rendering of any number of instances can occur with zero memory usage. This demo proves that the idea actually works!
     
This demo shows the new Leadwerks Game Engine 3 vegetation system in action. It uses three vegetation layers for trees, bushes, and dense grass. Without any adjustment of vegetation settings for different hardware, I am getting the following results:
    NVidia GEForce GTX 780: 301 FPS
    AMD Radeon R9 290: 142 FPS
    Intel HD Graphics 400: 15 FPS

     
    The view distances can be further adjusted to make the system run faster on low-end hardware.
     

  21. Josh
    This is the final output of Leadwerks 3 on Windows.
     
    When you choose "Windows" in the Publish dialog, a few options appear. Press OK and your game's installer is created.

     
    The installer looks like this. You can replace the installer images with your own if you like:



  22. Josh
    Last week we launched our Steam Greenlight campaign to get Leadwerks into the hands of the Steam community. This week, we're rolling out the second stage of our plan with a Kickstarter campaign to bring Leadwerks to Linux. This will let you build and play games, without ever leaving Linux. The result of this campaign will be Leadwerks 3.1 with a high-end AAA renderer running on Linux, Mac, and Windows, with an estimated release date before Christmas.
     
    Valve has given Linux users a taste of PC gaming, so now it's up to us to reach the Linux community with our message. If you dig this, please help spread the word that someone is trying to put game development on Linux:
    http://www.kickstarter.com/projects/1937035674/leadwerks-build-linux-games-on-linux
     



     
    Leadwerks for Linux
    Linux is a solid and secure operating system that’s perfect for gaming, but at this time Windows remains the lead platform for PC games. We want to change that by putting the game development process right on Linux, with Leadwerks for Linux. This will allow you to build and play games without ever leaving the Linux operating system.
     
Leadwerks is a visual tool for building any kind of 3D game, including dungeon crawlers, first-person shooters, and side-scrollers. We want to put game development on Linux with Leadwerks for Linux. Our campaign has three goals:
     
    Linux Game Development. On Linux.
    It’s not enough just to export games to Linux. We want to put the game development process on Linux, so you can build and play games, without ever leaving the Linux operating system. We have a complete visual editor that handles all aspects of the game development process, and we’re porting it to run natively on Linux. We’re using GTK for the user interface, so our editor will look and feel like a native Linux application.
     
    We're targeting Ubuntu 12.04 to start with, and will support other distros as we make progress. You'll also be able to compile games for Windows and Mac...if you feel like sharing.
     

     
    Expand the Linux Library of Games
Our second goal is to facilitate expansion of the Linux library of games, and encourage the production of Linux-exclusive titles. The Linux community is pretty intelligent and has a lot of good programmers. We think that putting the appropriate tools in their hands will enable them to make great Linux games.
     

    Hoodwink by E-One Studio
     
    AAA Graphics on Linux
    Leadwerks is known for having great graphics. We want to push Linux graphics beyond anything that’s ever been done. Linux is the perfect platform for triple-A graphics, because it has OpenGL performance faster than Windows or Mac. We’re taking advantage of this performance with deferred lighting, hardware tessellation, and up to 32x multisample antialiasing.
     

    The Zone by Dave Lee
     
    Steam Integration
    When Valve announced Steam was coming to Linux, that was a clear sign to us that Linux is ready for PC gaming. We’re working to integrate Leadwerks with Steam and take advantage of new features Steam offers for developers.
     

     
    Steam Workshop
    We’re hooking into the Steam Workshop to deliver game assets. This includes models, textures, scripts, and maps, so you can get everything you need to make games. When you find an object in the Steam Workshop you want to use in your game, just hit the “Subscribe” button and it will show up right away, ready to use in Leadwerks. We’re also adding support for Valve’s asset formats, so you can access lots of great content from the rest of the Steam Workshop, and add it to your game.
     
    Export for Steam
We’re working with the Steam SDK to make it easier to submit Linux games to Greenlight. Just press a button, and your game files will be packaged up, ready to send to Steam.
     
    Features
    Leadwerks is a powerful yet easy to use game engine with thousands of users worldwide. Here are just a few of the main reasons we think Linux users will love Leadwerks.
     
    C++ Programming
    Programming with Leadwerks is a breeze. Underneath our visual editor lies a powerful yet easy to use programming API that can be accessed in C++, Lua, and other languages. With documentation and examples for every single command, you’ve got everything you need to make any kind of game.
     
    Visual Scripting
    For scripting, we use the Lua script language, just like in Crysis, World of Warcraft, and hundreds of other games. We’ve got a built-in script editor, so you don’t have to switch back and forth between Leadwerks and an external editor. It’s even got a built-in debugger so you can step through your script and see everything that’s going on in the game. The flowgraph editor is used to connect scripted objects and make gameplay happen. This lets map designers set up sequences of events and complex gameplay, with no programming required.
     
    Constructive Solid Geometry
    Finally, we use a level editor based on constructive solid geometry. This lets everyone make game levels, without having to be an expert. If you’re familiar with Valve’s Hammer Editor, you’ll feel right at home in Leadwerks.
     

    Combat Helo by Tricubic Studios
     
    We plan to deliver a visual editor that handles every aspect of the game development process, a powerful yet easy to use programming API, with triple-A graphics, all running natively in Linux. By working with Steam and the Linux community, our goal is to make Linux the number one platform for PC gaming. Thank you for helping us take Linux gaming to the next level.
     

    Big Five Game Hunter by Unidev
     
    Risks and challenges
    We expect to encounter some graphics driver bugs. This is always the case when you are pushing advanced graphics. Fortunately, we have good relationships with the major graphics hardware vendors, and have been able to get driver bugs fixed on other platforms in the past. Valve Software has done some of the heavy lifting for us here, by prompting the graphics hardware vendors to get their drivers in good shape.
     
    Our GUI has a GTK implementation for Linux, but we expect to encounter some problems that have to be overcome. Our GTK Scintilla implementation (for the code editor) has not been written, and it's a complex library.
     
    Since the Linux file system is case-sensitive, we expect to have to modify some code to work properly on Linux.
     
    We're implementing a new method for terrain layers using virtual texturing. We do not anticipate any problems here, but it is one of the few features we haven't fully prototyped.
     
    Although building Leadwerks for Linux will undoubtedly present some difficult problems, our team has a lot of experience with multi-platform development and I'm confident we can deal with all issues we encounter.
  23. Josh
PBR materials look nice, but their reflections are only as good as the reflection data you have. Typically this is done with hand-placed environment probes that take a long time to lay out and display a lot of visual artifacts. Nvidia's RTX raytracing technology is interesting, but it struggles to run old games on a super-expensive GPU. My goal in Leadwerks 5 is to have automatic reflections and global illumination that don't require any manual setup, with fast performance.
    I'm on the final step of integrating our voxel raytracing data into the standard lighting shader and the results are fantastic. I found I could compress the 3D textures in BC3 format in real-time and save a ton of memory that way. However, I discovered that only about 1% of the 3D voxel texture actually has any data in it! That means there could be a lot of room for improvement with a sparse voxel octree texture of some sort, which could allow greater resolution. In any case, the remaining implementation of this feature will be very interesting. (I believe the green area on the back wall is an artifact caused by the BC3 compression.)
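To illustrate why that 1% figure matters, here is a standalone sketch (not the engine's actual code) that measures how much of a dense voxel volume contains data. A sparse voxel octree only stores the occupied nodes, so its memory scales with that small percentage instead of with the full volume:
#include <cstdio>
#include <cstdint>
#include <vector>

int main()
{
    const int size = 64; //a 64x64x64 voxel volume
    std::vector<uint32_t> voxels(size * size * size, 0); //0 = empty, nonzero = lit/occupied

    //Pretend only a thin shell of geometry got voxelized (typical for walls and floors).
    for (int x = 0; x < size; ++x)
        for (int z = 0; z < size; ++z)
            voxels[(0 * size + z) * size + x] = 0xFFFFFFFF; //floor layer at y = 0

    size_t occupied = 0;
    for (uint32_t v : voxels) if (v != 0) ++occupied;

    double percent = 100.0 * occupied / voxels.size();
    std::printf("%zu of %zu voxels occupied (%.2f%%)\n", occupied, voxels.size(), percent);
    return 0;
}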
I think I can probably render the raytracing component of the scene in a separate smaller buffer and then denoise it, like I did with SSAO, to make the performance hit negligible. Another interesting thing is that the raytracing automatically creates its own ambient occlusion effect.
    Here is the current state, showing the raytraced component only. It works great with our glass refraction effects.
    Next I will start blending it into the PBR material lighting calculation a little better.
    Here's an updated video that shows it worked into the lighting more:
     
  24. Josh
    Jorn Theunissen has been hired on a part-time basis to help design a new set of documentation for Leadwerks. We're taking a top-down approach whereby he designs an outline that encompasses everything you need to know, assuming the reader starts with zero knowledge. The goal is to explain everything in one set of information, down to the level of describing what a mipmap actually is.
     
The organization of the material will be a linear list of lessons, with no nesting or sub-sections. This makes it easy to track your progress as you go through the material, and it's very clear where information is, with big thumbnails and descriptions for each chapter. Here is a mock-up of what the organization might look like:
     

     
Benjamin Rössig has also been hired on a part-time basis to work on various features I don't have time for. He has contributed to Leadwerks in the past with the original program update system from Leadwerks 2, some of the Steamworks API calls, and the Scintilla-based text editor. He is well-versed in C++, 3D graphics, and BlitzMax, which made this an easy choice.
     
    Plans for adding paid items to the Workshop are underway and I am working with about 20 third-party artists to bring their content to the Leadwerks Workshop. If you are interested in selling your game items in the Leadwerks Workshop and want to get in before the launch, please contact me.
     
    Leadwerks Game Player is in testing. There are various challenges as some of this technology is very new, but we'll keep iterating on this, improving it, and come up with a polished result for the end user. There is a major event coming in June that will give your Leadwerks Lua games major exposure. If you have a game you want to publish in the Game Player please contact me.
     
Finally, Leadwerks 3.5 is scheduled for release at the end of April. This will include CSG boolean operations, scene filters, groups, a new game project template, and possibly some features Benjamin is researching now.
  25. Josh
An update is available on the beta branch on Steam. This updates only the compiled executables for Lua, and only on Windows.

    Optimization
    I've rewritten the way lights and objects affect each other. In the old mobile renderer it was necessary to store a list of lights that affect each entity, because it was using a forward renderer and had to send the information for the nearest four lights to the object's shader to calculate lighting. This was a complicated task and is not needed with a deferred renderer. 
    The C++ PlayAnimation routine had a bug that would cause the animation to keep updating invisibly, causing shadows to continue rendering when they did not need to be. This is fixed.
     
    I also fixed some inefficiency that would affect point and spot lights, even if their shadows were not being updated.
     
Finally, I set it so that in medium and low quality lighting modes, only one shadow will be updated per frame, unless the player is close to the shadow volume. This allows you to have many lights updating, but only one gets its shadow map redrawn each frame. This can cause some small visual glitches, but the high quality setting (World:SetLightQuality(2)) can be used to always render everything.
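For illustration, here is a standalone sketch of that scheduling idea: round-robin one shadow map update per frame, but always update lights whose shadow volume the player is near. The Light struct and distance test are hypothetical stand-ins, not the engine's actual classes.
#include <cmath>
#include <cstdio>
#include <vector>

struct Light
{
    float x, y, z;
    float range;
    bool shadowDirty = true;
};

void UpdateShadows(std::vector<Light>& lights, float px, float py, float pz, size_t& nextIndex)
{
    //Always redraw shadows the player is close enough to notice changing.
    for (Light& light : lights)
    {
        float dx = light.x - px, dy = light.y - py, dz = light.z - pz;
        if (light.shadowDirty && std::sqrt(dx * dx + dy * dy + dz * dz) < light.range)
        {
            light.shadowDirty = false; //pretend we redrew the shadow map here
        }
    }
    //Then give exactly one remaining light its turn this frame.
    for (size_t i = 0; i < lights.size(); ++i)
    {
        size_t index = (nextIndex + i) % lights.size();
        if (lights[index].shadowDirty)
        {
            lights[index].shadowDirty = false; //redraw this one shadow map
            nextIndex = (index + 1) % lights.size();
            break;
        }
    }
}

int main()
{
    std::vector<Light> lights = { {0, 3, 0, 5}, {20, 3, 0, 5}, {40, 3, 0, 5} };
    size_t nextIndex = 0;
    UpdateShadows(lights, 1, 1, 0, nextIndex); //player standing near the first light
    for (const Light& light : lights)
        std::printf("dirty: %d\n", light.shadowDirty ? 1 : 0);
    return 0;
}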
     
    Performance Benchmarks (4.3 beta vs 4.1/4.2)
    One more Day: 15% faster
    A Demon's Game: 17% faster
    The Garden: 20% faster
    Vectronic: 23% faster
    AI and Events map: 200% faster

     
    This build should be considered unstable. It will crash on some games.