Blog Entries posted by Josh

  1. Josh
    This is a good time to write about some very broad changes I expect to come about over the next year in our community as our new engine "Turbo" arrives. Turbo Game Engine, as the name suggests, offers really fast performance using a groundbreaking Vulkan-based renderer, which is relevant to everyone but particularly beneficial for VR developers who struggle to keep their framerates up using conventional game engines. I want to help get you onboard with some of the ideas that I myself am processing.
    Less emphasis on how-to tutorials, more emphasis on API documentation
    The new engine assumes you are either an artist or a programmer, and if you are a programmer you already know basic C++ or Lua. More attention will be paid to precisely documenting how commands behave. There will be a more strict division between supported and unsupported features. There will be less "guessing" what the user is trying to do, and more formal documentation saying "if you do X then Y will occur". For example, every entity creation function requires the world object be explicitly supplied in the creation command, instead of hiding this away in a global state. There will not be tutorials explaining what a variable is or teaching basic programming concepts.
    More responsiveness to user requests, especially for programming features
    Leadwerks 4 features have been in a semi-frozen state for a while now. Although many new features have been added, I have not wanted to create breaking changes, and have been reluctant to introduce things that might create new bugs, because I knew an entire new infrastructure for future development was on the way. With the new engine I will be more receptive to suggestions that make the engine better. One example would be an animation events system that lets users set a point in an animation where an event is called. These changes need to be implemented within the design philosophy of the new engine. For example, I would use an Actor class method to call the event function rather than a raw pointer. Emphasis should be placed on what is practical and useful for competent programmers and artists, and how everything fits into the overall design.
    Fewer attempts at hand-holding for new developers
    The new engine will not attempt to teach children to make their own MMORPG. Our marketing materials will not even suggest this is possible. The new engine will deliver performance faster than any other game engine in the world, period. Consequently, I think the community will gain a lot more advanced users, and though some of them will not even interact on the forum, I do think you will see more organic creativity and quality. In its own way, the new engine actually is quite a lot easier to work with, but the sales pitch is not going to emphasize that; it will just be something people discover as they use it. I love seeing all the weird and cool creations that come from people who are completely new to game development, but the people who were new to game development and did well with Leadwerks had a lot of natural talent. Instead of trying to come up with a magic combination of features and tutorials to turn novices into John Carmack, we are going to rely on the product benefits to draw them in and expect them to get up to speed quickly. Discussions should be about what is best for intermediate and expert users, not trying to figure out what beginners want. Ease of use is subjective, and I feel we have hit the point of diminishing returns chasing after it. If beginners want to jump in and learn, that is great, but it is not our reason for existing.
    Stronger focus on the core essentials
    At the time of this writing, there are only eight entity types in the beta of the new engine. We can't win based on number of features, but we can do the core essentials much better than anyone else. Our new Vulkan renderer offers performance that developers (especially VR developers) can't live without. Models, lights, and rendering are the core features I want to focus on, and these can be expanded by the end user to create their own. For example, a custom particle system with support for all kinds of behaviors could easily be created with the model class and a few custom shaders, without breaking the performance that makes this engine valuable. Our new technology is very well thought out and will give us a stable base for a long time. I am planning a plugin / extensions system, because it's best for this to be integrated into the core design, but you should not expect it to be very useful for a couple of years. Plugin systems require huge network effects to offer anything valuable. We can only reach that kind of scale by offering something else unique that no one can match us on. Fortunately, we have something. It's right in the name.
    More formal support for good standards
    Vulkan has turned out to be a very good move. I don't think anyone realizes how big a deal glTF support is yet: you can download thousands of models from Sketchfab and other sources and load them right now with no adjustments. I may join the Khronos consortium, and I have some ideas for additional useful glTF extensions. I'm using JSON for a lot of files and it's great. DDS will be our main texture file format. There are more good standards today than there were ten years ago, and I will adopt the ones that fit our goals.
    Different type of new user appearing
    With Leadwerks, the average new user appears on the forum and says “hey, I want to make a game but I don’t really know how, please tell me what I need to know.” With the new engine I think it will be more like “hey, I’m more or less an expert already, I know exactly what I want to make, please tell me what I need to know.” I expect them to have less tolerance for bugs, undefined behavior, or undocumented features, and at the same time I think it will be easier to have frank discussions about exactly what developers need.
    In very general terms that is how I want to focus things. I think everyone here will adjust to this more strict and well-defined approach and end up liking it a lot better.
  2. Josh
    Leadwerks 5 uses a different engine architecture with a game loop that runs at either 30 (the default) or 60 updates per second. Frames are passed to the rendering thread, which runs at an independent framerate that can be set to 60, 90, or unlimited. This is great for performance, but it creates some challenges in timing. To smooth out motion, the renderer interpolates between the results of the last two frames it received. Animation is a big challenge for this. There could potentially be many, many bones, and interpolating entire skeletons could slow down the renderer.
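The interpolation step can be sketched in a few lines. This is a minimal illustration, not the engine's actual code; the names `Interpolate` and `BlendFactor` are hypothetical. The renderer keeps the last two snapshots from the game thread and blends them by how far the render clock has advanced into the current update interval:

```cpp
#include <cassert>

// Illustrative stand-in for an engine vector type.
struct Vec3f { float x, y, z; };

// Linear blend between the two most recent game-thread snapshots.
Vec3f Interpolate(const Vec3f& prev, const Vec3f& curr, float alpha)
{
    return { prev.x + (curr.x - prev.x) * alpha,
             prev.y + (curr.y - prev.y) * alpha,
             prev.z + (curr.z - prev.z) * alpha };
}

// Blend factor derived from clocks: updateInterval is 1/30 or 1/60 of a
// second, lastUpdateTime is when the newest snapshot arrived.
float BlendFactor(double renderTime, double lastUpdateTime, double updateInterval)
{
    double alpha = (renderTime - lastUpdateTime) / updateInterval;
    if (alpha < 0.0) alpha = 0.0;
    if (alpha > 1.0) alpha = 1.0;
    return (float)alpha;
}
```

At 30 updates per second with the renderer running at 60 FPS, each game frame gets roughly two rendered frames, with alpha near 0.5 on the intermediate one.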
    In the screen capture below, I have slowed the game update loop down to 5 updates per second to exaggerate the problem that occurs when no interpolation is used:

    My solution was to upload the 4x4 matrices of the previous two frames and perform the tweening inside the vertex shader:
    //Vertex Skinning
    mat4 animmatrix[8];
    for (int n = 0; n < 4; ++n)
    {
        if (vertex_boneweights[n] > 0.0f)
        {
            animmatrix[n] = GetAnimationMatrix(vertex_boneindices[n], 0);
            animmatrix[n + 4] = GetAnimationMatrix(vertex_boneindices[n], 1);
        }
    }
    vec4 vertexpos = vec4(vertex_position, 1.0f);
    vec4 modelvertexposition = vec4(0.0f); //initialize the accumulator
    for (int n = 0; n < 4; ++n)
    {
        if (vertex_boneweights[n] > 0.0f)
        {
            modelvertexposition += animmatrix[n] * vertexpos * vertex_boneweights[n] * rendertweening
                + animmatrix[n + 4] * vertexpos * vertex_boneweights[n] * (1.0f - rendertweening);
        }
    }
    modelvertexposition = entitymatrix * modelvertexposition;
    Bone matrices are retrieved from an RGBA floating point texture with this function:
    mat4 GetAnimationMatrix(const in int index, const in int frame)
    {
        ivec2 coord = ivec2(index * 4, gl_InstanceID * 2 + frame);
        mat4 bonematrix;
        bonematrix[0] = texelFetch(texture14, coord, 0);
        bonematrix[1] = texelFetch(texture14, coord + ivec2(1,0), 0);
        //bonematrix[2] = texelFetch(texture14, coord + ivec2(2,0), 0);
        bonematrix[2].xyz = cross(bonematrix[0].xyz, bonematrix[1].xyz); //removes one texture lookup!
        bonematrix[2].w = 0.0f;
        bonematrix[3] = texelFetch(texture14, coord + ivec2(3,0), 0);
        return bonematrix;
    }
    This slows down the shader because up to 24 texel fetches might be performed per vertex, but it saves the CPU from having to precompute interpolated matrices for each bone. In VR, I think this cost savings is critical. Doing a linear interpolation between vertex positions is not exactly correct, but it's a lot faster than slerping a lot of quaternions and converting them to matrices, and the results are so close you can't tell any difference.
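The cross-product trick in `GetAnimationMatrix` works because for a pure rotation the three basis columns of the matrix are orthonormal and right-handed, so the third column is exactly the cross product of the first two. (This assumes bone matrices carry rotation and translation only; non-uniform scale or shear would break it.) A minimal CPU check of the identity:

```cpp
#include <cassert>

// Tiny vector type for the demonstration.
struct V3 { float x, y, z; };

V3 Cross(const V3& a, const V3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// For a 90-degree rotation about Z, the first two basis columns are
// (0,1,0) and (-1,0,0); their cross product recovers the third, (0,0,1).
```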
    There's actually a similar concept in 2D animation I remember reading about when I was a kid. The book is called The Illusion of Life: Disney Animation and it's a really interesting read with lots of nice illustrations.

    Here is the same scene with interpolation enabled. It's recorded at 15 FPS, so the screen capture still looks a little jittery, but you get the idea. Adding interpolation brought this scene down from 200 FPS to 130 on an Intel chip, simply because of the increased number of texel fetches in the vertex shader. Each character consists of about 4000 vertices. I expect that on a discrete card this would run at pretty much the max framerate (1000 or so).

    With this in place, I can now confirm that my idea for the fast rendering architecture in Leadwerks Game Engine 5 definitely works.
    The next step will be to calculate animations on a separate thread (or maybe two). My test scene here is using a single skeleton shared by all characters, but putting the animation on its own thread will allow many more characters to all be animated uniquely.
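A sketch of how that per-thread split could look, assuming each skeleton animates independently (the `Skeleton` class and `UpdateAnimation` method here are placeholders, not the engine's actual API). Because skeletons don't share state during pose evaluation, the work partitions cleanly with no locking:

```cpp
#include <thread>
#include <vector>

// Placeholder skeleton: real pose evaluation would go in UpdateAnimation.
struct Skeleton
{
    float time = 0.0f;
    void UpdateAnimation(float dt) { time += dt; }
};

// Divide the skeletons into contiguous slices, one per worker thread.
void UpdateAllSkeletons(std::vector<Skeleton>& skeletons, float dt, unsigned threadcount)
{
    std::vector<std::thread> workers;
    size_t count = skeletons.size();
    for (unsigned t = 0; t < threadcount; ++t)
    {
        workers.emplace_back([&, t]() {
            size_t begin = count * t / threadcount;
            size_t end = count * (t + 1) / threadcount;
            for (size_t i = begin; i < end; ++i) skeletons[i].UpdateAnimation(dt);
        });
    }
    for (auto& w : workers) w.join();
}
```

In practice the engine would likely reuse a persistent worker pool rather than spawn threads each update, but the partitioning idea is the same.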
  3. Josh
    Leadwerks3D has built-in project management features that help the user create, share, and publish projects. When you start the editor for the first time, the New Project wizard is displayed. You can select a programming language and choose which platforms you want the project to support. It is possible to add project templates for new languages, too.

     
    Once a project exists, you can go into the project manager and switch projects. This will cause the editor to use the project directory as the path to load all assets from. All files will be displayed relative to the project directory. You can drag an image into the icon box to add it to the project. (Xcode users will probably notice the influence here.) The +/- buttons will allow you to import a project or remove one from the list, with an option to delete the directory (iTunes much?).

     
    I want to add more features like exporting projects into a folder or .zip file, or maybe a .werk project package the editor recognizes. The editor will have options to skip asset source files like .psd, .max, etc., and only copy relevant files, so all the hundreds of files the C++ compilers generate will be skipped. We might eventually even add integration with SVN repositories, backup systems, and team management features, but that's beyond what we need to worry about right now.
     
    A "project" in Leadwerks3D is a game or other program, and will typically contain many maps (scenes). Since this somewhat defies the conventional design of file requesters, files, and folders, we might not even use a traditional "Open Map" dialog. All the project's maps will be available in the "Maps" directory in the Asset Browser. I can even save a thumbnail of each map when the file is saved. To add a new map to a project, you would simply drag it into the Asset Browser, preferably into the "Maps" directory. Or maybe we will also include an "Open File" requester that can be used to import any map, material, texture, etc. into the project. The current design of the editor is the result of a lot of experimentation and testing, so I'll continue to play around with it as I add features.
     
    Michael Betke and I have traded a lot of projects back and forth, and this system is designed to make that task easier so we can all exchange self-contained projects. If you have any suggestions to add, post in the comments below, or in the feature requests forum.
     
    --Update--
    And here's the export screen. This is similar to the "publish" step. I need to write a parser that goes through source code to determine what files are actually needed in the project. The project template defines what file extensions to scan for used files:

  4. Josh
    Step 1. Add your image files into the project directory. I just added a bunch of TGA files, and they show up in the editor right away:

     
    Step 2. Right-click on a texture and select the "Generate Material" menu item.

     
    Step 3. There is no step 3. Your material is ready to use. The normal map was automatically detected and added to the material.

     
    Here it is in the material editor.

  5. Josh
    This is just a little walk down memory lane that pleasantly shows what has led to where we are today. Some of this predates the name "Leadwerks" entirely.
     
    Cartography Shop (2003)

     
    3D World Studio (2004)

     
    Leadwerks Game Engine 2 (2008)

     
    Leadwerks Game Engine 3 (2013)

  6. Josh

    Articles
    Autodesk 3ds Max now supports export of glTF models, as well as a new glTF material type. The process of setting up and exporting glTF models is pretty straightforward, but there are a couple of little details I wanted to point out to help prevent you from getting stuck. For this article, I will be working with the moss rocks 1 model pack from Polyhaven.
    Getting geometry into 3ds Max is simple enough. I imported the model as an FBX file.

    To set up the material, I opened the compact material editor and set the first slot to be a glTF material.

    Press the button for the base color map, and very importantly choose the General > Bitmap map type. Do not choose OSL > Bitmap Lookup or your textures won't export at all.

    Select your base color texture, then do the same thing with the normal and roughness maps, if you have them. 3ds Max treats metal / roughness as two separate textures, although you might be able to use the same texture for both if it grabs the data from the green (roughness) and blue (metal) channels. This is something I don't know yet.

    Select the File > Export menu item to bring up the glTF export dialog. Uncheck the "Export glTF binary" option, because we don't want to pack our model and textures into a single file. I don't know what the baked / original material option does, because I don't see any difference when I use it.

    At this point you should have a glTF file that is visible in any glTF model viewer.

    Now, something slightly weird 3ds Max does is generate new textures for some of the maps. This is probably because it combines different channels to produce the final images. In this case, none of our textures need to be combined, so it is just a small annoyance. A .log file will be saved as well; it can be safely deleted.

    You can leave the images as-is, or you can open up the glTF file in a text editor and manually change the image file names back to the original files:
    "images": [
        { "uri": "rock_moss_set_01_nor_gl_4k.jpg" },
        { "uri": "M_01___Defaultbasecolortexture.jpeg" },
        { "uri": "M_01___Defaultmetallicroughnesstex.jpeg" }
    ],
    Finally, we're going to add LODs using the Mesh Editor > ProOptimizer modifier. I like these settings, but the most important thing is to make sure "Keep textures" is checked. You can press the F3 key at any time to toggle wireframe view and get a better view of what the optimizer does to your mesh.

    Export the file with the same name as the full-resolution model, and add "_lod1" to the end of the file name (before the extension). Then repeat this process saving lod2 and lod3 using 25% and 12.5% for the vertex reduction value in the ProOptimizer modifier.
    Here is my example model you can inspect:
    mossrock1a.zip
    Now it is very easy to get 3D models from 3ds Max into your game engine.
  7. Josh
    The Industrial Cargo Model Pack is now available for only $9.95. This package includes three cargo containers and two wooden spools. All assets are certified game-ready for Leadwerks Engine. This is just some of Dave Lee's work, and there's lots more coming!
     

  8. Josh
    Now that we can voxelize models, enter them into a scene voxel tree structure, and perform raycasts, we can finally start calculating direct lighting. I implemented support for directional and point lights, and I will come back and add spotlights later. Here we see a shadow cast from a single directional light:

    And here are two point lights, one red and one green. Notice the distance falloff creates a color gradient across the floor:

    The idea here is to first calculate direct lighting using raycasts between the light position and each voxel:

    Then once you have the direct lighting, you can calculate approximate global illumination by gathering a cone of samples for each voxel, which illuminates voxels not directly visible to the light source:

    And if we repeat this process we can simulate a second bounce, which really fills in all the hidden surfaces:

    When we convert model geometry to voxels, one of the important pieces of information we lose is normals. Without normals it is difficult to calculate damping for the direct illumination calculation. It is easy to check surrounding voxels and determine that a voxel is embedded in a floor or something, but what do we do in the situation below?

    The thin wall of three voxels is illuminated, which will leak light into the enclosed room. This is not good:

    My solution is to calculate and store lighting for each face of each voxel.
    Vec3 normal[6] = {
        Vec3(-1, 0, 0), Vec3(1, 0, 0),
        Vec3(0, -1, 0), Vec3(0, 1, 0),
        Vec3(0, 0, -1), Vec3(0, 0, 1)
    };
    for (int i = 0; i < 6; ++i)
    {
        float damping = max(0.0f, normal[i].Dot(lightdir)); //normal damping
        if (!isdirlight) damping *= 1.0f - min(p0.DistanceToPoint(lightpos) / light->range[1], 1.0f); //distance damping
        voxel->directlighting[i] += light->color[0] * damping;
    }
    This gives us lighting that looks more like the diagram below:

    When light samples are read, the appropriate face will be chosen and read from. In the final scene lighting on the GPU, I expect to be able to use the triangle normal to determine how much influence each sample should have. I think it will look something like this in the shader:
    vec4 lighting = vec4(0.0f);
    lighting += max(0.0f, dot(trinormal, vec3(-1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(0.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(1.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, -1.0f, 0.0f))) * texture(gimap, texcoord + vec2(2.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, 1.0f, 0.0f))) * texture(gimap, texcoord + vec2(3.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, -1.0f))) * texture(gimap, texcoord + vec2(4.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, 1.0f))) * texture(gimap, texcoord + vec2(5.0 / texwidth, 0.0));
    This means that to store a 256 x 256 x 256 grid of voxels we actually need a 3D RGB texture with dimensions of 256 x 256 x 1536. This is 288 megabytes. However, with DXT1 compression I estimate that number will drop to about 64 megabytes, meaning we could have eight voxel maps cascading out around the player and still only use about 512 megabytes of video memory. This is where those new 16-core CPUs will really come in handy!
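The memory figures can be checked with quick arithmetic, assuming 3 bytes per texel for the uncompressed RGB texture and DXT1's fixed rate of 8 bytes per 4x4 block (0.5 bytes per texel):

```cpp
// 256 x 256 x 1536 texels: the six voxel faces are stacked along one axis.
// RGB8 is 3 bytes per texel; DXT1 compresses to 0.5 bytes per texel.
long long UncompressedBytes() { return 256LL * 256 * 1536 * 3; }
long long Dxt1Bytes() { return 256LL * 256 * 1536 / 2; }
```

Uncompressed this is 301,989,888 bytes, which is exactly 288 MB. The exact DXT1 rate gives 48 MB, so the ~64 MB estimate above has some headroom built in.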
    I added the lighting calculation for the normal Vec3(0,1,0) into the visual representation of our voxels and lowered the resolution. Although this is still just direct lighting it is starting to look interesting:

    The last step is to downsample the direct lighting to create what is basically a mipmap. We do this by taking the average values of each voxel node's children:
    void VoxelTree::BuildMipmaps()
    {
        if (level == 0) return;
        int contribs[6] = { 0 };
        for (int i = 0; i < 6; ++i)
        {
            directlighting[i] = Vec4(0);
        }
        for (int ix = 0; ix < 2; ++ix)
        {
            for (int iy = 0; iy < 2; ++iy)
            {
                for (int iz = 0; iz < 2; ++iz)
                {
                    if (kids[ix][iy][iz] != nullptr)
                    {
                        kids[ix][iy][iz]->BuildMipmaps();
                        for (int n = 0; n < 6; ++n)
                        {
                            directlighting[n] += kids[ix][iy][iz]->directlighting[n];
                            contribs[n]++;
                        }
                    }
                }
            }
        }
        for (int i = 0; i < 6; ++i)
        {
            if (contribs[i] > 0) directlighting[i] /= float(contribs[i]);
        }
    }
    If we start with direct lighting that looks like the image below:

    When we downsample it one level, the result will look something like this (not exactly, but you get the idea):

    Next we will begin experimenting with light bounces and global illumination using a technique called cone tracing.
  9. Josh
    Ladies and gentlemen, come one, come all, to feast your eyes on wondrous sights and behold amazing feats! It's "Cirque des Jeux", the next Leadwerks Game Tournament!

    How does it work?  For one month, the Leadwerks community builds small playable games.  Some people work alone and some team up with others.  At the end of the month we release our projects to the public and play each other's games.  The point is to release something short and sweet with a constrained timeline, which has resulted in many odd and wonderful mini games for the community to play.
    WHEN: The tournament begins Thursday, February 1, and ends on Wednesday, February 28th at the stroke of midnight.
    HOW TO PARTICIPATE: Publish your Circus-or-other-themed game to the Games Showcase before the deadline. You can work as a team or individually. Use blogs to share your work and get feedback as you build your game.
    Games must have a preview image and title, and contain some minimal amount of gameplay (there has to be some way to win the game), to be considered entries. It is expected that most entries will be simple, given the time constraints.
    This is the perfect time to try making a VR game or finish that idea you've been waiting to make!
    PRIZES: All participants will receive a limited-edition 11x17" poster commemorating the event. To receive your prize you need to fill in your name, mailing address, and phone number (for customs) in your account info.
    At the beginning of March we will post a roundup blog featuring your entries. Let the show begin!
  10. Josh
    Vulkan gives us explicit control over the way data is handled in system and video memory. You can map a buffer into system memory, modify it, and then unmap it (giving it back to the GPU) but it is very slow to have a buffer that both the GPU and CPU can access. Instead, you can create a staging buffer that only the CPU can access, then use that to copy data into another buffer that can only be read by the GPU. Because the GPU buffer may be in-use at the time you want to copy data to it, it is best to insert the copy operation into a command buffer, so it happens after the previous frame is rendered. To handle this, we have a pool of transfer buffers which are retrieved by a command buffer when needed, then released back into the pool once that command buffer is finished drawing. A fence is used to tell when the command buffer completes its operations.
    One issue we came across with OpenGL in Leadwerks was when data was uploaded to the GPU while it was still being accessed to render a frame. You could actually see this on some cards when playing my Asteroids3D game. There was no mechanism in OpenGL to synchronize memory, so the best you could do was put data transfers at the start of your rendering code, and hope that there was enough of a delay before your drawing actually started that the memory copying had completed. With the super low-overhead approach of Vulkan rendering, this problem would become much worse. To deal with this, Vulkan uses explicit memory management with something called pipeline barriers. When you add a command into a Vulkan command buffer, there is no guarantee what order those commands will be executed in, and pipeline barriers allow you to create a point where certain commands must be executed before other ones can begin.
    Here is the order of operations:
    1. Start recording a new command buffer.
    2. Retrieve a staging buffer from the pool and remove it from the pool.
    3. Copy data into the staging buffer.
    4. Insert a command to copy from the staging buffer to the GPU buffer.
    5. Insert a pipeline barrier to make sure the data is transferred before drawing begins.
    6. Execute the command buffer.
    7. When the fence is completed, move all staging buffers back into the staging buffer pool.
    In the new game engine, we have several large buffers to store the following data:
    • Mesh vertices
    • Mesh indices
    • Entity 4x4 matrices (and other info)
    • A list of visible entity IDs
    • Visible light information
    • Skeleton animation data
    I found this data tends to fall into two categories.
    Some data is large and only some of it gets updated each frame. This includes entity 4x4 matrices, skeleton animation data, and mesh vertex and index data. Other data tends to be smaller and only concerns visible objects. This includes visible entity IDs and light information. This data is updated completely each time a new visibility set arrives. The first type of data requires data buffers that can be resized, because they can be very large, and more objects or data might be added at any time. For example, the vertex buffer contains all vertices that exist, in all meshes the user creates or loads. If a new mesh is loaded that requires space greater than the buffer capacity, a new buffer must be created, then the full contents of the old buffer are copied over, directly in GPU memory. A new pipeline barrier is inserted to ensure the data transfer to the new buffer is finished, and then additional data is copied.
    The second type of data is a bit simpler. If the existing buffer is not big enough, a new bigger buffer is created. Since the entire contents of the buffer are uploaded with each new visibility set, there is no need to copy any existing data from the old buffer.
    I currently have about 2500 lines of Vulkan-specific code. Calling this "boilerplate" is disingenuous, because it is really specific to the way you set your renderer up, but the core mesh rendering system I first implemented in OpenGL is working and I will soon begin adding support for textures.
     
  11. Josh
    The environment probe feature is coming along nicely, spawning a whole new system I am tentatively calling "Deferred Image-Based Lighting". This adds image-based lighting to your scenes that accentuates the existing deferred renderer and makes for higher quality screenshots with lots of next-gen detail. The probes handle both image-based ambient lighting and reflections.
     
    Shadmar helped by implementing parallax cubemap correction. This provides an approximate volume the cubemap is rendered to and gives us a closer approximation of reflections. You can see a good side-by-side comparison of it vs. conventional cubemaps in Half-Life 2. It works best when the volume of the room is defined by the shape of the environment probe, so I changed probes to use a box for rendering instead of a sphere. The box is controlled by the object's scale, just like with decals, although I wonder if there would be a way to build these out of brushes in the future, since our CSG building tools are so convenient. In the screenshot below you can see I have added two environment probes to a map, one filling the volume of each room. Placing probes does require some amount of artistry as it isn't a 100% perfect system, but with a little experimentation it's easy to get really beautiful results.

     
    Once probes are placed you will select a menu item to build global illumination. GI will be built for all probes and the results are instantly visible. The shot below is only using an environment probe for lighting, together with an SSAO post-processing effect. Because the ambient lighting and reflections are image-based, any bright surface can act as a light even though it isn't really.
     

     
    When combined with our direct deferred lighting, shadowed areas become much more complex and interesting. Note the light bouncing off the floor and illuminating the ceiling.
     

     
    The new feature works seamlessly with all existing materials and shaders, since it is rendered in a deferred step. It can be combined with a conventional ambient light level, or you can set the ambient light to black and just rely on probes to provide indirect lighting.
  12. Josh
    How do you like that clickbait title?
     
    I implemented analytics into Leadwerks Editor using GameAnalytics.com last month, in order to answer questions I had about user activity. Here's what I have learned.
     
    The number of active users is what I was hoping for. A LOT of people ran Leadwerks in the last month, but most people don't use it every day, as the number of daily active users is lower. The numbers we have are good, though.
     
    A lot of people use the Workshop to install models and other items for their projects. Unfortunately, paid Workshop items cannot be purchased right now, as I am working with Valve to change some company financial information. I will have this running again as soon as possible.
     
    Boxes are the most common primitive created, by a huge margin, followed by cylinders, wedges, spheres, and cones, in that order. Maybe the list of available primitives should be rearranged based on this? Not a big deal, but it's interesting to see.
     
    There's definitely a disconnect between user activity and the community, so in the future I hope to encourage people using the program to register forum accounts and become active in the community.
     
    Overall, I am glad that analytics have allowed me to get a broad picture of collective user behavior so I am no longer working blindly. It's a blunt instrument, but it will be interesting to see how I can use it in the future to improve the user experience.
  13. Josh
    I got my hands on the GDC showcase scene by Michael Betke, and it's much more beautiful than I thought. I used this scene to rewrite and test the vegetation rendering code. Collision isn't working yet, but it's built into the routine. The results speak for themselves.
     
    Fortunately, Pure3D is soon offering a set of game-ready vegetation models, so you can create gorgeous landscapes like this.
  14. Josh
    This is a quick blog to fill you in on some of the features planned for Leadwerks Engine 3.0.
     
    -Any entity can be a physics body. The body command set will become general entity commands, e.g. Entity.SetVelocity(). You can make an entity physically interactive with Entity.SetShape( shape ).
     
    -Any entity can have any number of scripts attached to it. The basic model script functions will be supported for all entities, plus specialized script functions for certain entities. For example, particle emitters will have an optional script function that is called when a particle is reset. This could be used to make a fire emitter "burn" a mesh. Script functions are called in sequence, so you can attach a chain of scripts to an entity to combine behaviors.
     
    -Prefabs will be supported, with the ability to drag entities together to make a hierarchy, add lights, emitters, and other entities, attach scripts, and then save a prefab.
     
    -Record screenshots and movies from the editor!
     
    -Tons of scripted behavior you can attach to your own models, and lots of scripted assets you can download. Basic AI features will be supplied.
     
    -The editor is being designed so you only need a mouse to make games. If you want to script or program or make artwork you can, but at the simplest level, the whole workflow can be controlled in the editor, with reloading assets, automated imports, and lots of drag-and-drop functionality.
     
     
    That's all for now. I can't tell you some of the coolest features, but this gives you an idea of where we are headed.
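The "chain of scripts" idea above can be sketched in plain C++. Every name here (Script, Entity, AttachScript, UpdateWorld) is hypothetical and only illustrates attached scripts being called in attachment order; it is not the actual Leadwerks 3 API:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch: each attached script exposes an UpdateWorld() hook,
// and the engine calls the hooks in the order the scripts were attached.
struct Script {
    std::function<void(std::string&)> UpdateWorld; // writes to a log for demo purposes
};

struct Entity {
    std::vector<Script> scripts;

    void AttachScript(Script s) { scripts.push_back(std::move(s)); }

    // The engine would call this once per frame, invoking each script in sequence.
    void UpdateWorld(std::string& log) {
        for (auto& s : scripts) {
            if (s.UpdateWorld) s.UpdateWorld(log);
        }
    }
};

std::string RunFrame() {
    Entity e;
    e.AttachScript({[](std::string& log) { log += "patrol;"; }});
    e.AttachScript({[](std::string& log) { log += "animate;"; }});
    std::string log;
    e.UpdateWorld(log); // scripts run in attachment order
    return log;
}
```

Because each script only sees the entity's shared state, chaining two behaviors is just a matter of attaching both.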
  15. Josh
    A new update is available on the beta branch.
     
    The FPS player script has been fixed to allow throwing objects when the player is not holding a weapon.
     
    The script flowgraph system's behavior with prefabs has been changed. Script connections between prefabs will no longer be locked in the prefab file and can be made in the map itself. The map version has been incremented, and maps saved with the new format will not load until you update or recompile your game. The changes in this system may have introduced some problems, so please notify me of any maps that fail to load properly.
     
    You must update your project to get these changes. See the dialog when you select the File > Project Manager menu item. (Any files that are overwritten will have a backup copy made in the same directory.)
     
    You should only opt into the beta if you are comfortable using a potentially less stable version of the software that is still being tested.
  16. Josh
    Looking back at a recent blog where I talked briefly about my plans for 2016, my goals were the following:
    Paid Workshop items
    Release Game Launcher on Steam proper.
    More features.

     
    These goals are actually linked together. Let's focus on the amount and quality of games being released for the game launcher. Right now we have a wide variety of simple games, and some that are very fun, but we don't have any must-play hits yet. As long as the reviews look like this, the game launcher isn't ready to release.
     
    What can we do to facilitate better and more complex games? From what I have seen, reusable scripts have given Leadwerks users a huge boost in productivity, especially when combined with a 3D model. For example, the zombie DLC, FPS weapons, first-person player, and other reusable items have gotten a ton of use and allowed creation of many different games. Continuing to build a deeper, more robust script environment will allow developers to easily set up more advanced gameplay. The SoldierAI script, and the way it breaks the bullets out into a separate script, is a good example of this direction. For example, the same projectile script can be used for a turret. Our design of self-contained Lua scripts with inputs and outputs will prevent the system from getting overly complicated, as a complex C++ hierarchy of classes would.
     
    In the future, I think releasing more game-ready items in the Workshop, first with official Leadwerks scripts, and then with third-party scripts, will allow us to leverage the design of Leadwerks Game Engine and the tools like the flowgraph.
     
    The features I implement next will be chosen specifically for their capacity to increase the gameplay potential of Leadwerks games. Easy networking and a GUI come to mind.
     
    When I feel like we are really hitting our stride in terms of the games you can make with Leadwerks, that is when game launcher will be released. Leadwerks Game Engine is all about what the user can make.
  17. Josh
    I have basic point lights working in the Vulkan renderer now. There are no shadows or any type of reflections yet. I need to work out how to set up a depth pre-pass. In OpenGL this is very simple, but in Vulkan it requires another complicated mess of code. Once I do that, I can add in other light types (spot, box, and directional) and pull in the PBR lighting shader code. Then I will add support for a cubemap skybox and reflections, and then I will upload another update to the beta.

    Shadows will use variance shadow maps by default. With these, all objects must cast a shadow, but our renderer is so fast that this is not a problem. I've had very good results with these in earlier experiments.
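For reference, the core of the variance shadow map test can be summarized as below. This is my sketch of the standard technique (two moments per texel and a Chebyshev upper bound), not engine source code:

```cpp
#include <algorithm>
#include <cassert>

// Variance shadow map visibility test. The shadow map stores two moments
// per texel: m1 = E[z] and m2 = E[z^2]. Chebyshev's inequality gives an
// upper bound on the fraction of the filtered region that is lit.
float VsmVisibility(float m1, float m2, float receiverDepth) {
    if (receiverDepth <= m1) return 1.0f;            // receiver is in front: fully lit
    float variance = std::max(m2 - m1 * m1, 1e-5f);  // clamp to avoid division by zero
    float d = receiverDepth - m1;
    return variance / (variance + d * d);            // Chebyshev upper bound
}
```

Because the moments can be filtered like ordinary texture data, every object must render into the map, which is why "all objects must cast a shadow" with this technique.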
    I then want to complete my work on voxel-based global illumination and reflections. I looked into Nvidia RTX ray tracing, but the performance is awful even with a GeForce RTX 2080. My voxel approach should provide good results with fast performance.
    Once these features are in place, I may release the new engine on Steam as a programming SDK, until the new editor is ready.
  18. Josh
    An HTML renderer for Linux has been implemented. This adds the welcome window and full Workshop interface for Linux. This will be available soon for testing on the beta branch.
     

     

     

  19. Josh
    A new build is available on the beta branch with the following changes:
    Fixed sidepanel tab switching bug.
    Improved compatibility with Intel graphics.
    Fixed an instance rendering bug with instance counts over 256.
    Added terrain hardware tessellation.

     
    To enable hardware tessellation, select the Tools > Options menu item, select the Video tab, and set the tessellation value. Terrain layers must have a displacement map for displacement mapping to appear. Each terrain layer has an adjustable displacement value to control the strength of the effect. Displacement maps look best when they are rendered from a high-polygon sculpt. The terrain wireframe shader has also been updated to visualize the tessellation in wireframe view. You must update your project to get the new versions of the terrain shaders.
     
    At this time, terrain will always use tessellation when it is enabled, even if no texture layers contain a displacement map. This will be changed in the future.
  20. Josh
    There are three low-level advancements I would like to make to Leadwerks Game Engine in the future:
    Move Leadwerks over to the new Vulkan graphics API.
    Replace Windows API and GTK with our own custom UI. This will closely resemble the Windows GUI but allow new UI features that are presently impossible, and give us more independence from each operating system.
    Compile the editor with BlitzMaxNG. This is a BMX-to-C++ translator that allows compilation with GCC (or VS, I suppose). This would allow the editor to be built in 64-bit mode.

     
    None of these enhancements will result in more or better games, so they do not directly support our overarching goal. They will also each involve a significant amount of backtracking. For example, a new Vulkan renderer is pretty much guaranteed to be slower than our existing OpenGL renderer for the first six months, and it won't even run on a Mac. A new GUI will involve lots of bugs that set us back.
     
    This is likely to be stuff that I just explore slowly in the background. I'm not going to take the next three months to replace our renderer with Vulkan. But this is the direction I want to move in.
  21. Josh
    This is something I typed up for some colleagues and I thought it might be useful info for C++ programmers.
    To create an object:
    shared_ptr<TypeID> type = make_shared<TypeID>(constructor args…)

    This is pretty verbose, so I always do this:

    auto type = make_shared<TypeID>(constructor args…)

    When all references to the shared pointer are gone, the object is instantly deleted. There are no garbage collection pauses, and deletion is always instant:

    auto thing = make_shared<Thing>();
    auto second_ref = thing;
    thing = NULL;
    second_ref = NULL; //poof!

    Shared pointers are fast and thread-safe. (Don’t ask me how.)
    To get a shared pointer within an object’s method, you need to derive the class from “enable_shared_from_this<SharedObject>”. (You can inherit a class from multiple types, remember):

    class SharedObject : public enable_shared_from_this<SharedObject>

    And you can implement a Self() method like so, if you want:

    shared_ptr<SharedObject> SharedObject::Self()
    {
        return shared_from_this();
    }

    Casting a type is done like this:

    auto bird = dynamic_pointer_cast<Bird>(animal);

    Dynamic pointer casts will return NULL if the animal is not a bird. Static pointer casts don’t have any checks and are a little faster, I guess, but there’s no reason to ever use them.
    You cannot call shared_from_this() in the constructor, because the shared pointer does not exist yet, and you cannot call it in the destructor, because the shared pointer is already gone!
    Weak pointers can be used to store a value, but will not prevent the object from being deleted:

    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //creates a new shared pointer to “thing”

    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    thing = NULL;
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!

    If you want to set a weak pointer’s value to NULL without the object going out of scope, just call reset():

    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    thingwptr.reset();
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!

    Because no garbage collection is used, circular references can occur, but they are rare:

    auto son = make_shared<Child>();
    auto daughter = make_shared<Child>();
    son->sister = daughter;
    daughter->brother = son;
    son = NULL;
    daughter = NULL; //nothing is deleted!

    The problem above can be solved by making the sister and brother members weak pointers instead of shared pointers, thus removing the circular references.
    That’s all you need to know!
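As a compilable footnote to the brother/sister example, here is the weak-pointer fix, with a live-object counter added (my addition, for demonstration only) so the deletion is observable:

```cpp
#include <cassert>
#include <memory>

// The circular-reference fix: one side of the relationship holds a
// weak_ptr, so dropping the external shared_ptrs destroys both objects.
struct Child {
    std::shared_ptr<Child> sister;  // owning link
    std::weak_ptr<Child> brother;   // non-owning back-link breaks the cycle
    static int alive;               // demo-only counter of live objects
    Child() { ++alive; }
    ~Child() { --alive; }
};
int Child::alive = 0;

void Demo() {
    auto son = std::make_shared<Child>();
    auto daughter = std::make_shared<Child>();
    son->sister = daughter;
    daughter->brother = son; // weak: does not keep son alive
    son = nullptr;           // son is deleted, releasing its reference to daughter
    daughter = nullptr;      // daughter is deleted too; nothing leaks
}
```

If both links were shared_ptr, the two objects would keep each other alive forever after the last external reference was dropped.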
  22. Josh
    Billboards, transitions, and rendering are now done, and as you can see in the screenshot below, the idea works great. In Leadwerks 2, this application would have used hundreds of megabytes of data to store all the individual 4x4 matrices of each object. In Leadwerks 3, the instances are generated on the fly, so the application uses no more memory than a map with a blank terrain:
     

     
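For a rough sense of the memory claim above (my own arithmetic, not measured engine figures): storing an explicit 4x4 float matrix per vegetation instance costs 64 bytes each, so a million instances stored explicitly would need about 64 MB, while on-the-fly generation stores none of it:

```cpp
#include <cassert>
#include <cstddef>

// Memory cost of storing an explicit transform per instance.
// A 4x4 matrix is 16 floats, i.e. 64 bytes on typical platforms.
constexpr std::size_t BytesForStoredInstances(std::size_t count) {
    return count * 16 * sizeof(float);
}
```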
    Still to do:
    Fade out billboards as they approach max view distance.
    Add rows to break up generated billboard textures.
    Add DXT compression to generated billboard textures.
    Add sub-grid rendering so GPU is only evaluating instances within a certain distance to the camera.
    Add physics collision.
    Vegetation painting tools in editor.

  23. Josh
    I implemented light bounces and can now run the GI routine as many times as I want. When I use 25 rays per voxel and run the GI routine three times, here is the result. (The dark area in the middle of the floor is actually correct. That area should be lit by the sky color, but I have not yet implemented that, so it appears darker.)


    It's sort of working, but obviously these results aren't usable yet. Making matters more difficult is the fact that people love to show their best screenshots and hide their code's problems, so it is hard to find reliable results to compare mine to.
    I also found that the GI pass, unlike all previous passes, is very slow. Each pass takes about 30 seconds in release mode! I could try to optimize the C++ code, but something tells me that even optimized C++ code would not be fast enough, so the GI passes will probably need to be performed in a shader. First, though, I am going to experiment a bit with some ideas I have for improving the quality of the GI results.
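The multi-bounce behavior can be illustrated with a toy 1D gather, where each run of the routine adds one more bounce and light only reaches a distant cell after enough passes. This is purely illustrative (a 1D "voxel row" of my own invention), not the actual voxel GI code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One GI pass: each cell gathers light from its neighbors' previous-pass
// result, so repeated passes propagate light one more bounce each time.
std::vector<float> GatherPass(const std::vector<float>& emission,
                              const std::vector<float>& prev,
                              float albedo) {
    std::vector<float> next(prev.size());
    for (std::size_t i = 0; i < prev.size(); ++i) {
        float incoming = 0.0f;
        if (i > 0) incoming += prev[i - 1];
        if (i + 1 < prev.size()) incoming += prev[i + 1];
        next[i] = emission[i] + albedo * 0.5f * incoming; // emitted + bounced
    }
    return next;
}

// Light at a cell two voxels away from a single emitter, after N passes.
float LitAfterBounces(int passes) {
    std::vector<float> emission = {1.0f, 0.0f, 0.0f, 0.0f};
    std::vector<float> light = emission;
    for (int p = 0; p < passes; ++p) light = GatherPass(emission, light, 0.5f);
    return light[2];
}
```

After one pass the distant cell is still black; only the second pass (the second bounce) delivers light to it, which is why being able to run the routine repeatedly matters.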
     
  24. Josh
    I've got orthographic viewport navigation done. I decided to build some grid commands into the camera class so that the graphics driver itself can handle the grid rendering, rather than having the editor make a bunch of OpenGL calls in a callback function. The grid can be rendered from any angle, and the math was a little tricky, but I got it worked out. I paid extra attention to showing the border where the world ends. The sliders that pan the viewport are very accurate, and stop right at the end of the world space. By default, this is one kilometer, but the world size can be changed at any time.
     
    One thing that was tricky was that the grid can be any resolution, and the world can be any size, so there's no guarantee the edge of a grid patch will match the edge of the world. I ended up using clipping planes to solve this problem.
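The patch-overhang problem can be shown with a small sketch (my own numbers and function names, not the editor's internals): with an arbitrary grid step, the last grid patch can extend past the world edge, so its extent has to be clipped back to the boundary:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Range { float min, max; };

// Extent of the grid patch containing x, clipped against the world
// bounds [-worldHalf, worldHalf] (the "clip plane" at the world edge).
Range ClippedPatch(float x, float patchSize, float worldHalf) {
    float p0 = std::floor(x / patchSize) * patchSize;
    Range r{p0, p0 + patchSize};
    r.min = std::max(r.min, -worldHalf);
    r.max = std::min(r.max, worldHalf);
    return r;
}
```

For a 500-unit world half-extent and 64-unit patches, the patch starting at 448 would run to 512, so it gets clipped to end exactly at 500.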
     

     


  25. Josh
    I've added a textfield widget script to the beta branch, and a new build (Lua interpreter, Windows only, at this time). The textfield widget allows editing of a single line of text. It's actually one of the more difficult widgets to implement due to all the user interaction features. Text is entered from the keyboard and may be selected with arrow keys or by clicking the mouse. A range of text can be selected by clicking and dragging the mouse, or by pressing an arrow key while the shift key is pressed.
     

     
    I had to implement an additional keyboard event. KeyDown and KeyUp events fire for all keys, but KeyChar events are called only when typing results in an actual character. The ASCII code of the typed character is sent in the data parameter of the event function:

    function Script:KeyChar( charcode )
    end
     
    Making the caret indicator flash on and off goes against the event-driven nature of this system, but I think it's an important visual indicator and I wanted to include it. I went through a few ideas, including a really over-engineered timer system. Finally I just decided to make the GUI call a function on the focused widget every 500 milliseconds (if the function is present in the widget's script):

    --Blink the caret cursor on and off
    function Script:CursorBlink()
        if self.cursorblinkmode == nil then self.cursorblinkmode = false end
        self.cursorblinkmode = not self.cursorblinkmode
        self.widget:Redraw()
    end
     
    All in all, the script weighs in at 270 lines of code. It does not handle cut, copy, and paste yet, and double-clicking to select the entire text does not yet consider spaces in the clicked word. The drawing function is actually quite simple, so you could easily skin this to get a different appearance and keep the same behavior.
     

    Script.caretposition=0
    Script.sellen=0
    Script.doubleclickrange = 1
    Script.doubleclicktime = 500

    function Script:Draw(x,y,width,height)
        local gui = self.widget:GetGUI()
        local pos = self.widget:GetPosition(true)
        local sz = self.widget:GetSize(true)
        local scale = gui:GetScale()
        local item = self.widget:GetSelectedItem()
        local text = self.widget:GetText()

        --Draw the widget background
        gui:SetColor(0.2,0.2,0.2)
        gui:DrawRect(pos.x,pos.y,sz.width,sz.height,0)

        --Draw the widget outline
        if self.hovered==true then
            gui:SetColor(51/255/4,151/255/4,1/4)
        else
            gui:SetColor(0,0,0)
        end
        gui:DrawRect(pos.x,pos.y,sz.width,sz.height,1)

        --Draw text selection background
        if self.sellen~=0 then
            local n
            local w
            local x = gui:GetScale()*8
            local px = x
            local c1 = math.min(self.caretposition,self.caretposition+self.sellen)
            local c2 = math.max(self.caretposition,self.caretposition+self.sellen)
            for n=0,c2-1 do
                if n==c1 then px = x end
                c = String:Mid(text,n,1)
                x = x + gui:GetTextWidth(c)
                if n==c2-1 then w = x-px end
            end
            gui:SetColor(0.4,0.4,0.4)
            gui:DrawRect(pos.x + px,pos.y+2*scale,w,sz.height-4*scale,0)
        end

        --Draw text
        gui:SetColor(0.75,0.75,0.75)
        if text~="" then
            gui:DrawText(text,scale*8+pos.x,pos.y,sz.width,sz.height,Text.Left+Text.VCenter)
        end

        --Draw the caret
        if self.cursorblinkmode then
            if self.focused then
                local x = self:GetCaretCoord(text)
                gui:DrawLine(scale*8+pos.x + x,pos.y+2*scale,scale*8+pos.x + x,pos.y + sz.height-4*scale)
            end
        end
    end

    --Find the character position for the given x coordinate
    function Script:GetCharAtPosition(pos)
        local text = self.widget:GetText()
        local gui = self.widget:GetGUI()
        local n
        local c
        local x = gui:GetScale()*8
        local count = String:Length(text)
        local lastcharwidth=0
        for n=0,count-1 do
            c = String:Mid(text,n,1)
            lastcharwidth = gui:GetTextWidth(c)
            if x >= pos - lastcharwidth/2 then return n end
            x = x + lastcharwidth
        end
        return count
    end

    --Get the x coordinate of the current caret position
    function Script:GetCaretCoord()
        local text = self.widget:GetText()
        local gui = self.widget:GetGUI()
        local n
        local c
        local x=0
        local count = math.min(self.caretposition-1,(String:Length(text)-1))
        for n=0,count do
            c = String:Mid(text,n,1)
            x = x + gui:GetTextWidth(c)
        end
        return x
    end

    --Blink the caret cursor on and off
    function Script:CursorBlink()
        if self.cursorblinkmode == nil then self.cursorblinkmode = false end
        self.cursorblinkmode = not self.cursorblinkmode
        self.widget:Redraw()
    end

    function Script:MouseDown(button,x,y)
        self.focused=true
        if button==Mouse.Left then

            --Detect double-click and select entire text
            local currenttime = Time:Millisecs()
            if self.lastmousehittime~=nil then
                if math.abs(self.lastmouseposition.x-x)<=self.doubleclickrange and math.abs(self.lastmouseposition.y-y)<=self.doubleclickrange then
                    if currenttime - self.lastmousehittime < self.doubleclicktime then
                        self.lastmousehittime = currenttime
                        local l = String:Length(self.widget:GetText())
                        self.caretposition = l
                        self.sellen = -l
                        self.widget:GetGUI():ResetCursorBlink()
                        self.cursorblinkmode=true
                        self.pressed=false
                        self.widget:Redraw()
                        return
                    end
                end
            end
            self.lastmouseposition = {}
            self.lastmouseposition.x = x
            self.lastmouseposition.y = y
            self.lastmousehittime = currenttime

            --Position caret under mouse click
            self.cursorblinkmode=true
            self.caretposition = self:GetCharAtPosition(x)
            self.widget:GetGUI():ResetCursorBlink()
            self.cursorblinkmode=true
            self.pressed=true
            self.sellen=0
            self.widget:Redraw()
        end
    end

    function Script:MouseUp(button,x,y)
        if button==Mouse.Left then
            self.pressed=false
        end
    end

    function Script:MouseMove(x,y)
        if self.pressed then

            --Select range of characters
            local currentcaretpos = self.caretposition
            local prevcaretpos = self.caretposition + self.sellen
            self.cursorblinkmode=true
            self.caretposition = self:GetCharAtPosition(x)
            if self.caretposition ~= currentcaretpos then
                self.widget:GetGUI():ResetCursorBlink()
                self.cursorblinkmode=true
                self.sellen = prevcaretpos - self.caretposition
                self.widget:Redraw()
            end
        end
    end

    function Script:LoseFocus()
        self.focused=false
        self.widget:Redraw()
    end

    function Script:MouseEnter(x,y)
        self.hovered = true
        self.widget:Redraw()
    end

    function Script:MouseLeave(x,y)
        self.hovered = false
        self.widget:Redraw()
    end

    function Script:KeyUp(keycode)
        if keycode==Key.Shift then
            self.shiftpressed=false
        end
    end

    function Script:KeyDown(keycode)
        if keycode==Key.Shift then
            self.shiftpressed=true
        end
        if keycode==Key.Up or keycode==Key.Left then

            --Move the caret one character left
            local text = self.widget:GetText()
            if self.caretposition>0 then
                self.caretposition = self.caretposition - 1
                self.widget:GetGUI():ResetCursorBlink()
                self.cursorblinkmode=true
                if self.shiftpressed then
                    self.sellen = self.sellen + 1
                else
                    self.sellen = 0
                end
                self.widget:Redraw()
            end
        elseif keycode==Key.Down or keycode==Key.Right then

            --Move the caret one character right
            local text = self.widget:GetText()
            if self.caretposition<String:Length(text) then
                self.caretposition = self.caretposition + 1
                self.widget:GetGUI():ResetCursorBlink()
                self.cursorblinkmode=true
                if self.shiftpressed then
                    self.sellen = self.sellen - 1
                else
                    self.sellen = 0
                end
                self.widget:Redraw()
            end
        end
    end

    function Script:KeyChar(charcode)
        local s = self.widget:GetText()
        local c = String:Chr(charcode)
        if c=="\b" then

            --Backspace
            if String:Length(s)>0 then
                if self.sellen==0 then
                    if self.caretposition==String:Length(s) then
                        s = String:Left(s,String:Length(s)-1)
                    elseif self.caretposition>0 then
                        s = String:Left(s,self.caretposition-1)..String:Right(s,String:Length(s)-self.caretposition)
                    end
                    self.caretposition = self.caretposition - 1
                    self.caretposition = math.max(0,self.caretposition)
                else
                    local c1 = math.min(self.caretposition,self.caretposition+self.sellen)
                    local c2 = math.max(self.caretposition,self.caretposition+self.sellen)
                    s = String:Left(s,c1)..String:Right(s,String:Length(s) - c2)
                    self.caretposition = c1
                    self.sellen = 0
                end
                self.widget:GetGUI():ResetCursorBlink()
                self.cursorblinkmode=true
                self.widget:SetText(s)
                EventQueue:Emit(Event.WidgetAction,self.widget)
            end
        elseif c~="\r" and c~="" then

            --Insert a new character
            local c1 = math.min(self.caretposition,self.caretposition+self.sellen)
            local c2 = math.max(self.caretposition,self.caretposition+self.sellen)
            s = String:Left(s,c1)..c..String:Right(s,String:Length(s) - c2)
            self.caretposition = self.caretposition + 1
            if self.sellen<0 then
                self.caretposition = self.caretposition + self.sellen
            end
            self.sellen=0
            self.widget:GetGUI():ResetCursorBlink()
            self.cursorblinkmode=true
            self.widget:SetText(s)
            EventQueue:Emit(Event.WidgetAction,self.widget)
        end
    end