
Blog Entries posted by Josh

  1. Josh
    The Leadwerks Merc character, who I think will go down in history over the next few years second only to the infamous "Crawler", is an experiment. First of all, I wanted a completely custom-made character to develop AI with. This ensured that I was able to get the model made exactly to my specs so that we would have a template for other characters to follow. Fortunately, the script I wrote can easily be used with other models like Arteria's Strike Troop.
     
    The quality of this model is very high, and I am really happy to be able to provide high-end content like this for use with Leadwerks. The cost of production was $2965, plus $125 for the weapon model, for a total of $3090. Additional characters can be made at a slightly lower cost, if I choose to go forward with production. At $9.99, we need to sell 309 copies to break even.
     
    Leadwerks model packs sell quite well as DLCs, but so far Workshop Store sales have been slower. It seems that getting people to use the new store is a task in itself, because people are unfamiliar with the process.
     
    In the first two weeks, the Merc character has sold 13 copies for a total of $129 in sales (times 70% after the Steam tax). Assuming a flat rate of sales, this means that the character will take one year and four months before sales have covered costs.
     
    This is not an acceptable rate of recovering costs. I am not going forward with additional custom characters based on these results. Not sure how this will go forward, but I am halting all production for now.
  2. Josh
    Gamers have always been fascinated with the idea of endless areas to roam.  It seems we are always artificially constrained within a small area to play in, and the possibility of an entire world outside those bounds is tantalizing.  The game FUEL captured this idea by presenting the player with an enormous world that took hours to drive across:
    In the past, I always implemented terrain with one big heightmap texture, which had a fixed size like 1024x1024, 2048x2048, etc.  However, our vegetation system, featured in the book Game Engine Gems 3, required a different approach.  There were far too many instances of grass, trees, and rocks to store them all in memory, and I wanted to do something really radical.  The solution was to create an algorithm that could instantly calculate all the vegetation instances in a given area.  The algorithm would always produce the same result, but the actual data would never be saved; it was just retrieved in the area where you needed it, when you needed it.  So with a few modifications, our vegetation system is already set up to generate infinite instances far into the distance.

    However, terrain is problematic.  Just because an area is too far away to see doesn't mean it should stop existing.  If we don't store the terrain in memory then how do we prevent far away objects from falling into the ground?  I don't like the idea of disabling far away physics because it makes things very complex for the end user.  There are definitely some tricks we can add like not updating far away AI agents, but I want everything to just work by default, to the best of my ability.
    It was during the development of the vegetation system that I realized the missing piece to this puzzle.  The secret is in the way collision works with vegetation.  When any object moves, all the collidable vegetation instances around it are retrieved, and collision is performed on this fetched data.  We can do the exact same thing with terrain.  Imagine a log rolling across the terrain.  We could use an algorithm to generate all the triangles it potentially could collide with, like in the image below.

    You can probably imagine how it would be easy to lay out an infinite grid of flat squares around the player, wherever he is standing in the world.
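    To make the idea concrete, here is a rough sketch (just an illustration, not engine code) of how such a grid of squares could be generated on demand around the player, with the triangles produced only when and where they are needed:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Illustrative sketch: build the triangles of a flat grid of squares
// centered on the player, generated on the fly rather than stored.
// gridSize is the number of squares along each side, squareSize is the
// width of one square in world units.
std::vector<Vec3> BuildTerrainPatch(float playerX, float playerZ,
                                    int gridSize = 16, float squareSize = 1.0f) {
    std::vector<Vec3> triangles; // every 3 entries form one triangle
    // Snap the patch origin to the grid so the squares don't shift as the player moves
    float originX = std::floor(playerX / squareSize) * squareSize - gridSize * squareSize * 0.5f;
    float originZ = std::floor(playerZ / squareSize) * squareSize - gridSize * squareSize * 0.5f;
    for (int ix = 0; ix < gridSize; ++ix) {
        for (int iz = 0; iz < gridSize; ++iz) {
            float x0 = originX + ix * squareSize, x1 = x0 + squareSize;
            float z0 = originZ + iz * squareSize, z1 = z0 + squareSize;
            // Two triangles per square; the height is zero here, but it
            // could come from any function of the XZ position
            triangles.push_back({x0, 0, z0});
            triangles.push_back({x1, 0, z0});
            triangles.push_back({x1, 0, z1});
            triangles.push_back({x0, 0, z0});
            triangles.push_back({x1, 0, z1});
            triangles.push_back({x0, 0, z1});
        }
    }
    return triangles;
}
```

    The same call works anywhere in the world, which is exactly what makes the terrain effectively infinite.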

    What if we only save heightmap data for the squares the user modifies in the editor?  They can't possibly modify the entire universe, so let's just save their changes and make the default terrain flat.  It won't be very interesting, but it will work, right?
    What if instead of being flat by default, there was a function we had that would procedurally calculate the terrain height at any point?  The input would be the XZ position in the world and the output would be a heightmap value.

    If we used this, then we would have an entire procedurally generated terrain combined with parts that the developer modifies by hand with the terrain tools.  Only the hand-modified parts would have to be saved to a series of files that could be named "mapname_x_x.patch", i.e. "magickingdom_54_72.patch".  These patches could be loaded from disk as needed, and deleted from memory when no longer in use.
    The real magic would be in developing an algorithm that could quickly generate a height value given an XZ position.  A random seed could be introduced to allow us to create an endless variety of procedural landscapes to explore.  Perhaps a large brush could even be used to assign characteristics to an entire region like "mountainy", "plains", etc.
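    Here's a rough illustration of what such a height function might look like, using a few octaves of simple value noise. All the names and constants here are hypothetical, and a production algorithm would surely be more sophisticated, but it shows the key property: the same seed and XZ position always produce the same height, so nothing has to be stored.

```cpp
#include <cmath>
#include <cstdint>

// Deterministic hash of an integer lattice point, returning 0..1
static float Hash(int32_t x, int32_t z, uint32_t seed) {
    uint32_t h = seed;
    h ^= (uint32_t)x * 374761393u;
    h ^= (uint32_t)z * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffffff) / (float)0xffffff;
}

// Smoothly interpolated value noise
static float SmoothNoise(float x, float z, uint32_t seed) {
    int32_t ix = (int32_t)std::floor(x), iz = (int32_t)std::floor(z);
    float fx = x - ix, fz = z - iz;
    // Smoothstep weighting between the four surrounding lattice points
    fx = fx * fx * (3 - 2 * fx);
    fz = fz * fz * (3 - 2 * fz);
    float a = Hash(ix, iz, seed),     b = Hash(ix + 1, iz, seed);
    float c = Hash(ix, iz + 1, seed), d = Hash(ix + 1, iz + 1, seed);
    float top = a + (b - a) * fx, bottom = c + (d - c) * fx;
    return top + (bottom - top) * fz;
}

// The input is the XZ position in the world; the output is a height value.
// Changing the seed produces an entirely different endless landscape.
float TerrainHeight(float x, float z, uint32_t seed = 12345) {
    float height = 0, amplitude = 64, frequency = 1.0f / 256.0f;
    for (int octave = 0; octave < 6; ++octave) {
        height += SmoothNoise(x * frequency, z * frequency, seed + octave) * amplitude;
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return height;
}
```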
    The possibilities of what we can do in Leadwerks Engine 5 are intriguing.  Granted I don't have all the answers right now, but implementing a system like this would be a major step forward that unlocks an enormous world to explore.  What do you think?

  3. Josh
    Current generation graphics hardware only supports up to a 32-bit floating point depth buffer, and that isn't adequate for large-scale rendering because there isn't enough precision to make objects appear in the correct order and prevent z-fighting.

    After trying out a few different approaches I found that the best way to support large-scale rendering is to allow the user to create several cameras. The first camera should have a range of 0.1-1000 meters, the second would use the same near / far ratio and start where the first one left off, with a depth range of 1000-10,000 meters. Because the ratio of near to far ranges is what matters, not the actual distance, the numbers can get very big very fast. A third camera could be added with a range out to 100,000 kilometers!
    The trick is to set the new Camera::SetClearMode() command to make it so only the furthest-range camera clears the color buffer. Additional cameras clear the depth buffer and then render on top of the previous draw. You can use the new Camera::SetOrder() command to ensure that they are drawn in the order you want.
    auto camera1 = CreateCamera(world);
    camera1->SetRange(0.1,1000);
    camera1->SetClearMode(CLEAR_DEPTH);
    camera1->SetOrder(1);

    auto camera2 = CreateCamera(world);
    camera2->SetRange(1000,10000);
    camera2->SetClearMode(CLEAR_DEPTH);
    camera2->SetOrder(2);

    auto camera3 = CreateCamera(world);
    camera3->SetRange(10000,100000000);
    camera3->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);
    camera3->SetOrder(3);

    Using this technique I was able to render the Earth, sun, and moon to scale. The three objects are actually sized correctly, at the correct distance. You can see that from Earth orbit the sun and moon appear roughly the same size. The sun is much bigger, but also much further away, so this is exactly what we would expect.

    You can also use these features to render several cameras in one pass to show different views. For example, we can create a rear-view mirror easily with a second camera:
    auto mirrorcam = CreateCamera(world);
    mirrorcam->SetParent(maincamera);
    mirrorcam->SetRotation(0,180,0);
    mirrorcam->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);
    //Set the camera viewport to only render to a small rectangle at the top of the screen:
    mirrorcam->SetViewport(framebuffer->GetSize().x/2-200,10,400,50);

    This creates a "picture-in-picture" effect like what is shown in the image below:

    Want to render some 3D HUD elements on top of your scene? This can be done with an orthographic camera:
    auto uicam = CreateCamera(world);
    uicam->SetClearMode(CLEAR_DEPTH);
    uicam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);

    This will make 3D elements appear on top of your scene without clearing the previous render result. You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.
  4. Josh
    As I was implementing the collision commands for Leadwerks3D, I noticed a few things that illustrate the differences between the design philosophies of Leadwerks Engine and Leadwerks3D.
     
    You'll easily recognize the new collision commands, although they have been named in a more verbose but logical manner:

    void ClearCollisionResponses();
    void SetCollisionResponse(const int& collisiontype0, const int& collisiontype1, const int& response);
    int GetCollisionResponse(const int& collisiontype0, const int& collisiontype1);
    In Leadwerks Engine, the collisions were left to the end user to define however they wished. In Leadwerks3D, we have some built-in collision types that are declared in the engine source code:

    const int COLLISION_SCENE = 1;
    const int COLLISION_CHARACTER = 2;
    const int COLLISION_PROP = 3;
    const int COLLISION_DEBRIS = 4;
    By default, the following collision responses are created automatically:

    SetCollisionResponse(COLLISION_SCENE,COLLISION_CHARACTER,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_CHARACTER,COLLISION_CHARACTER,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_PROP,COLLISION_CHARACTER,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_DEBRIS,COLLISION_SCENE,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_PROP,COLLISION_PROP,COLLISION_COLLIDE);
    SetCollisionResponse(COLLISION_DEBRIS,COLLISION_PROP,COLLISION_COLLIDE);
    Each entity's default collision type is COLLISION_PROP, which means that without specifying any collision responses, all entities collide with one another.
     
    Of course if you want to scrap all my suggested collision responses and define your own, you can just call ClearCollisionResponses() at the start of your program. The main difference in design is that Leadwerks3D assumes the user wants some default behavior already specified, instead of being a completely blank slate. That's a design decision that is being implemented across all aspects of the engine to make it easier to get started with.
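    To show how the commands above could behave, here is a toy stand-in for the collision response table (not the actual engine code; the constant values and the internal storage are my own guesses). It clears the defaults and then defines a custom response:

```cpp
#include <algorithm>
#include <map>
#include <utility>

// Guessed values; the real constants are defined by the engine
const int COLLISION_NONE = 0;
const int COLLISION_COLLIDE = 1;

const int COLLISION_SCENE = 1;
const int COLLISION_CHARACTER = 2;
const int COLLISION_PROP = 3;
const int COLLISION_DEBRIS = 4;

// The table is symmetric, so each pair is stored with the lower type first
static std::map<std::pair<int,int>, int> responses;

void ClearCollisionResponses() { responses.clear(); }

void SetCollisionResponse(const int& type0, const int& type1, const int& response) {
    responses[{std::min(type0, type1), std::max(type0, type1)}] = response;
}

int GetCollisionResponse(const int& type0, const int& type1) {
    auto it = responses.find({std::min(type0, type1), std::max(type0, type1)});
    return it == responses.end() ? COLLISION_NONE : it->second;
}
```

    After calling ClearCollisionResponses(), only the responses you explicitly set remain, so GetCollisionResponse() returns no collision for every other pair.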
  5. Josh
    Leadwerks3D will ship with a finished game demo to demonstrate how to use the software. Darkness Awaits is a third-person dungeon explorer with a 45 degree view. It's like a cross between Diablo and Legend of Zelda: A Link to the Past. This is an idea I had back in my very early days of game development, before I even learned to program. This was originally done in the Quake 1 engine. It didn't really go anywhere, but what we had was awesome. You could run around and shoot skeletons with flaming arrows, in a third-person view. B) My job was actually map design. My favorite part was this cozy little house you started off in with a fireplace and a warm feeling, before going outside into a snow storm to kill monsters.
     
    And so, the project is being resurrected to demonstrate game development with Leadwerks3D. Below are a few possible logo styles. Which is your favorite?
     




  6. Josh
    I experienced some problems this week when I tried to create an executable with Visual Studio 2008 for deployment. On Windows 7 test machines I would get this error:
     
    I finally tracked the solution down to a Visual Studio project setting. In Project Settings > Configuration Properties > C/C++ > Code Generation there is a property called "Runtime Library". By default it is set to rely on external DLLs. Change these values to non-DLL settings (like MT or MTd) and they will include the runtime library in the executable. This makes for bigger executables, but they will run everywhere.

     
    Having experienced this problem with Visual Studio 2010, I guessed this was the same issue. I uninstalled the Visual C++ 2010 Redistributable Package from my Windows XP machine, then created a simple test program on another Windows 7 machine. First I made sure I could create the problem, by using the default runtime library. Ever seen this error?:

     
    I've seen this issue come up before on this forum. Now you know how to solve it and build Debug or Release executables that will run everywhere, with Visual Studio 2008 and 2010. Here is my test application from Visual Studio 2010, "testapp.MDd.bat" will only run if you have the 2010 Redistributable Package installed. "testapp.MTd.bat" will run everywhere:
    testapp.zip
     
    Since we no longer have problems creating executables for Windows XP with Visual Studio 2010, I see no reason not to move our project on to a newer version. Visual Studio 2012 currently cannot make Windows XP executables, but Microsoft claims this will be supported soon. However, even if deployment to Windows XP is supported, Visual Studio 2012 itself will never run on Windows XP. Switching to Visual Studio 2012 would mean you cannot program Leadwerks with C++ on Windows XP.
     
    What do you think? Should we make the move to 2012, 2010, or stick with 2008?
  7. Josh
    I've got a couple days sales data from the 3.4 launch and it went well. Our conversion ratio and all that boring businessey stuff is good, and I know more now than I did last week. I think our intro video could use some updating and a little more professional polish, so I am looking for a video production company to create something new.
     
    I was planning to make a trip back to California soon. I was going to skip the GDC, but then I got invited to the Valve party. I guess they're hiring the band Glitch Mob to play at Folsom Nightclub, which is kind of awesome.


     
    This isn't enough to make me want to go, but I remember my Valve VR demo came about because I was randomly talking to someone at Steam Dev Days and they emailed me about it later. So even if you don't get something out of it every time, I think it's necessary that I do these kinds of networking things. Not sure if I'll attend the GDC expo itself or not yet, I need to figure out my plans today.
     
    When I get back, work will begin on version 3.5, due out this summer. I want this update to include carving, vegetation, vertex editing, and I have a few other features in mind that will be included based on how much time I have.
     
    There's also a class of improvements I want to make that are more "systemic". This includes driver bugs from third-party vendors (looking at you AMD), bugs or bad design in third-party libraries (GTK), and some improvements I can make in our core renderer design. Uniform buffers and FBOs are two low-level systems I am looking at. I think this will solve the Linux model editor issue and improve performance quite a lot on Linux systems in particular, and have some gains on Windows.
     
    These are potentially destabilizing changes I do not want to attempt this week. I need to be able to do a quick turnaround if any small bugs are discovered right now, and having the renderer all taken apart isn't a good idea at the moment.
     
    Finally, we have a problem with tutorials that needs to be addressed. I keep seeing post after post of people asking where the tutorials are. When you see threads like this, it indicates a problem. I have tried to make the documentation and tutorials pages as "in your face" as possible, yet people still ask where they are...it's the very first thing you see when you run the program! This is probably the most important problem to solve going forward, and has a much stronger effect on us than any feature I can implement or improvement I can make to the software itself.
  8. Josh

    Articles
    Midjourney is an AI art generator you can interact with on Discord to make content for your game engine. To use it, first join the Discord channel and enter one of the "newbie" rooms. To generate a new image, just type "/imagine" followed by the keywords you want to use. The more descriptive you are, the better. After a few moments four different images will be shown. You can upsample or create new variations of any of the images the algorithm creates.

    And then the magic begins:

    Here are some of the images I "created" in a few minutes using the tool:

    I'm really surprised by the results. I didn't think it was possible for AI to demonstrate this level of spatial reasoning. You can clearly see that it has some kind of understanding of 3D perspective and lighting. Small errors like the misspelling of "Quake" as "Quke" only make it creepier, because it means the AI has a deep level of understanding and isn't just copying and pasting parts of images.
    What do you think about AI-generated artwork? Do you have any of your own images you would like to show off? Let me know in the comments below.
  9. Josh
    AI is always a fun programming topic, and it's even more fun when you're mixing a physics-based character controller with dynamic navmesh pathfinding.
     
    We planned on using navmesh pathfinding from the very start of the design of the new engine. Because it was integrated from the very beginning our implementation works really nicely. Pathfinding is completely automatic and dynamic. There are no commands you need to call to make it work. When part of the scene changes, the navigation data for that sector is automatically recalculated. If you haven't seen it already, you can check out some videos on the Leadwerks YouTube channel:


     
    Character controllers are a special physics body used to control the movement of characters in the game. They are used both by players, as a control method, and by enemies and NPCs. Sometimes they will be controlled by keyboard or touch input, and sometimes they are controlled by AI navigation. Consequently, the character controller class has to be able to handle both.
     
    There are so many cool things I can do that it's fun and a little scary. Right now I am toying with the following API. The first two commands would just make the character constantly move to the position or follow the entity you specify, so you only need to call them once, and the engine will handle the rest:

    bool CharacterController::GoToPoint(const float& x, const float& y, const float& z)
    bool CharacterController::Follow(Entity* entity)
    void CharacterController::Stop()
     
    And then you still have manual controls, which are analogous to "UpdateController" from LE2:

    CharacterController::SetInput(const float& angle, const float& movement, const float& strafe...)
     
    In the screenshot below, I control the rightmost character with the keyboard, while the others are programmed to follow me:
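    The "call it once and the engine handles the rest" idea can be sketched with a toy controller that remembers its goal and steers toward it on every physics update. This is a simplified stand-in, not the engine's implementation (the real one would route through navmesh pathfinding rather than walking in a straight line):

```cpp
#include <cmath>

// Toy sketch: after one GoToPoint() call, the controller keeps moving
// itself a little closer to the goal each physics step until it arrives.
class CharacterController {
public:
    bool GoToPoint(const float& x, const float& y, const float& z) {
        goalX = x; goalY = y; goalZ = z; hasGoal = true;
        return true; // the real engine presumably returns false if no path exists
    }
    void Stop() { hasGoal = false; }

    // Called by the engine on each physics update; speed is distance per step
    void Update(float speed) {
        if (!hasGoal) return;
        float dx = goalX - posX, dz = goalZ - posZ;
        float dist = std::sqrt(dx * dx + dz * dz);
        if (dist <= speed) { posX = goalX; posZ = goalZ; hasGoal = false; return; }
        posX += dx / dist * speed; // step toward the goal
        posZ += dz / dist * speed;
    }

    float posX = 0, posY = 0, posZ = 0;

private:
    float goalX = 0, goalY = 0, goalZ = 0;
    bool hasGoal = false;
};
```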

  10. Josh
    I've been working on water. It's a challenge to get it to work properly with the post-effects stack and lighting, something I never did quite right in Leadwerks 2. However, the results are turning into the best implementation of water I've ever made.
    Water blends naturally with post-effects stack.
    Water reflection/refraction equation and behavior is improved over Leadwerks 2.
    Water ripples don't have an obvious direction of flow and look very realistic, without looking "wrong" when placed in certain maps (i.e. water flowing sideways against a coastline).

     
    I am not attempting to do 3D ocean waves like in Crysis and some other games. Although these techniques came close to looking real, I feel like they haven't aged well and sit in the uncanny valley and sort of look like saran wrap when in motion. They also aren't as versatile as the water I am making, and only look good in some specific types of settings.
     
    Here's a WIP image with a bunch of post-effects applied to make sure everything works right together.
     

     
    I also met with Valve and asked them to add a desktop version of the in-app purchase dialog for user items. This will allow other users to easily purchase the items you publish for use with Leadwerks.
     

  11. Josh
    Since you guys are my boss, in a way, I wanted to report to you what I spent the last few days doing.
     
    Early on in the development of Leadwerks 3, I had to figure out a way to handle the management of assets that are shared across many objects. Materials, textures, shaders, and a few other things can be used by many different objects, but they consume resources, so it's important for the engine to clean them up when they are no longer needed.
     
    I decided to implement an "Asset" and "AssetReference" class. The AssetReference object contains the actual data, and the Asset object is just a handle to the AssetReference. ("AssetBase" probably would have been a more appropriate name). Each AssetReference can have multiple instances (Assets). When a new Asset is created, the AssetReference's instance count is incremented. When an Asset is deleted, its AssetReference's instance counter is decremented. When the AssetReference instance counter reaches zero, the AssetReference object itself is deleted.
     
    Each different kind of Asset had a class that extended each of these classes. For example, there was the Texture and TextureReference classes. The real OpenGL texture handle was stored in the TextureReference class.
     
    Normal usage would involve deleting extra instances of the object, as follows:

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    material->SetTexture(texture);// This will create a new instance of the texture
    delete texture;
     
    This isn't a bad setup, but it creates a lot of extra classes. Remember, each of the AssetReference classes actually gets extended for the graphics module, so we have the following classes:

    Asset
    Texture
    AssetReference
    TextureReference
    OpenGL2TextureReference
    OpenGLES2TextureReference
     
    The "Texture" class is the only class the programmer (you) needs to see or deal with, though.
     
    Anyways, this struck me as a bad design. Several months ago I decided I would redo this before the final release. I got rid of all the weird "Reference" classes and instead used reference counting built into the base Object class, from which these are all derived.
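    A minimal sketch of what reference counting built into the base Object class might look like (hypothetical code, greatly simplified from the real engine; the liveCount counter is only there to make the behavior visible):

```cpp
// Hypothetical sketch of a reference-counted base class
class Object {
public:
    void IncRefCount() { ++refcount; }
    void Release() {
        if (--refcount == 0) delete this; // self-deletes at zero references
    }
protected:
    virtual ~Object() {} // protected: deletion only happens through Release()
private:
    int refcount = 1; // objects start with one reference
};

class Texture : public Object {
public:
    static int liveCount; // for illustration: how many textures currently exist
    static Texture* Load(const char* /*path*/) { return new Texture; }
protected:
    Texture() { ++liveCount; }
    ~Texture() { --liveCount; }
};
int Texture::liveCount = 0;

class Material : public Object {
public:
    static Material* Create() { return new Material; }
    void SetTexture(Texture* t) {
        if (t) t->IncRefCount();      // the material now holds its own reference
        if (texture) texture->Release();
        texture = t;
    }
protected:
    ~Material() { if (texture) texture->Release(); } // releases its textures
private:
    Texture* texture = nullptr;
};
```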
     
    The usage for you, the end user, isn't drastically different:
     

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    material->SetTexture(texture);
    texture->Release();// Decrements the reference counter and deletes the object when it reaches zero
     
    In the Material's destructor, it will actually decrement each of its texture objects, so they will automatically get cleaned up if they are no longer needed. In the example below, the texture will be deleted from memory at the end of the code:

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    material->SetTexture(texture);
    texture->Release();
    material->Release();// texture's ref count will be decremented to zero and it will be deleted
     
    If you want to make an extra "instance" (not really) of the Asset, just increment the reference counter:
     

    Material* material = Material::Create();
    Texture* texture = Texture::Load("myimage.tex");
    texture->IncRefCount();// increments ref count from 1 to 2
    Texture* tex2 = texture;
    texture->Release();// decrements ref count from 2 to 1
     
    This makes the code smaller, gets rid of a lot of classes, and I think it will be easier to explain to game studios when I am trying to sell them a source code license.
     
    Naturally any time you make systemic changes to the engine there will be some bugs here and there, but I have the engine and editor running, and mistakes are easy to find when I debug the static library.
     
    I also learned that operator overloading doesn't work with pointers, which I did not know, but had never had a need to try before. Originally, I was going to use operator overloads for the inc/dec ref count commands:

    Texture* tex = Texture::Create();//refcount=1
    tex++;//refcount=2
    tex--;//refcount=1
    tex--;// texture would get deleted here
     
    But that just messes with the pointer! So don't do things like that.
  12. Josh
    Previously, I described the goals and philosophy that were guiding my design of our implementation of the Leadwerks Workshop on Steam. To review, the goals were:
    1. Frictionless sharing of items within the community.
    2. Protection of intellectual property rights.
    3. Tracking of the chain-of-authorship and support for derivative works.
     
    In this update I will talk more specifically about how our implementation meets these goals.
     
    Our implementation of the Steam Workshop allows Leadwerks developers to publish game assets directly to Steam. A Workshop item is typically a pack of similar files, like a model or texture pack, rather than single files:

     
    To add an item to Leadwerks, simply hit the "Subscribe" button in Steam and the item will become available in a new section of the asset browser:

     
    You can drag Workshop files into your scene and use them, just like a regular file. However, the user never needs to worry about managing these files; all subscribed items are available in the editor, no matter what project you are working on. When a file is used in a map or applied to a model, a unique global ID for that file is saved, rather than a file path. This allows the item author to continue updating and improving the file without ever having to re-download files, extract zip archives, or any other mess. Effectively, we are bringing the convenience of Steam's updating system to our community, so that you can work together more effectively. Here's one of the tutorial maps using materials from a sci-fi texture pack from the Workshop. When the map is saved, the unique file IDs are stored so I can share the map with others.
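    The ID-instead-of-path idea could be sketched like this (a toy stand-in; the engine's actual bookkeeping is internal and the names here are made up). The map file stores only the stable global ID, so the author can rename or update the underlying file freely:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Toy stand-in: a registry mapping stable global IDs to local file paths
static std::map<uint64_t, std::string> workshopFiles;

// Called when a subscribed package is installed or updated;
// renaming a file just updates this entry, old maps keep working
void RegisterWorkshopFile(uint64_t globalId, const std::string& localPath) {
    workshopFiles[globalId] = localPath;
}

// Called when a map references a Workshop item by its saved ID;
// returns an empty string when the package is not subscribed
std::string ResolveWorkshopFile(uint64_t globalId) {
    auto it = workshopFiles.find(globalId);
    return it == workshopFiles.end() ? std::string() : it->second;
}
```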

     
    Publishing your own Workshop packages is easy. A built-in dialog allows you to set a title, description, and a preview image. You can add additional images and even videos to your item in Steam:

     
    Leadwerks even has support for derivative works. You can create a model, prefab, or map that uses another Workshop file and publish it to Steam. Since Leadwerks tracks the original ID of any Workshop items you used, they will always be pulled from the original source. This allows an entirely new level of content authors to add value to items downstream from their origin, in a way similar to how Linux distributions have grown and evolved. For example, maybe you don't have the artistic skill to make every single texture you need for a house, but you can put together a pretty nice house model and paint it with another user's textures. You can then upload that model right back to the Workshop, without "ripping off" the texture artist; their original package will still be needed to load the textures. It's perfectly fine to change the name of your Workshop package at any time, and you never need to worry about your file names conflicting with files in other packages. (If you decide you want to change a lot of file names, it's best to just create a new package so that you don't interrupt the work of users "downstream" from you.)
     
    Uninstalling a Workshop package just requires you to hit the "unsubscribe" button on the item's page in the Steam Workshop. No more hunting around for stray zip files! You can easily check out other users' work, use whatever you like, and unsubscribe from the packages you don't like, with no mess at all.
     
    How Do I Get It?
    The Leadwerks Workshop beta begins today. You must be a member of the Leadwerks Developer group on Steam to access the Workshop. A limited number of beta invites are being sent out. Once the system is completely polished, we will make it available to the entire Leadwerks community.
  13. Josh
    Leadwerks Game Engine 4.4 has been updated on the beta branch on Steam.
    Networking is finished and documented. The GUI is finished. All new physics features are finished. The character controller physics and picking up objects have been improved and made smoother.  There is a problem with the player sliding down slopes, as seen in the FPS Character Controller example map.  I will work this out.
    I also noticed during testing that picking up some objects in the FPS / AI map will freeze the game.  Will check it out.
    I have not actually tested compiling on Linux yet, because my Linux machine is in the office and I am at home right now.  I'm heading in this afternoon, at which point I will complete Linux testing.
    The only other problem is that vehicles are not working yet.  I'm not sure yet how I will proceed with this.
    Updating C++ Projects
    The following changes are needed to update your C++ projects:
    Visual Studio
    Add these include header search directories:
    $(LeadwerksHeaderPath)\Libraries\NewtonDynamics\packages\thirdParty\timeTracker
    Add these input libraries:
    newton_d.lib;dContainers_d.lib;dCustomJoints_d.lib; (debug)
    newton.lib;dContainers.lib;dCustomJoints.lib; (release)
    Code::Blocks
    Add these include header search directories:
    $(LeadwerksPath)/Include/Libraries/NewtonDynamics/packages/thirdParty/timeTracker
    You also need the dev files for libcurl:
    sudo apt-get install libcurl4-openssl-dev
    This is pretty much the finished 4.4, so please test it and post any bug reports you have.  Thank you.
  14. Josh

    Articles
    I wanted to take some time to investigate geospatial features and see if it was possible to use GIS systems with Ultra Engine. My investigations were a success and resulted in some beautiful lunar landscapes in a relatively short period of time in the new game engine.

    I have plans for this, but nothing I can say anything specific about right now, so now I am working on the editor again.
    Leadwerks had a very well-defined workflow, which made it simple to use but also somewhat limited. Ultra goes in the opposite direction, with extremely flexible support for plugins and extensions. It's a challenge to keep things together, because if you make things too flexible it just turns into indecisiveness, and nothing will ever get finished that way. I think I am finding a good balance of things that should be hard-coded and things that should be configurable.
    Instead of separate windows for viewing models, textures, and materials, we just have one asset editor window, with tabs for separate items. The texture interface is the simplest and just shows some information about the asset on the left-side panel.

    When you select the Save as menu item, the save file dialog is automatically populated with information loaded from all the plugins the engine has loaded. You can add support for new image formats at any time just by dropping a plugin in the right folder. In fact, the only hard-coded format in the list is DDS.

    You can save to an image format, or to the more advanced DDS and KTX2 formats, which include mipmaps and other features. Even more impressive is the ability to load Leadwerks texture files (TEX) and save them directly into DDS format in their original pixel format, with no loss of image quality. Here we have a Leadwerks texture that uses DXT1 compression, converted to DDS without recompressing the pixels:

    There are several tricks to make the new editor start up as fast as possible. One of these is that the various windows the editor uses don't actually get created until the first time they are used. This saves a little bit of time to give the application a snappier feel at startup.
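    The lazy-creation trick might look something like this (a hypothetical sketch, not the editor's actual code): the window object isn't constructed until the first time something asks for it, so startup pays no cost for windows the user never opens.

```cpp
#include <memory>
#include <string>

// Stand-in for an expensive-to-create editor window
struct Window {
    std::string title;
    static int created; // for illustration: how many windows have been built
    explicit Window(std::string t) : title(std::move(t)) { ++created; }
};
int Window::created = 0;

// Wrapper that defers construction until the first Get() call
class LazyWindow {
public:
    explicit LazyWindow(std::string title) : title(std::move(title)) {}
    Window* Get() {
        if (!window) window = std::make_unique<Window>(title); // created on first use
        return window.get();
    }
private:
    std::string title;
    std::unique_ptr<Window> window;
};
```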
    I hope the end result will provide a very intuitive workflow like the Leadwerks editor has, with enough power and flexibility to make the editor into a living growing platform with many people adding new capabilities for you to use. I think this is actually my favorite stuff to work on because this is where all the technology is exposed to you, the end user, and I get to carefully hand-craft an interface that makes you feel happy when you use it.
  15. Josh
    I got my idea working with spot lights! You can simply use a shadow mode of 2 to indicate an object or light should be considered "static". A light that has the static mode set will use two shadow maps; one for static objects and one for dynamic objects. In the image below, the walls and room are static, and the oildrum has the regular dynamic shadow mode set. As you can see, redrawing the shadow only requires 650 triangles. Without this feature, the whole room and everything in it would have to be redrawn any time something moved!
     



    Best of all, if the light moves or if any static object within the light's volume moves, both the static and dynamic shadow maps are redrawn. The process is seamless for the end user. Dynamic lighting is usually dynamic, and baked lighting is usually static. I'm not sure what this qualifies as, but one thing is for sure...it's optimal!
     
    This will yield a huge performance increase for scenes with lots of point and spot lights, as long as those lights remain in a fixed position. You should avoid having lots of moving point and spot lights.
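    As a hedged sketch of how this might look from a script: the value 2 for static objects comes from the description above, but the SetShadowMode function name, the dynamic mode value, and the entity names are my assumptions, not confirmed API:

```lua
--Hedged sketch only: SetShadowMode and the mode values other than 2
--are assumptions based on the description above.
room:SetShadowMode(2)     --static; cached in the light's static shadow map
oildrum:SetShadowMode(1)  --hypothetical dynamic mode; redrawn when it moves
light:SetShadowMode(2)    --light maintains separate static and dynamic maps
```

With the room marked static, only the oildrum's 650 triangles need to be redrawn when it moves.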
  16. Josh
    CSG carving now works very nicely. The hardest part was making it work with the undo system, which was a fairly complicated problem. It gets particularly difficult when brushes are in the scene hierarchy as a parent of something or a child of something else. Carving involves the deletion of the original object and creation of new ones, so it is very hard to describe in a step-wise process for the undo system, especially when all manner of objects can be intermixed with the brush hierarchy.
     

     
    Eliminating the ability for brushes to be parented to another object, or another object to be parented to them, would solve this problem but create a new one. Compound brushes such as arches and tubes are created as children of a pivot so that when you select one part, the entire assembly gets selected. Rather than a hierarchy, the old "groups" method from 3D World Studio would be much better here. It becomes a problem when you carve a cylinder out of a brush and are left with a lot of fragments that must be individually selected.
     
    At the same time, the scene tree needs some love. There are too many objects in it. The primary offenders are brushes, which maps tend to have a lot of, and limbs in character models.
     

     
    Filters like in Visual Studio could help to eliminate some of the noise here. I would set it up to automatically add new brushes into a "Brushes" folder in the scene tree. What would be really nice is to eliminate model sub-hierarchies from the scene tree. This means the limbs of a model would be effectively invisible to the editor, but it would still allow parenting of different objects to one another. In the screenshot below, the crawler skeletal hierarchy is not shown in the editor, but he has a particle emitter parented to him. Opening the model in the model editor will still show the entire sub-hierarchy of bones.
     
    Notice the character has a point light as a child, but its own limb hierarchy is tucked away out of sight.
     

     
    If the model sub-hierarchy were not shown in the scene tree, this would free me from having to treat hierarchies as groups, and implement a grouping feature like 3D World Studio had. Filters could be added as well, instead of relying on a pivot to organize the scene hierarchy.
     

     

     
    To sum up, this change would cause group selection to be based on user-defined groups rather than limb hierarchy. The sub-hierarchy of model limbs would no longer be visible in the editor. Objects could still be parented to one another, and they would be parented to the base model rather than a specific limb.
     
    What changes:
    Loss of the ability to adjust individual limbs in a model hierarchy. For example, if you wanted to add a script to a tank's turret, the turret would need to be a separate model file you parent to the tank body in the editor. It would not be possible to set the color of one individual limb in a model file, unless you had a script that found the child and did this. Setting the color of the model would affect the entire model, including all limbs.
    Loss of the ability to parent a weapon to one specific bone in the editor. Scripts can still call FindChild() to do this in code, which is what I suspect most people do anyway, because diving into the mesh limbs to find one specific bone is pretty difficult.

     
    What we gain:
    One button to group / ungroup objects.
    No more confusing multi-selection of entire limb hierarchies.
    Compound brushes no longer need a pivot to hold them together, can be created as a new group.
    Carved brush fragments can be automatically grouped for easy selection.
    If model hierarchy changes, reloaded maps match the model file more accurately; no orientation of limbs in unexpected places.
    Faster, less bloated scene tree without thousands of unwanted objects.
    Better entity selection method to replace drag and drop.
    The scene tree in general feels a lot less weighted down and a lot more manageable.

     
    It is possible this may modify the way some maps appear in the editor, but it would be an "edge case". Improvements are needed to accommodate brush carving and address the present limitations of the scene tree. This does not involve any changes to the engine itself, except updating the map load routine to read the additional data.
  17. Josh
    My productivity has not been good lately because I am agonizing over where to live. I don't like where I am right now. Decisions like this were easy in college because I would just go where the best program was for what I was interested in. It's more difficult now to know if I am making the right choice.
     
    I can "go big" and move to San Francisco, which weirdly has become the capital of tech. I lived there before and know all the craziness to expect. I used to work at 24th and Mission (before Leadwerks existed) which was and still is a dangerous neighborhood, but if you go one block north there's now an Alamo Drafthouse where drug dealers used to offer me crack cocaine. (I was gentrifying the Mission before it was cool.) San Francisco is a stressful place full of insecure nasty people but there's also a ton of stuff to do and it's full of world-class tech talent and a lot of interesting cool people from all walks of life.
     
    The other option is to move out to the midwest somewhere and buy a house for cheap, and just chill out and work. I won't have to spend time fighting with insane traffic, trying to get groceries, and dodging hobo poop on the sidewalk.
     
    At the same time, Leadwerks has reached a level of success where I can easily get the attention of people in the game industry I previously did not have access to. The 10,000 user announcement (set to hit 20,000 this summer) was a big deal.
     
    Basically, I need to get relocated in the next two weeks, open a new office, and get back to work. I'm having a hard time deciding this.
  18. Josh
    It doesn't happen to me much anymore but when I get stuck on a difficult problem, I get very restless and short-tempered. I guess that's the point where most programmers give up on a project. Since I can't physically wrestle code to the ground and give it a good beating Billy Batts style, the fight or flight response probably is not conducive to sitting in front of a piece of glass and figuring out code. On the other hand, I have a tendency to get difficult things done, no matter how hard they are. Maybe a more patient programmer in the same situation would be too patient to produce useful output under his own volition.
     
    My understanding of how Recast works is that it turns upwards-facing triangles into a grid, then uses that grid to construct a navigation mesh. The results seem very robust, but the fundamental approach is an approximation. It is possible to construct a perfectly accurate navigation mesh out of CSG solids, because they have volumetric properties an arbitrary polygon soup does not. On the other hand, my instinct says people will prefer an approximation that works with any arbitrary geometry over a mathematically correct solution that requires constructive solid geometry.
     
    Another approach is to simply let the user construct the navigation mesh in the editor. Since the pieces need to be convex, a CSG editor is ideal for this. This also has the possibility of adding more advanced functionality like climbing up walls and jumping. (This is how Left 4 Dead 2 works.) Still, I know people will value a pretty-good solution that works with arbitrary polygon geometry more, and that gets into a land of approximations I don't really have any experience with. I suspect it would involve a lot of adjustments and fine-tuning to get the desired results.
     
    You can see here, the results are definitely an approximation, but they're a pretty good one:

     
    So here are the options I am looking at:
    1. Hire someone with existing knowledge of recast to write the implementation.
    2. Find someone with existing knowledge of recast and work off their knowledge to write the implementation myself.
    3. Stare at incomprehensible code for days on end and hope that somehow imparts knowledge of how to use the library. Maybe other people can understand this, but I am awful at deciphering other people's code.
    4. Write my own polygon-based solution.
    5. Write my own CSG-based solution.
     
    I think the CSG-based solution is the best technical choice, but I think it would cause a lot of complaints. It's fine for Valve, but I think a lot of people would just get mad if I tried to explain why arbitrary triangle meshes have no volumetric properties.
     
    Another frightening thing about Recast is that, from what I am reading, a lot of people are using the demo application as a tool to generate their navmesh data, and just loading the saved files it produces in their own game. That's completely unacceptable. We need to be able to generate this data in our own editor. I know it's possible, but the lack of people able to do this is an indication of the difficulty of the task.
     
    The pathfinding stuff is actually the last bit of research I have to complete before I know how everything in Leadwerks3D works. The rest is just a matter of hard work, but all the unknown factors we started with will be resolved.
     

  19. Josh
    Crowd navigation is built into Leadwerks3D. You just set a destination, and your characters will automatically travel wherever you tell them to go. I think having navigation built into the engine will give us better results than a third party add-on would. We can automatically recalculate sections of the map that change, AI script functions can be called when certain events occur, and the code to control characters is extremely simple.
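    As a rough sketch of how simple the script side could be, something like the following; the GoToPoint call, the UpdatePhysics hook, and the self.target field are my assumptions about the API, not confirmed names:

```lua
--Hedged sketch: function and field names here are assumptions, not final API.
function Script:UpdatePhysics()
	if self.target ~= nil then
		--The engine computes the path and steers the crowd agent automatically
		self.entity:GoToPoint(self.target:GetPosition(), 1.4, 1)
	end
end
```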
     


  20. Josh
    Jorn Theunissen has been hired on a part-time basis to help design a new set of documentation for Leadwerks. We're taking a top-down approach whereby he designs an outline that encompasses everything you need to know, assuming the reader starts with zero knowledge. The goal is to explain everything in one set of information, down to the level of describing what a mipmap actually is.
     
    The organization of the material will be a linear list of lessons, with no nesting or sub-sections. This makes it easy to track your progress as you go through the material, and it's very clear where information is, with big thumbnails and descriptions for each chapter. Here is a mock-up of what the organization might look like:
     

     
    Benjamin Rössig has also been hired on a part-time basis to work on various features I don't have time for. He has contributed to Leadwerks in the past with the original program update system from Leadwerks 2 and some of the Steamworks API calls, and implemented the Scintilla-based text editor. He is well-versed in C++, 3D graphics, and BlitzMax, which made this an easy choice.
     
    Plans for adding paid items to the Workshop are underway and I am working with about 20 third-party artists to bring their content to the Leadwerks Workshop. If you are interested in selling your game items in the Leadwerks Workshop and want to get in before the launch, please contact me.
     
    Leadwerks Game Player is in testing. There are various challenges as some of this technology is very new, but we'll keep iterating on this, improving it, and come up with a polished result for the end user. There is a major event coming in June that will give your Leadwerks Lua games major exposure. If you have a game you want to publish in the Game Player please contact me.
     
    Finally, Leadwerks 3.5 is scheduled for release at the end of April. This will include CSG boolean operations, scene filters, groups, a new game project template, and possibly some features Benjamin is researching now.
  21. Josh
    Previously, I laid out a pretty complete design of the racing game template I want to build. There's definitely enough there to build out the concept with very little left unanswered.
     
    We are going to have a marble game template, because of its simplicity and ease of learning. However, unless the template really looks like a game people would want to play, it doesn't offer enough "carrot" to inspire people. This idea is less well-defined than the racing game, so I am only in the idea stage right now. There's two big aspects of this I want to figure out: level design and game mechanics.
    Level Design
    If you look at a lot of games of this type on Steam, they pretty much all just use some random blocks or generic levels and place a ball in them. This is what I do not want:
     

     
    A game of this type should have a highly ordered repeating geometry.
     

     

     
    These examples are more orthogonal than I would like, but it's a start. I definitely want the base of it to be a checkered pattern. I do not want random blocks. See how the Sonic level design uses some trim along the edges to make it look nicer?
     
    Here's an example of some more organic elements mixed into the design:
     

     
    Here's a nice curve. Not exactly what I want, but it provides some ideas:
     

     
    I like the curving plastic pieces here:
     

     
    It should look like a toy kit you put together. The wooden elements are also pretty cool.
     
    So here's my quick summary of what I want in the level design:
    Mostly orthogonal and diagonal brushes with a checkerboard pattern.
    Lush vegetation to break up edges and make the world less boxy.
    Curved pre-formed pieces that look like plastic and wooden toys.
    Overall I want the world to feel like an old Nintendo fantasy world with an adventurous feel. The music might not be 8-bit, but should have some of that style:
    http://www.playonloop.com/2017-music-loops/doctor-gadget/
     
    The marble itself should have a distinctive look. Maybe a reflective silver ball-bearing, using the SSG effect to reflect the world around it?
    Here are some concept textures.




    Mechanics
    I am pretty foggy on this because I don't play this type of game. Fans? Moving platforms? Loop-de-loops? Got any suggestions or examples from other games you would like to see me implement? If you have a video of gameplay, that's even better.
    The idea is to implement a few reusable mechanics that can be recombined in interesting ways, so that someone can make a finished game just by recombining them and making additional levels. I also favor mechanics that are associated with a specific model so that they are easily recognizable, like fans, magnets, saws, etc.
    Tell me your ideas in the comments below.
  22. Josh
    This update brings the addition of lightmapping across curved surfaces with smooth groups. The image below is a set of lightmapped CSG brushes, not mesh lighting. You can read a detailed account of our implementation of this feature here.
     

     
    The project manager now includes an "Update" button, so you can easily update your project any time we modify files in the template folders. Although our tests showed no problems, it is recommended you back up your project before using this feature. The editor will make a copy of any overwritten files, as an added precaution.
     

     
    You've now got full control over all the features and settings in the Leadwerks editor, through the new Options dialog. This gives you control over various program behaviors and settings. You can toggle grid snapping, control the snap angle, and lots of other stuff.
     

     
    We have made one change to the script system. Our multiple script design worked reasonably well during the development of Darkness Awaits. The flowgraph interactions were clean, but when it came to AI and player interaction, things got messy. For example, when the player pushes a button we perform a raycast or proximity test to get the entity hit. Then we have to go through all the scripts attached to the entity, looking for a relevant script attached to it:

    --Check if the GoblinAI component is present
    if entity.script.GoblinAI ~= nil then
        --Call the TakeDamage() function
        entity.script.GoblinAI:TakeDamage(10)
    end
     
    That works okay in our example, but when we consider other enemies we want to add, suddenly it gets ugly. We have a few choices.
    1. Add an if statement for every new AI script, checking to see if it is present.
    2. Separate the health value out into its own script.
    3. Loop through all attached scripts looking for a TakeDamage() function.

     
    It should be obvious why option #1 is a bad idea. This would make our code highly interdependent. Encapsulation is one of our goals in game scripting, so we can achieve drag and drop functionality (or as close to that as is reasonably achievable without limiting ourselves).
     
    The second option would solve our immediate problem, but this approach means that every single script variable two scripts access has to be a separate script. The thought of this just shuts my creativity down. I already think it's tedious to have to attach an AI and AnimationManager script to an entity, and can't imagine working with even more pieces.
     
    The third option is the most reasonable, but it greatly impedes our coding freedom. It means every time two entities interact, you would have to do something like this:

    --Check if a components table is present
    if entity.components ~= nil then
        --Iterate through all attached components
        for k, v in pairs(entity.components) do
            if type(v.TakeDamage) == "function" then
                --Call the TakeDamage() function
                v:TakeDamage(10)
            end
        end
    end
     
    If you actually wanted to return a value, you would just have to get the first value and exit the loop:

    local enemyhealth = 0
     
    --Check if the components table is present
    if entity.components ~= nil then
        --Iterate through all attached components
        for k, v in pairs(entity.components) do
            if type(v.GetHealth) == "function" then
                --Call the GetHealth() function
                enemyhealth = v:GetHealth()
                break
            end
        end
    end
     
    This is a major problem, because it means there is no firm "health" value for an entity. Sure, we could consider it a "HealthManager.health" value but therein lies the issue. Instead of having plug-and-play scripts everyone can share, we have to devise a system of expected script names and variables. This breaks compartmentalization, which is what we were going for in the first place. Both Chris and I realized this approach was fundamentally wrong.
     
    After careful consideration, and based on our experience working on Darkness Awaits, we have restructured the system to work as follows, with a 1:1 entity:script relationship:

    --Check if a script is present
    if entity.script ~= nil then
        if type(entity.script.TakeDamage) == "function" then
            --Call the TakeDamage() function
            entity.script:TakeDamage(10)
        end
    end
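    For comparison, here is how the earlier GetHealth() lookup collapses under the 1:1 design. This is a sketch with the entity mocked as a plain table so it runs standalone; in the engine, entity.script would be created when a script is attached:

```lua
--'entity' is mocked here so the snippet is self-contained; in the engine,
--this table would exist after a script is attached to the entity.
local entity = {
	script = {
		health = 75,
		GetHealth = function(self) return self.health end
	}
}

local enemyhealth = 0
--With exactly one script per entity, no component loop is needed
if entity.script ~= nil and type(entity.script.GetHealth) == "function" then
	enemyhealth = entity.script:GetHealth()
end
print(enemyhealth) --> 75
```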
     
    You can set an entity's script with the new function:

    Entity::SetScript(const std::string& path)
     
    You can also set and get entity script values right from C++:

    virtual void SetString(const std::string& name, const std::string& s);
    virtual void SetObject(const std::string& name, Object* o);
    virtual void SetFloat(const std::string& name, const float f);
     
    And it's easy to call a script function from your C++ code:

    virtual bool CallFunction(const std::string& name, Object* extra=NULL);
     
    The properties dialog is changed slightly, with a consistent script tab that always stays visible. Here you can set the script, and properties will appear below, instead of being spread across a bunch of different tabs:

     
    Maps with entities using only one script (which has been almost everything we see) are unaffected. Objects with multiple scripts need to have their code combined into one, or split across multiple entities. I am very reluctant to make changes to the way our system works. Our API has been very stable since day one of release, and I know it's important for people to have a solid foundation to build on. However, I also knew we made a design mistake, and it was better to correct it sooner rather than later.
     
    Probably the best aspect of the script system in Leadwerks 3 has been the flowgraph connections between scripted objects. That's been a big winner:
     

     
    As we use the editor more and hear about the community's experience, it allows us to refine our tools further. One of the more subtle changes we made is in the properties editor. One of the annoyances I experienced was setting a property like mass or position when an entire hierarchy of entities was selected. Obviously I didn't want to set the mass for every single bone in a character, but how do I tell the editor that? I went through all the properties, and for the ones the user is unlikely to want set across a hierarchy, I made the following rule: if the entity has a selected parent anywhere in the hierarchy, it gets ignored when properties are retrieved and applied. (Other things like color will still work uniformly.) Just try it, and you'll see. It speeds up editing dramatically.
     
    In a similar vein, we finally solved the problem of "too many selection widgets." We only display the top-most control widget for each group of selected objects. Again, it's easier for you to just try it and see than for me to explain in detail. If we did our job right, you might not even notice, because the editor will just do what you want without any thought.
     
    You've also got control over the color scheme, and can use it to customize the look and feel of the 3D viewports and code editor.
     

     
    On Windows we fixed an annoyance that triggered menu shortcuts when the user tried to copy and paste in text fields. This tended to happen in the properties editor. We're working to resolve the same issue in the Cocoa UI for Mac.
     
    I'm a big fan of the 3ds Max MAXScript system, so in the output panel we've added a real-time Lua console:

     
    You can type Lua commands in and interact with Leadwerks in real-time. Just for fun, try pasting in this line of code and see what happens:

    for i=0,10 do a = Model:Box(); a:SetColor(math.random(0,1),math.random(0,1),math.random(0,1)); a:SetPosition(i*2,0,0); end  
    You can't create objects the editor will recognize (yet), but it's fun to play with and should give you an idea of where we're going with the editor.
     
    This is an important update that includes new features and enhancements that improve usability and workflow. Thank you for your feedback. It's great to see all the activity that is taking place as people learn Leadwerks 3.
  23. Josh
    I updated to the latest version of Newton and did a lot of work on character physics.
    All collisions below the step height are now ignored, regardless of incline.
    Character collision in general is much more stable and accurate.
    Terrain collisions now work properly.
    Character collisions now use an adaptive method that is much faster overall.

     
    Opt into the beta branch on Steam to get the update. If you're using Lua, be sure to update your project to get the latest EXEs.
  24. Josh
    I've implemented DPI scaling into Leadwerks GUI (beta branch, Lua on Windows only).
     
    To set the GUI scale you just call gui:SetScale(scalefactor). A scale factor greater than one will make it bigger, a scale factor less than one will make it smaller. There is no limit to the range you can set, but negative numbers are probably not good.
     
    Scaling causes the GUI to be drawn at a different size, but widget dimensions and mouse events are still interpreted as though they were at their original size. For example, the mouse move script function will always indicate a coordinate of (510,510) when you move the mouse into the lower-right corner of a 500x500 widget positioned at (10,10), no matter how big or small the GUI is scaled onscreen. Widget::GetWidth() will return the widget's unscaled dimensions, not the actual size it appears at. To get the actual size, you would multiply the dimensions by the GUI scale factor and round off. However, you should never need to calculate this yourself, as all numbers are made uniform for you.
     
    Scaled widget positions are still rounded to integers in order to make them appear precisely on the screen pixels. We don't want sub-pixel blurring with widgets that line up in between pixels; we want sharp, clear, pixel-aligned rendering. Interestingly, the mouse script functions are now receiving floating point coordinates instead of integers. For example, if you move the mouse across a widget with the GUI scaled to 200% (2.0) you will see the mouse coordinates change in 0.5 increments like 150, 150.5, 151, 151.5, 152, etc.
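    The coordinate rules described above amount to a couple of small formulas. Here is an illustrative sketch in plain Lua (not the engine's actual code):

```lua
--Illustrative sketch of the scaling math described above; not engine code.

--Widget positions are scaled, then rounded to land exactly on screen pixels
local function scaledPosition(x, scale)
	return math.floor(x * scale + 0.5)
end

--Mouse events are converted back to unscaled GUI coordinates,
--which can produce fractional values such as 150.5 at 200% scale
local function unscaledMouse(screenx, scale)
	return screenx / scale
end

print(scaledPosition(10, 2.0))  --> 20 (a widget at x=10 drawn at 200%)
print(unscaledMouse(301, 2.0))  --> 150.5 (one screen pixel = 0.5 GUI units)
```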
     
    Scaling presently does not affect text rendering, as this will take more work to figure out how to manage.
     
    Below is a simple GUI scaled at 100%:

     
    Now at 200%. Note that text scaling is not yet implemented. When it is, the text will appear bigger, proportional to the widget. However lines still appear exactly one pixel thick, as they should:

     
    And finally, made smaller at 50%. Notice that although the lines are still one pixel thick, we have lost the space between the inner and outer rectangles. This is being drawn properly, but it is possible you might run into small issues like this at various scales, so it's something to keep in mind.

     
    Of course images cannot be upsampled without losing precision, so any images your GUI uses should originally be at the maximum scale your GUI may use.