Blog Entries posted by Josh

  1. Josh

    Articles
    Autodesk 3ds Max now supports export of glTF models, as well as a new glTF material type. The process of setting up and exporting glTF models is pretty straightforward, but there are a couple of little details I wanted to point out to help prevent you from getting stuck. For this article, I will be working with the moss rocks 1 model pack from Polyhaven.
    Getting geometry into 3ds Max is simple enough. I imported the model as an FBX file.

    To set up the material, I opened the compact material editor and set the first slot to be a glTF material.

    Press the button for the base color map, and very importantly choose the General > Bitmap map type. Do not choose OSL > Bitmap Lookup or your textures won't export at all.

    Select your base color texture, then do the same thing with the normal and roughness maps, if you have them. 3ds Max treats metal / roughness as two separate textures, although you might be able to use the same texture for both if it grabs the data from the green (roughness) and blue (metal) channels. I haven't verified this yet.

    Select the File > Export menu item to bring up the glTF export dialog. Uncheck the "Export glTF binary" option because we don't want to pack our model and textures into a single file. I don't know what the baked / original material option does, because I don't see any difference when I use it.

    At this point you should have a glTF file that is visible in any glTF model viewer.

    One slightly weird thing 3ds Max does is generate new textures for some of the maps. This is probably because it combines different channels to produce the final images. In this case, none of our textures need to be combined, so it is just a small annoyance. A .log file will be saved as well, which can be safely deleted.

    You can leave the images as-is, or you can open up the glTF file in a text editor and manually change the image file names back to the original files:
    "images": [ { "uri": "rock_moss_set_01_nor_gl_4k.jpg" }, { "uri": "M_01___Defaultbasecolortexture.jpeg" }, { "uri": "M_01___Defaultmetallicroughnesstex.jpeg" } ], Finally, we're going to add LODs using the Mesh Editor > ProOptimizer modifier. I like these settings, but the most important thing is to make sure "Keep textures" is checked. You can press the F3 key at any time to toggle wireframe view and get a better view of what the optimizer does to your mesh.

    Export the file with the same name as the full-resolution model, and add "_lod1" to the end of the file name (before the extension). Then repeat this process to save lod2 and lod3, using 25% and 12.5% for the vertex reduction value in the ProOptimizer modifier.
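    Assuming the full-resolution model was exported as mossrock1a.gltf (the name is just an example; use whatever name you exported), the finished set of files would look something like this:
    mossrock1a.gltf
    mossrock1a_lod1.gltf
    mossrock1a_lod2.gltf
    mossrock1a_lod3.gltf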
    Here is my example model you can inspect:
    mossrock1a.zip
    Now it is very easy to get 3D models from 3ds Max into your game engine.
  2. Josh
    Smoothing groups are one of the most frequently requested features for 3D World Studio, so I am happy to finally show you their implementation in Leadwerks3D. Smoothing groups are usually stored as a bitwise flag. When two faces share a vertex at a certain position, and they have one or more smoothing groups in common, their vertex normals are calculated as if the faces shared a single vertex.
     
    I first attempted to write an algorithm based on edges, which worked great for cylinders but failed for geometry where faces influence vertex normals of faces they are not connected to by an edge. I wrote another routine which uses an n*n*n loop (a big programming no-no, but sometimes it's necessary) and got good results with cylinders and spheres. Cones are a difficult case, and if you don't believe me, just create a cone in your favorite 3D modeling app and inspect the tip. I got around this problem by adding a max smoothing angle member to the brush class, which allows the smoothing algorithm to discard faces that are too 'different' from the one being processed.
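    To make the idea concrete, here is a minimal sketch of that test, assuming a hypothetical Face struct with a bitwise smoothing group mask and a per-brush maximum smoothing angle (the names here are mine, not the actual Leadwerks3D classes):
    #include <cmath>
    
    struct Vec3
    {
        float x, y, z;
        float Dot(const Vec3& v) const { return x * v.x + y * v.y + z * v.z; }
    };
    
    struct Face
    {
        unsigned int smoothinggroups; // bitwise flags, one bit per smoothing group
        Vec3 normal;                  // assumed to be normalized
    };
    
    // Returns true if two faces should smooth a vertex they share at the same position.
    bool FacesSmoothTogether(const Face& a, const Face& b, float maxsmoothangle)
    {
        // The faces must have at least one smoothing group in common.
        if ((a.smoothinggroups & b.smoothinggroups) == 0) return false;
    
        // Discard faces that are too "different" from the one being processed.
        float d = std::fmax(-1.0f, std::fmin(1.0f, a.normal.Dot(b.normal)));
        float angle = std::acos(d) * 180.0f / 3.14159265f;
        return angle <= maxsmoothangle;
    }
    When both tests pass, the two faces' normals are accumulated into a shared vertex normal and the result is normalized.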
     
    Here's the result:

     
    In order to make shapes like arches appear smoothed, the smoothing algorithm has to span multiple objects. This will be accomplished by first grabbing all the brushes in the vicinity of the brush being processed, and then searching for vertices that occupy the same position on faces that have a smoothing group in common.
     
    In the past, we've seen some pretty advanced modeling done with 3D World Studio, but the lack of smoothing groups always limited it. It will be awesome to see what people come up with, when those limits are removed and they can model exactly what they want.





  3. Josh
    The new docs system is mostly working now:
    https://www.leadwerks.com/learn?page=API-Reference_Object_Entity_SetPosition
     
    Features:
    Treeview list of all tutorials and API classes and functions.
    Alphabetical index (generated automatically from the table of contents).
    Search (with autogenerated search index).
    Switch back and forth between languages, with cookies to remember your preference.
    Entire examples are automatically selected when you click on them.

    Todo:
    Table of contents doesn't yet navigate.
    I would like to pretty-up the URLs using .htaccess rewrite rules.

     
    Documentation is being loaded from XML files. I find this easier than a database, which is quite opaque to me. You can access some of the XML files being used here. MartyJ even made a cool template for the data on his site here.
     
    At this point I think I can outsource the actual content creation to Upwork. This will involve copying and formatting our existing documentation into XML files, as well as updating all the examples to work by just using a main() function instead of the old App() class that we don't really need. I will be posting the job on that site soon.
  4. Josh
    This is just a little walk down memory lane that pleasantly shows what has led to where we are today. Some of this predates the name "Leadwerks" entirely.
     
    Cartography Shop (2003)

     
    3D World Studio (2004)

     
    Leadwerks Game Engine 2 (2008)

     
    Leadwerks Game Engine 3 (2013)

  5. Josh
    How do you like that clickbait title?
     
    I implemented analytics into Leadwerks Editor using GameAnalytics.com last month, in order to answer questions I had about user activity. Here's what I have learned.
     
    The number of active users is what I was hoping for. A LOT of people ran Leadwerks in the last month, but people don't usually use it every day, as the number of daily users is lower. The numbers we have are good though.
     
    A lot of people use the Workshop to install models and other items for their projects. Unfortunately, paid Workshop items cannot be purchased right now, as I am working with Valve to change some company financial information. I will have this running again as soon as possible.
     
    Boxes are the most common primitive created, by a huge margin, followed by cylinders, wedges, spheres, and cones, in that order. Maybe the list of available primitives should be rearranged based on this? Not a big deal, but it's interesting to see.
     
    There's definitely a disconnect between user activity and the community, so in the future I hope to encourage people using the program to register forum accounts and become active in the community.
     
    Overall, I am glad that analytics have allowed me to get a broad picture of collective user behavior so I am no longer working blindly. It's a blunt instrument, but it will be interesting to see how I can use it in the future to improve the user experience.
  6. Josh
    Leadwerks3D has built-in project management features that assist the user in creating, sharing, and publishing projects. When you start the editor for the first time, the New Project wizard is displayed. You can select a programming language and choose which platforms you want the project to support. It is possible to add project templates for new languages, too.

     
    Once a project exists, you can go into the project manager and switch projects. This will cause the editor to use the project directory as the path to load all assets from. All files will be displayed relative to the project directory. You can drag an image into the icon box to add it to the project. (Xcode users will probably notice the influence here.) The +/- buttons will allow you to import a project or remove one from the list, with an option to delete the directory (iTunes much?).

     
    I want to add more features like exporting projects into a folder or .zip file, or maybe a .werk project package the editor recognizes. The editor will have options to skip asset source files like .psd, .max, etc., and only copy relevant files, so all the hundreds of files the C++ compilers generate will be skipped. We might eventually even add integration with SVN repositories, backup systems, and team management features, but that's beyond what we need to worry about right now.
     
    A "project" in Leadwerks3D is a game or other program, and will typically contain many maps (scenes). Since this kind of defies a conventional design around file requestors and files and folders, we might not even use a traditional "Open Map" dialog. All the project's maps will be available in the "Maps" directory in the Asset Browser. I can even save a thumbnail of each map when the file is saved. To add a new map to a project, you would simply drag it into the Asset Browser, preferably in the "Maps" directory. Or maybe we will also include an "Open File" file requestor that can be used to import any map, material, texture, etc. into the project. The current design of the editor is the result of a lot of experimentation and testing, so I'll continue to play around with it as I add features.
     
    Michael Betke and I have traded a lot of projects back and forth, and this system is designed to make that task easier so we can all exchange self-contained projects. If you have any suggestions to add, post in the comments below, or in the feature requests forum.
     
    --Update--
    And here's the export screen, which is similar to the "publish" step. I need to write a parser that goes through source code to determine which files are actually needed in the project; the project template defines which file extensions to scan for.
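    As a rough illustration (not the actual editor code), a first pass at that parser could just scan each source file for quoted strings ending in one of the extensions listed by the project template:
    #include <fstream>
    #include <regex>
    #include <set>
    #include <string>
    #include <vector>
    
    // Collect quoted file names in a source file that end with one of the given extensions.
    std::set<std::string> FindReferencedAssets(const std::string& sourcepath,
                                               const std::vector<std::string>& extensions)
    {
        std::set<std::string> found;
        std::ifstream file(sourcepath);
        std::string line;
        std::regex quoted("\"([^\"]+)\""); // anything inside double quotes, e.g. "models/crate.mdl"
        while (std::getline(file, line))
        {
            for (auto it = std::sregex_iterator(line.begin(), line.end(), quoted); it != std::sregex_iterator(); ++it)
            {
                std::string value = (*it)[1];
                for (const auto& ext : extensions)
                {
                    if (value.size() >= ext.size() && value.compare(value.size() - ext.size(), ext.size(), ext) == 0)
                        found.insert(value);
                }
            }
        }
        return found;
    }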

  7. Josh
    Step 1. Add your image files into the project directory. I just added a bunch of TGA files, and they show up in the editor right away:

     
    Step 2. Right-click on a texture and select the "Generate Material" menu item.

     
    Step 3. There is no step 3. Your material is ready to use. The normal map was automatically detected and added to the material.

     
    Here it is in the material editor.

  8. Josh
    Ladies and gentlemen, come one, come all, to feast your eyes on wondrous sights and behold amazing feats! It's "Cirque des Jeux", the next Leadwerks Game Tournament!

    How does it work?  For one month, the Leadwerks community builds small playable games.  Some people work alone and some team up with others.  At the end of the month we release our projects to the public and play each other's games.  The point is to release something short and sweet with a constrained timeline, which has resulted in many odd and wonderful mini games for the community to play.
    WHEN: The tournament begins Thursday, February 1, and ends on Wednesday, February 28th at the stroke of midnight.
    HOW TO PARTICIPATE: Publish your Circus-or-other-themed game to the Games Showcase before the deadline. You can work as a team or individually. Use blogs to share your work and get feedback as you build your game.
    Games must have a preview image, title, and contain some minimal amount of gameplay (there has to be some way to win the game) to be considered entries. It is expected that most entries will be simple, given the time constraints.
    This is the perfect time to try making a VR game or finish that idea you've been waiting to make!
    PRIZES: All participants will receive a limited-edition 11x17" poster commemorating the event. To receive your prize you need to fill in your name, mailing address, and phone number (for customs) in your account info.
    At the beginning of March we will post a roundup blog featuring your entries. Let the show begin!
  9. Josh
    Vulkan gives us explicit control over the way data is handled in system and video memory. You can map a buffer into system memory, modify it, and then unmap it (giving it back to the GPU) but it is very slow to have a buffer that both the GPU and CPU can access. Instead, you can create a staging buffer that only the CPU can access, then use that to copy data into another buffer that can only be read by the GPU. Because the GPU buffer may be in-use at the time you want to copy data to it, it is best to insert the copy operation into a command buffer, so it happens after the previous frame is rendered. To handle this, we have a pool of transfer buffers which are retrieved by a command buffer when needed, then released back into the pool once that command buffer is finished drawing. A fence is used to tell when the command buffer completes its operations.
    One issue we came across with OpenGL in Leadwerks was when data was uploaded to the GPU while it was still being accessed to render a frame. You could actually see this on some cards when playing my Asteroids3D game. There was no mechanism in OpenGL to synchronize memory, so the best you could do was put data transfers at the start of your rendering code, and hope that there was enough of a delay before your drawing actually started that the memory copying had completed. With the super low-overhead approach of Vulkan rendering, this problem would become much worse. To deal with this, Vulkan uses explicit memory management with something called pipeline barriers. When you add a command into a Vulkan command buffer, there is no guarantee what order those commands will be executed in, and pipeline barriers allow you to create a point where certain commands must be executed before other ones can begin.
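    To make that concrete, here is a minimal sketch of the copy-plus-barrier step using standard Vulkan calls. The buffer handles and the exact stage and access masks are placeholder assumptions; the real values depend on how the destination buffer is used:
    #include <vulkan/vulkan.h>
    
    // Record a copy from a CPU-visible staging buffer into a GPU-only buffer, then insert a
    // pipeline barrier so shader reads wait until the transfer has completed.
    void RecordStagingCopy(VkCommandBuffer cmd, VkBuffer staging, VkBuffer gpubuffer, VkDeviceSize size)
    {
        VkBufferCopy region = {};
        region.srcOffset = 0;
        region.dstOffset = 0;
        region.size = size;
        vkCmdCopyBuffer(cmd, staging, gpubuffer, 1, &region);
    
        VkBufferMemoryBarrier barrier = {};
        barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
        barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
        barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.buffer = gpubuffer;
        barrier.offset = 0;
        barrier.size = VK_WHOLE_SIZE;
    
        vkCmdPipelineBarrier(cmd,
            VK_PIPELINE_STAGE_TRANSFER_BIT,      // commands before the barrier: the transfer
            VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, // commands after the barrier: vertex shader reads
            0, 0, nullptr, 1, &barrier, 0, nullptr);
    }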
    Here is the order of operations:
    1. Start recording a new command buffer.
    2. Retrieve a staging buffer from the pool and remove it from the pool.
    3. Copy data into the staging buffer.
    4. Insert a command to copy from the staging buffer to the GPU buffer.
    5. Insert a pipeline barrier to make sure data is transferred before drawing begins.
    6. Execute the command buffer.
    7. When the fence is completed, move all staging buffers back into the staging buffer pool.
    In the new game engine, we have several large buffers to store the following data:
    Mesh vertices
    Mesh indices
    Entity 4x4 matrices (and other info)
    A list of visible entity IDs
    Visible light information
    Skeleton animation data
    I found this data tends to fall into two categories.
    Some data is large and only some of it gets updated each frame. This includes entity 4x4 matrices, skeleton animation data, and mesh vertex and index data. Other data tends to be smaller and only concerns visible objects. This includes visible entity IDs and light information. This data is updated completely each time a new visibility set arrives.

    The first type of data requires data buffers that can be resized, because they can be very large, and more objects or data might be added at any time. For example, the vertex buffer contains all vertices that exist, in all meshes the user creates or loads. If a new mesh is loaded that requires space greater than the buffer capacity, a new buffer must be created, then the full contents of the old buffer are copied over, directly in GPU memory. A new pipeline barrier is inserted to ensure the data transfer to the new buffer is finished, and then additional data is copied.
    The second type of data is a bit simpler. If the existing buffer is not big enough, a new bigger buffer is created. Since the entire contents of the buffer are uploaded with each new visibility set, there is no need to copy any existing data from the old buffer.
    I currently have about 2500 lines of Vulkan-specific code. Calling this "boilerplate" is disingenuous, because it is really specific to the way you set your renderer up, but the core mesh rendering system I first implemented in OpenGL is working and I will soon begin adding support for textures.
     
  10. Josh
    The Industrial Cargo Model Pack is now available for only $9.95. This package includes three cargo containers and two wooden spools. All assets are certified game-ready for Leadwerks Engine. This is just some of Dave Lee's work, and there's lots more coming!
     

  11. Josh
    This is a quick blog to fill you in on some of the features planned for Leadwerks Engine 3.0.
     
    -Any entity can be a physics body. The body command set will become general entity commands, e.g. Entity.SetVelocity(). You can make an entity physically interactive with Entity.SetShape( shape ).
     
    -Any entity can have any number of scripts attached to it. The basic model script functions will be supported for all entities, plus specialized script functions for certain entities. For example, particle emitters will have an optional script function that is called when a particle is reset. This could be used to make a fire emitter "burn" a mesh. Script functions are called in sequence, so you can attach a chain of scripts to an entity to combine behaviors.
     
    -Prefabs will be supported, with the ability to drag entities together to make a hierarchy, add lights, emitters, and other entities, attach scripts, and then save a prefab.
     
    -Record screenshots and movies from the editor!
     
    -Tons of scripted behavior you can attach to your own models, and lots of scripted assets you can download. Basic AI features will be supplied.
     
    -The editor is being designed so you only need a mouse to make games. If you want to script or program or make artwork you can, but at the simplest level, the whole workflow can be controlled in the editor, with reloading assets, automated imports, and lots of drag-and-drop functionality.
     
     
    That's all for now. I can't tell you some of the coolest features, but this gives you an idea of where we are headed.
  12. Josh
    Now that we can voxelize models, enter them into a scene voxel tree structure, and perform raycasts, we can finally start calculating direct lighting. I implemented support for directional and point lights, and I will come back and add spotlights later. Here we see a shadow cast from a single directional light:

    And here are two point lights, one red and one green. Notice the distance falloff creates a color gradient across the floor:

    The idea here is to first calculate direct lighting using raycasts between the light position and each voxel:

    Then once you have the direct lighting, you can calculate approximate global illumination by gathering a cone of samples for each voxel, which illuminates voxels not directly visible to the light source:

    And if we repeat this process we can simulate a second bounce, which really fills in all the hidden surfaces:

    When we convert model geometry to voxels, one of the important pieces of information we lose is normals. Without normals it is difficult to calculate damping for the direct illumination calculation. It is easy to check surrounding voxels and determine that a voxel is embedded in a floor or something, but what do we do in the situation below?

    The thin wall of three voxels is illuminated, which will leak light into the enclosed room. This is not good:

    My solution is to calculate and store lighting for each face of each voxel.
    Vec3 normal[6] = { Vec3(-1, 0, 0), Vec3(1, 0, 0), Vec3(0, -1, 0), Vec3(0, 1, 0), Vec3(0, 0, -1), Vec3(0, 0, 1) };
    
    for (int i = 0; i < 6; ++i)
    {
        float damping = max(0.0f, normal[i].Dot(lightdir)); //normal damping
        if (!isdirlight) damping *= 1.0f - min(p0.DistanceToPoint(lightpos) / light->range[1], 1.0f); //distance damping
        voxel->directlighting[i] += light->color[0] * damping;
    }
    This gives us lighting that looks more like the diagram below:

    When light samples are read, the appropriate face will be chosen and read from. In the final scene lighting on the GPU, I expect to be able to use the triangle normal to determine how much influence each sample should have. I think it will look something like this in the shader:
    vec4 lighting = vec4(0.0f);
    lighting += max(0.0f, dot(trinormal, vec3(-1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(0.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(1.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, -1.0f, 0.0f))) * texture(gimap, texcoord + vec2(2.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, 1.0f, 0.0f))) * texture(gimap, texcoord + vec2(3.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, -1.0f))) * texture(gimap, texcoord + vec2(4.0 / texwidth, 0.0));
    lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, 1.0f))) * texture(gimap, texcoord + vec2(5.0 / texwidth, 0.0));
    This means that to store a 256 x 256 x 256 grid of voxels we actually need a 3D RGB texture with dimensions of 256 x 256 x 1536. This is 288 megabytes. However, with DXT1 compression I estimate that number will drop to about 64 megabytes, meaning we could have eight voxel maps cascading out around the player and still only use about 512 megabytes of video memory. This is where those new 16-core CPUs will really come in handy!
    I added the lighting calculation for the normal Vec3(0,1,0) into the visual representation of our voxels and lowered the resolution. Although this is still just direct lighting it is starting to look interesting:

    The last step is to downsample the direct lighting to create what is basically a mipmap. We do this by taking the average values of each voxel node's children:
    void VoxelTree::BuildMipmaps()
    {
        if (level == 0) return;
        int contribs[6] = { 0 };
        for (int i = 0; i < 6; ++i)
        {
            directlighting[i] = Vec4(0);
        }
        for (int ix = 0; ix < 2; ++ix)
        {
            for (int iy = 0; iy < 2; ++iy)
            {
                for (int iz = 0; iz < 2; ++iz)
                {
                    if (kids[ix][iy][iz] != nullptr)
                    {
                        kids[ix][iy][iz]->BuildMipmaps();
                        for (int n = 0; n < 6; ++n)
                        {
                            directlighting[n] += kids[ix][iy][iz]->directlighting[n];
                            contribs[n]++;
                        }
                    }
                }
            }
        }
        for (int i = 0; i < 6; ++i)
        {
            if (contribs[i] > 0) directlighting[i] /= float(contribs[i]);
        }
    }
    If we start with direct lighting that looks like the image below:

    When we downsample it one level, the result will look something like this (not exactly, but you get the idea):

    Next we will begin experimenting with light bounces and global illumination using a technique called cone tracing.
  13. Josh
    The environment probe feature is coming along nicely, spawning a whole new system I am tentatively calling "Deferred Image-Based Lighting". This adds image-based lighting to your scenes that accentuates the existing deferred renderer and makes for higher quality screenshots with lots of next-gen detail. The probes handle both image-based ambient lighting and reflections.
     
    Shadmar helped by implementing parallax cubemap correction. This provides an approximate volume the cubemap is rendered to and gives us a closer approximation of reflections. You can see a good side-by-side comparison of it vs. conventional cubemaps in Half-Life 2. It works best when the volume of the room is defined by the shape of the environment probe, so I changed probes to use a box for rendering instead of a sphere. The box is controlled by the object's scale, just like with decals, although I wonder if there would be a way to build these out of brushes in the future, since our CSG building tools are so convenient. In the screenshot below you can see I have added two environment probes to a map, one filling the volume of each room. Placing probes does require some amount of artistry as it isn't a 100% perfect system, but with a little experimentation it's easy to get really beautiful results.

     
    Once probes are placed you will select a menu item to build global illumination. GI will be built for all probes and the results are instantly visible. The shot below is only using an environment probe for lighting, together with an SSAO post-processing effect. Because the ambient lighting and reflections are image-based, any bright surface can act as a light even though it isn't really one.
     

     
    When combined with our direct deferred lighting, shadowed areas become much more complex and interesting. Note the light bouncing off the floor and illuminating the ceiling.
     

     
    The new feature works seamlessly with all existing materials and shaders, since it is rendered in a deferred step. It can be combined with a conventional ambient light level, or you can set the ambient light to black and just rely on probes to provide indirect lighting.
  14. Josh
    I have basic point lights working in the Vulkan renderer now. There are no shadows or any type of reflections yet. I need to work out how to set up a depth pre-pass. In OpenGL this is very simple, but in Vulkan it requires another complicated mess of code. Once I do that, I can add in other light types (spot, box, and directional) and pull in the PBR lighting shader code. Then I will add support for a cubemap skybox and reflections, and then I will upload another update to the beta.

    Shadows will use variance shadow maps by default. With these, all objects must cast a shadow, but our renderer is so fast that this is not a problem. I've had very good results with these in earlier experiments.
    I then want to complete my work on voxel-based global illumination and reflections. I looked into Nvidia RTX ray tracing, but the performance is awful even with a GeForce RTX 2080. My voxel approach should provide good results with fast performance.
    Once these features are in place, I may release the new engine on Steam as a programming SDK, until the new editor is ready.
  15. Josh
    I got my hands on the GDC showcase scene by Michael Betke, and it's much more beautiful than I thought. I used this scene to rewrite and test the vegetation rendering code. Collision isn't working yet, but it's built into the routine. The results speak for themselves.
     
    Fortunately, Pure3D will soon be offering a set of game-ready vegetation models, so you can create gorgeous landscapes like this.
  16. Josh
    Looking back at a recent blog entry, I talked briefly about my plans for 2016. My goals were the following:
    Paid Workshop items
    Release Game Launcher on Steam proper.
    More features.

     
    These goals are actually linked together. Let's focus on the amount and quality of games being released for the game launcher. Right now, we have a lot of variety of simple games, and some that are very fun, but we don't have any must-play hits yet. As long as the reviews look like this, the game launcher isn't ready to release.
     
    What can we do to facilitate better and more complex games? From what I have seen, reusable scripts have given Leadwerks users a huge boost in productivity, especially when combined with a 3D model. For example, the zombie DLC, FPS weapons, first-person player, and other reusable items have gotten a ton of use and allowed creation of many different games. Continuing to build a deeper, more robust script environment will allow developers to easily set up more advanced gameplay. The SoldierAI script and the way it breaks the bullets out into a separate script are a good example of this direction. For example, the same projectile script can be used for a turret. Our design of self-contained Lua scripts with inputs and outputs will prevent the system from getting overly complicated, as a complex C++ hierarchy of classes would.
     
    In the future, I think releasing more game-ready items in the Workshop, first with official Leadwerks scripts, and then with third-party scripts, will allow us to leverage the design of Leadwerks Game Engine and the tools like the flowgraph.
     
    Features I implement next are going to be specifically chosen because of their capacity for increasing the gameplay potential of Leadwerks games. Easy networking and a GUI come to mind.
     
    When I feel like we are really hitting our stride in terms of the games you can make with Leadwerks, that is when the game launcher will be released. Leadwerks Game Engine is all about what the user can make.
  17. Josh
    A small update has been published to the default branch of Leadwerks Game Engine on Steam. This updates the FBX converter so that the scene units (meters, feet, etc.) are read from the file and used to convert the model at the proper size. Previously, the raw numbers for position, scale, and vertex positions were read in as meters. The new importer also supports both smoothing groups and per-vertex normals, so models can be imported more reliably without having to recalculate normals.

    An error in the probe shader that only occurred when the shader was compiled for VR mode has also been fixed.
  18. Josh
    There are three low-level advancements I would like to make to Leadwerks Game Engine in the future:
    Move Leadwerks over to the new Vulkan graphics API.
    Replace Windows API and GTK with our own custom UI. This will closely resemble the Windows GUI but allow new UI features that are presently impossible, and give us more independence from each operating system.
    Compile the editor with BlitzMaxNG. This is a BMX-to-C++ translator that allows compilation with GCC (or VS, I suppose). This would allow the editor to be built in 64-bit mode.

     
    None of these enhancements will result in more or better games, and thus do not support our overarching goal. They also will each involve a significant amount of backtracking. For example, a new Vulkan renderer is pretty much guaranteed to be slower than our existing OpenGL renderer, for the first six months, and it won't even run on a Mac. A new GUI will involve lots of bugs that set us back.
     
    This is likely to be stuff that I just explore slowly in the background. I'm not going to take the next three months to replace our renderer with Vulkan. But this is the direction I want to move in.
  19. Josh
    This is something I typed up for some colleagues and I thought it might be useful info for C++ programmers.
    To create an object:
    shared_ptr<TypeID> type = make_shared<TypeID>(constructor args…)
    This is pretty verbose, so I always do this:
    auto type = make_shared<TypeID>(constructor args…)
    When all references to the shared pointer are gone, the object is instantly deleted. There are no garbage collection pauses, and deletion is always instant:
    auto thing = make_shared<Thing>();
    auto second_ref = thing;
    thing = NULL;
    second_ref = NULL; //poof!
    Shared pointers are fast and thread-safe. (Don’t ask me how.)
    To get a shared pointer within an object’s method, you need to derive the class from “enable_shared_from_this<SharedObject>”. (You can inherit a class from multiple types, remember):
    class SharedObject : public enable_shared_from_this<SharedObject>
    And you can implement a Self() method like so, if you want:
    shared_ptr<SharedObject> SharedObject::Self() { return shared_from_this(); }
    Casting a type is done like this:
    auto bird = dynamic_pointer_cast<Bird>(animal);
    Dynamic pointer casts will return NULL if the animal is not a bird. Static pointer casts don’t have any checks and are a little faster I guess, but there’s no reason to ever use them.
    You cannot call shared_from_this() in the constructor, because the shared pointer does not exist yet, and you cannot call it in the destructor, because the shared pointer is already gone!
    Weak pointers can be used to store a value, but will not prevent the object from being deleted:
    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //creates a new shared pointer to “thing”
    
    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    thing = NULL;
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!
    If you want to set a weak pointer’s value to NULL without the object going out of scope, just call reset():
    auto thing = make_shared<Thing>();
    weak_ptr<Thing> thingwptr = thing;
    thingwptr.reset();
    shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!
    Because no garbage collection is used, circular references can occur, but they are rare:
    auto son = make_shared<Child>();
    auto daughter = make_shared<Child>();
    son->sister = daughter;
    daughter->brother = son;
    son = NULL;
    daughter = NULL; //nothing is deleted!
    The problem above can be solved by making the sister and brother members weak pointers instead of shared pointers, thus removing the circular references.
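    For example, a hypothetical Child class sketched with weak pointer members:
    #include <memory>
    using namespace std;
    
    class Child
    {
    public:
        weak_ptr<Child> sister;  // weak members do not keep the other object alive
        weak_ptr<Child> brother;
    };
    
    int main()
    {
        auto son = make_shared<Child>();
        auto daughter = make_shared<Child>();
        son->sister = daughter;
        daughter->brother = son;
        son = NULL;
        daughter = NULL; // both objects are deleted now
        return 0;
    }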
    That’s all you need to know!
  20. Josh
    A new build is available on the beta branch with the following changes:
    Fixed sidepanel tab switching bug.
    Improved compatibility with Intel graphics.
    Fixed instance rendering bug with instance counts over 256.
    Added terrain hardware tessellation.

     
    To enable hardware tessellation, select the Tools > Options menu item, select the Video tab, and set the tessellation value. Terrain layers must have a displacement map to display displacement mapping. Each terrain layer has an adjustable displacement value to control the strength of the effect. Displacement maps look best when they are rendered from a high-polygon sculpt. The terrain wireframe shader has also been updated to visualize the tessellation in wireframe view. You must update your project to get the new versions of the terrain shaders.
     
    At this time, terrain will always use tessellation when it is enabled, even if no texture layers contain a displacement map. This will be changed in the future.
  21. Josh
    So far the new voxel ray tracing system I am working on is producing amazing results. I expect the end result will look like Minecraft RTX, but without the enormous performance penalty of RTX ray tracing.
    I spent the last several days getting the voxel update speed fast enough to handle dynamic reflections, but the more I dig into this the more complicated it becomes. Things like a door sliding open are fine, but small objects moving quickly can be a problem. The worst case scenario is when the player is carrying an object in front of them. In the video below, the update speed is fast, but the limited resolution of the voxel grid makes the reflections flash quite a lot. This is due to the reflection of the barrel itself. The gun does not contribute to the voxel data, and it looks perfectly fine as it moves around the scene, aside from the choppy reflection of the barrel in motion.
    The voxel resolution in the above video is set to about 6 centimeters. I don't see increasing the resolution as an option that will go very far. I think what is needed is a separation of dynamic and static objects. A sparse voxel octree will hold all static objects. This needs to be precompiled and it cannot change, but it will handle a large amount of geometry with low memory usage. For dynamic objects, I think a per-object voxel grid should be used. The voxel grid will move with the object, so reflections of moving objects will update instantaneously, eliminating the problem we see above.
    We are close to having a very good 1.0 version of this system, and I may wrap this up soon, with the current limitations. You can disable GI reflections on a per-object basis, which is what I would recommend doing with dynamic objects like the barrels above. The GI and reflections are still dynamic and will adjust to changes in the environment, like doors opening and closing, elevators moving, and lights moving and turning on and off. (If those barrels above weren't moving, showing their reflections would be absolutely no problem, as I have demonstrated in previous videos.)
    In general, I think ray tracing is going to be a feature you can take advantage of to make your games look incredible, but it is something you have to tune. The whole "Hey Josh I created this one weird situation just to cause problems and now I expect you to account for this scenario AAA developers would purposefully avoid" approach will not work with ray tracing. At least not in the 1.0 release. You're going to want to avoid the bad situations that can arise, but they are pretty easy to prevent. Perhaps I can combine screen-space reflections with voxels for reflections of dynamic objects before the first release.
    If you are smart about it, I expect your games will look like this:
    I had some luck with real-time compression of the voxel data into BC3 (DXT5) format. It adds some delay to the updating, but if we are not trying to show moving reflections much then that might be a good tradeoff. Having only 25% of the data being sent to the GPU each frame is good for performance.
    Another change I am going to make is a system that triggers voxel refreshes, instead of constantly updating no matter what. If you sit still and nothing is moving, then the voxel data won't get recalculated and processed, which will make the performance even faster. This makes sense if we expect most of the data to not change each frame.
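    A minimal sketch of that kind of trigger, using a hypothetical dirty flag per voxel region (these names are illustrative, not the engine's actual API):
    #include <vector>
    
    // Only rebuild voxel data for regions that were actually touched since the last update.
    struct VoxelRegion
    {
        bool dirty = false;
        void Rebuild() { /* revoxelize and relight this region */ dirty = false; }
    };
    
    // Called whenever something changes: an object moves, a door opens, a light turns on or off.
    void MarkDirty(VoxelRegion& region) { region.dirty = true; }
    
    // Called once per frame; does no work at all if the scene has not changed.
    void UpdateVoxels(std::vector<VoxelRegion>& regions)
    {
        for (auto& region : regions)
        {
            if (region.dirty) region.Rebuild();
        }
    }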
    I haven't run any performance benchmarks yet, but from what I am seeing I think the performance penalty for using this system will be basically zero, even on integrated graphics. Considering what a dramatic upgrade in visuals this provides, that is very impressive.
    In the future, I think I will be able to account for motion in voxel ray tracing, as well as add high-definition polygon ray tracing for sharp reflections, but it's not worth delaying the release of the engine. Hopefully in this article I showed there are many factors, and many approaches we can use to try to optimize for different aspects of the effect. For the 1.0 release of our new engine, I think we want to emphasize performance above all else.
  22. Josh

    Articles
    I recently fired up an install of Ubuntu 20.04 to see what the state of development on Linux is now, and it looks like things have improved dramatically since I first started working with Linux in 2013. Visual Studio Code looks and works great on Linux, and I was able to load the Ultra Engine source code and start compiling, although there is still much work to do. The debugger is fantastic, especially after using the Code::Blocks debugger on Linux, which was absolutely sadistic. (They're both using GDB under the hood but the interface on Code::Blocks was basically unusable.) Regardless of whatever their motivation was, Microsoft (or rather, the developers of the Atom editor VS Code was based on) has made a huge contribution to usability on the Linux desktop. 

    Setting up C++ compilation in Visual Studio Code is rather painful. You have to edit a lot of configuration files by hand to specify the exact command line GCC should use, but it is possible:
    { "tasks": [ { "type": "cppbuild", "label": "C/C++: g++ build active file", "command": "/usr/bin/g++", "args": [ "-g", "-D_ULTRA_APPKIT", "./Libraries/Plugin SDK/GMFSDK.cpp", "./Libraries/Plugin SDK/MemReader.cpp", "./Libraries/Plugin SDK/MemWriter.cpp", "./Libraries/Plugin SDK/TextureInfo.cpp", "./Libraries/Plugin SDK/Utilities.cpp", "./Libraries/Plugin SDK/half/half.cpp", "./Libraries/freeprocess/freeprocess.c", "./Libraries/s3tc-dxt-decompressionr/s3tc.cpp", "./Libraries/stb_dxt/stb_dxt.cpp", "./Classes/Object.cpp", "./Classes/Math/Math_.cpp", "./Classes/Math/Vec2.cpp", "./Classes/Math/Vec3.cpp", "./Classes/Math/Vec4.cpp", "./Classes/Math/iVec2.cpp", "./Classes/Math/iVec3.cpp", "./Classes/Math/iVec4.cpp", "./Classes/String.cpp", "./Classes/WString.cpp", "./Classes/Display.cpp", "./Classes/IDSystem.cpp", "./Classes/JSON.cpp", "./Functions.cpp", "./Classes/GUI/Event.cpp", "./Classes/GUI/EventQueue.cpp", "./Classes/Language.cpp", "./Classes/FileSystem/Stream.cpp", "./Classes/FileSystem/BufferStream.cpp", "./Classes/FileSystem/FileSystemWatcher.cpp", "./Classes/GameEngine.cpp", "./Classes/Clock.cpp", "./Classes/Buffer.cpp", "./Classes/BufferPool.cpp", "./Classes/GUI/Interface.cpp", "./Classes/GUI/Widget.cpp", "./Classes/GUI/Panel.cpp", "./Classes/GUI/Slider.cpp", "./Classes/GUI/Label.cpp", "./Classes/GUI/Button.cpp", "./Classes/GUI/TextField.cpp", "./Classes/GUI/TreeView.cpp", "./Classes/GUI/TextArea.cpp", "./Classes/GUI/Tabber.cpp", "./Classes/GUI/ListBox.cpp", "./Classes/GUI/ProgressBar.cpp", "./Classes/GUI/ComboBox.cpp", "./Classes/GUI/Menu.cpp", "./Classes/Window/LinuxWindow.cpp", "./Classes/Timer.cpp", "./Classes/Process.cpp", "./Classes/FileSystem/StreamBuffer.cpp", "./Classes/Multithreading/Thread.cpp", "./Classes/Multithreading/Mutex.cpp", "./Classes/Multithreading/ThreadManager.cpp", "./Classes/Loaders/Loader.cpp", "./Classes/Loaders/DDSTextureLoader.cpp", "./Classes/Assets/Asset.cpp", "./Classes/Plugin.cpp", "./Classes/Assets/Font.cpp", "./Classes/FileSystem/Package.cpp", "./Classes/Graphics/Pixmap.cpp", "./Classes/Graphics/Icon.cpp", "./AppKit.cpp" ], "options": { "cwd": "${workspaceFolder}" }, "problemMatcher": [ "$gcc" ], "group": { "kind": "build", "isDefault": true }, "detail": "Task generated by Debugger." } ], "version": "2.0.0" } Our old friend Steam of course looks great on Linux and runs perfectly.

    But the most amazing thing about my install of Ubuntu 20.04 is that it's running on the Windows 10 desktop:

    Hyper-V makes it easy to create a virtual machine and run Linux and Windows at the same time. Here are some tips I have found while working with it:
    First, make sure you are allocating enough hard drive space and memory, because you are unlikely to be able to change these later on. I tried expanding the virtual hard drive, and after that Ubuntu would not load in the virtual machine, so just consider these settings to be locked. I gave my new VM 256 GB hard drive space and used the dynamic memory setting to allocate memory as-needed.
    Second, by default Ubuntu is only going to give you a 1024x768 window. You can change this but it requires a little work. Open the terminal in Ubuntu and type this command:
    sudo nautilus /etc/default/grub
    This will open the file browser and select the GRUB file, which stores system settings. Open the file with a text editor. Since the file was opened from a nautilus window with super-user permissions, the text editor will also be launched with super-user permissions.
    Find this line of text:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    Change it to this. You can set the screen resolution to whatever value you prefer:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1920x1080"
    Close the text editor and the nautilus window. In the terminal, run this command:
    sudo update-grub
    Restart the virtual machine and your new screen resolution will work. However, the DPI scaling in Ubuntu seems to be not very reliable. My laptop uses a 1920x1080 screen, and at 17" this really needs to be scaled up to 125%. I could not get "fractional scaling" to work. The solution is to set your Windows desktop resolution to whatever value feels comfortable at 100% scaling. In my case, this was 1600x900. Then set Ubuntu's screen resolution to the same. When the window is maximized you will have a screen that is scaled the way you want.

    Checkpoints are a wonderful feature that allows you to easily walk back any changes you make to the virtual machine. You can see the steps I went through to install all the software I wanted to use on Linux:

    If I ever mess anything up, I can easily revert back to whatever point I want. You can probably transfer the virtual machine between computers very easily, too, although I have not tried this yet. Changing the VM disk size, however, requires that you delete all checkpoints. So like I said, don't do that. My disk size is set to 256 GB, but in reality the whole VM is only taking up about 25 GB on my real hard drive.
    The "type clipboard text" feature in Hyper-V does not work at all with Linux, so don't even try.
    Pro tip: If you ever get stuck in the virtual machine, you can press Ctrl + Alt + Left arrow to bring the focus back to the Windows desktop.
    Hyper-V isn't going to allow you to play games with Vulkan or OpenGL, but for other software development it seems to work very well. It's a bit slow, but the convenience of running Linux on the Windows desktop and being able to switch back and forth instantly more than makes up for it.
    Super Bonus Tip
    I found that regardless of how much disk space you allocate for the virtual machine, the Windows default install of Ubuntu 20.04 will still only use 12 GB on the main drive partition. If you try to resize this in the built-in tools, an error will occur, something like "Failed to set partition size on device. Unable to satisfy all constraints on the partition."
    To resize the partition you must install gparted:
    sudo apt install gparted
    When you run the application it will look like this. Right-click on the ext4 partition and select the Resize/Move menu item:

    Set the new partition size and press the Resize button. 
    Extreme Double Bonus Tip
    If at any time while running gparted you see an error like "Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space", you should press the Fix button. If you don't do this, the partition resize won't work and will display some errors.
    After the partition resize, the VM files on Windows are still only using about 21 GB, so it looks like Hyper-V doesn't allocate more disk space than it has to.
    Crazy Extra Tip
    Both VS Code and Brave browser (and probably Chrome) have an option to use a much better looking "custom titlebar" instead of the ugly GTK+ titlebar buttons.
  23. Josh
    An HTML renderer for Linux has been implemented. This adds the welcome window and full Workshop interface for Linux. This will be available soon for testing on the beta branch.
     

     

     

  24. Josh
    A new update is available on the beta branch.
     
    The FPS player script has been fixed to allow throwing of objects when the player is not holding a weapon.
     
    The script flowgraph system's behavior with prefabs has been changed. Script connections between prefabs will no longer be locked in the prefab file, and can be made in the map itself. The map version has been incremented, and maps saved with the new format will not load until you update or recompile your game. The changes in this system may have introduced some problems so please notify me of any maps that fail to load properly.
     
    You must update your project to get these changes. See the dialog when you select the File > Project Manager menu item. (Any files that are overwritten will have a backup copy made in the same directory.)
     
    You should only opt into the beta if you are comfortable using a potentially less stable version of the software that is still being tested.
  25. Josh
    I've added a textfield widget script to the beta branch, along with a new build (Lua interpreter, Windows only, at this time). The textfield widget allows editing of a single line of text. It's actually one of the more difficult widgets to implement due to all the user interaction features. Text is entered from the keyboard and may be selected with arrow keys or by clicking the mouse. A range of text can be selected by clicking and dragging the mouse, or by pressing an arrow key while the shift key is pressed.
     

     
    I had to implement an additional keyboard event. KeyDown and KeyUp events work for all keys, but KeyChar events are called only when typing results in an actual character. The ASCII code of the typed character is sent in the data parameter of the event function:

    function Script:KeyChar( charcode ) end
     
    Making the caret indicator flash on and off goes against the event-driven nature of this system, but I think it's an important visual indicator and I wanted to include it. I went through a few ideas, including a really over-engineered timer system. Finally I just decided to make the GUI call a function on the focused widget every 500 milliseconds (if the function is present in the widget's script):

    --Blink the caret cursor on and off
    function Script:CursorBlink()
        if self.cursorblinkmode == nil then self.cursorblinkmode = false end
        self.cursorblinkmode = not self.cursorblinkmode
        self.widget:Redraw()
    end
     
    All in all, the script weighs in at 270 lines of code. It does not handle cut, copy, and paste yet, and double-clicking to select the entire text does not yet consider spaces in the clicked word. The drawing function is actually quite simple, so you could easily skin this to get a different appearance and keep the same behavior.
     

    Script.caretposition=0 Script.sellen=0 Script.doubleclickrange = 1 Script.doubleclicktime = 500 function Script:Draw(x,y,width,height) local gui = self.widget:GetGUI() local pos = self.widget:GetPosition(true) local sz = self.widget:GetSize(true) local scale = gui:GetScale() local item = self.widget:GetSelectedItem() local text = self.widget:GetText() --Draw the widget background gui:SetColor(0.2,0.2,0.2) gui:DrawRect(pos.x,pos.y,sz.width,sz.height,0) --Draw the widget outline if self.hovered==true then gui:SetColor(51/255/4,151/255/4,1/4) else gui:SetColor(0,0,0) end gui:DrawRect(pos.x,pos.y,sz.width,sz.height,1) --Draw text selection background if self.sellen~=0 then local n local w local x = gui:GetScale()*8 local px = x local c1 = math.min(self.caretposition,self.caretposition+self.sellen) local c2 = math.max(self.caretposition,self.caretposition+self.sellen) for n=0,c2-1 do if n==c1 then px = x end c = String:Mid(text,n,1) x = x + gui:GetTextWidth(c) if n==c2-1 then w = x-px end end gui:SetColor(0.4,0.4,0.4) gui:DrawRect(pos.x + px,pos.y+2*scale,w,sz.height-4*scale,0) end --Draw text gui:SetColor(0.75,0.75,0.75) if text~="" then gui:DrawText(text,scale*8+pos.x,pos.y,sz.width,sz.height,Text.Left+Text.VCenter) end --Draw the caret if self.cursorblinkmode then if self.focused then local x = self:GetCaretCoord(text) gui:DrawLine(scale*8+pos.x + x,pos.y+2*scale,scale*8+pos.x + x,pos.y + sz.height-4*scale) end end end --Find the character position for the given x coordinate function Script:GetCharAtPosition(pos) local text = self.widget:GetText() local gui = self.widget:GetGUI() local n local c local x = gui:GetScale()*8 local count = String:Length(text) local lastcharwidth=0 for n=0,count-1 do c = String:Mid(text,n,1) lastcharwidth = gui:GetTextWidth(c) if x >= pos - lastcharwidth/2 then return n end x = x + lastcharwidth end return count end --Get the x coordinate of the current caret position function Script:GetCaretCoord() local text = self.widget:GetText() local gui = self.widget:GetGUI() local n local c local x=0 local count = math.min(self.caretposition-1,(String:Length(text)-1)) for n=0,count do c = String:Mid(text,n,1) x = x + gui:GetTextWidth(c) end return x end --Blink the caret cursor on and off function Script:CursorBlink() if self.cursorblinkmode == nil then self.cursorblinkmode = false end self.cursorblinkmode = not self.cursorblinkmode self.widget:Redraw() end function Script:MouseDown(button,x,y) self.focused=true if button==Mouse.Left then --Detect double-click and select entire text local currenttime = Time:Millisecs() if self.lastmousehittime~=nil then if math.abs(self.lastmouseposition.x-x)<=self.doubleclickrange and math.abs(self.lastmouseposition.y-y)<=self.doubleclickrange then if currenttime - self.lastmousehittime < self.doubleclicktime then self.lastmousehittime = currenttime local l = String:Length(self.widget:GetText()) self.caretposition = l self.sellen = -l self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true self.pressed=false self.widget:Redraw() return end end end self.lastmouseposition = {} self.lastmouseposition.x = x self.lastmouseposition.y = y self.lastmousehittime = currenttime --Position caret under mouse click self.cursorblinkmode=true self.caretposition = self:GetCharAtPosition(x) self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true self.pressed=true self.sellen=0 self.widget:Redraw() end end function Script:MouseUp(button,x,y) if button==Mouse.Left then self.pressed=false end end function Script:MouseMove(x,y) if 
self.pressed then --Select range of characters local currentcaretpos = self.caretposition local prevcaretpos = self.caretposition + self.sellen self.cursorblinkmode=true self.caretposition = self:GetCharAtPosition(x) if self.caretposition ~= currentcaretpos then self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true self.sellen = prevcaretpos - self.caretposition self.widget:Redraw() end end end function Script:LoseFocus() self.focused=false self.widget:Redraw() end function Script:MouseEnter(x,y) self.hovered = true self.widget:Redraw() end function Script:MouseLeave(x,y) self.hovered = false self.widget:Redraw() end function Script:KeyUp(keycode) if keycode==Key.Shift then self.shiftpressed=false end end function Script:KeyDown(keycode) if keycode==Key.Shift then self.shiftpressed=true end if keycode==Key.Up or keycode==Key.Left then --Move the caret one character left local text = self.widget:GetText() if self.caretposition>0 then self.caretposition = self.caretposition - 1 self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true if self.shiftpressed then self.sellen = self.sellen + 1 else self.sellen = 0 end self.widget:Redraw() end elseif keycode==Key.Down or keycode==Key.Right then --Move the caret one character right local text = self.widget:GetText() if self.caretposition<String:Length(text) then self.caretposition = self.caretposition + 1 self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true if self.shiftpressed then self.sellen = self.sellen - 1 else self.sellen = 0 end self.widget:Redraw() end end end function Script:KeyChar(charcode) local s = self.widget:GetText() local c = String:Chr(charcode) if c=="\b" then --Backspace if String:Length(s)>0 then if self.sellen==0 then if self.caretposition==String:Length(s) then s = String:Left(s,String:Length(s)-1) elseif self.caretposition>0 then s = String:Left(s,self.caretposition-1)..String:Right(s,String:Length(s)-self.caretposition) end self.caretposition = self.caretposition - 1 self.caretposition = math.max(0,self.caretposition) else local c1 = math.min(self.caretposition,self.caretposition+self.sellen) local c2 = math.max(self.caretposition,self.caretposition+self.sellen) s = String:Left(s,c1)..String:Right(s,String:Length(s) - c2) self.caretposition = c1 self.sellen = 0 end self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true self.widget:SetText(s) EventQueue:Emit(Event.WidgetAction,self.widget) end elseif c~="\r" and c~="" then --Insert a new character local c1 = math.min(self.caretposition,self.caretposition+self.sellen) local c2 = math.max(self.caretposition,self.caretposition+self.sellen) s = String:Left(s,c1)..c..String:Right(s,String:Length(s) - c2) self.caretposition = self.caretposition + 1 if self.sellen<0 then self.caretposition = self.caretposition + self.sellen end self.sellen=0 self.widget:GetGUI():ResetCursorBlink() self.cursorblinkmode=true self.widget:SetText(s) EventQueue:Emit(Event.WidgetAction,self.widget) end end