Blog Entries posted by Josh

  1. Josh
One of the best parts of Vulkan is how shaders are loaded from precompiled SPIR-V files. This means a GLSL shader either works or it doesn't. Unlike OpenGL, there is no different outcome on Intel, AMD, or Nvidia hardware. SPIR-V files can be compiled using a couple of different utilities. I favor LunarG's compiler because it supports #include directives.
    Shader.vert:
#version 450
#extension GL_ARB_separate_shader_objects : enable
#include "VertexLayout.glsl"

layout(push_constant) uniform pushBlock
{
    vec4 materialcolor;
} pushConstantsBlock;

layout(location = 0) out vec3 fragColor;

void main()
{
    gl_Position = vec4(inPosition.xy, 0.0, 1.0);
    fragColor = inColor.rgb * pushConstantsBlock.materialcolor.rgb;
}

VertexLayout.glsl:
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexCoords0;
layout(location = 3) in vec2 inTexCoords1;
layout(location = 4) in vec3 inTangent;
layout(location = 5) in vec4 inColor;
layout(location = 6) in vec4 inBoneWeights;
layout(location = 7) in uvec4 inBoneIndices;

If the shader compiles successfully, then you don't have to worry about whether it works on different manufacturers' hardware. It just works, and it always works the same. So if someone writes a new post-processing effect they don't need to test on other hardware or worry about people asking for help when it doesn't work.
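For reference, here is a minimal sketch of the compile-and-load step, assuming the glslangValidator tool from the LunarG SDK and an already-created VkDevice; the LoadShaderModule helper name is just illustrative:

// Compile the shader to SPIR-V first, e.g. with the LunarG SDK's glslangValidator:
//   glslangValidator -V Shader.vert -o Shader.vert.spv
// Minimal sketch of loading the resulting .spv file into a VkShaderModule.
#include <vulkan/vulkan.h>
#include <fstream>
#include <vector>

VkShaderModule LoadShaderModule(VkDevice device, const char* path)
{
    // Read the whole binary file into memory
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    size_t size = (size_t)file.tellg();
    std::vector<char> code(size);
    file.seekg(0);
    file.read(code.data(), size);

    // SPIR-V is passed to Vulkan as an array of 32-bit words
    VkShaderModuleCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = size;
    info.pCode = reinterpret_cast<const uint32_t*>(code.data());

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, nullptr, &module);
    return module;
}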
    You can try it yourself with these files:
    Shaders.zip
  2. Josh
    In Vulkan all shader uniforms are packed into a single structure declared in a GLSL shader like this:
layout(push_constant) uniform pushBlock
{
    vec4 color;
} pushConstantsBlock;

You can add more values, but the shaders all need to use the same structure, and it needs to be declared exactly the same inside the program.
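For illustration, the matching declaration on the C++ side might look something like the sketch below. The VkShaderGlobals name comes from the SetShaderGlobals() code further down; the push constant range setup is an assumption about how the pipeline layout would declare it:

// Hypothetical C++ mirror of the GLSL push constant block above.
// The memory layout must match the shader exactly.
struct VkShaderGlobals
{
    float color[4]; // matches "vec4 color" in pushBlock
};

// The same range also has to be declared when the pipeline layout is created:
VkPushConstantRange pushrange = {};
pushrange.stageFlags = VK_SHADER_STAGE_ALL;
pushrange.offset = 0;
pushrange.size = sizeof(VkShaderGlobals);

VkPipelineLayoutCreateInfo layoutinfo = {};
layoutinfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutinfo.pushConstantRangeCount = 1;
layoutinfo.pPushConstantRanges = &pushrange;
// vkCreatePipelineLayout(device, &layoutinfo, nullptr, &pipelineLayout);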
Like everything else in Vulkan, these values are set inside a command buffer. But shader values like this are likely to change constantly, many times each frame, so how do you handle that? The answer is to keep a pool of command buffers and retrieve an available one whenever this operation needs to be performed.
void Vk::SetShaderGlobals(const VkShaderGlobals& shaderglobals)
{
    VkCommandBuffer commandbuffer;
    VkFence fence;
    commandbuffermanager->GetManagedCommandBuffer(commandbuffer, fence);

    VkCommandBufferBeginInfo beginInfo = {};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;

    vkBeginCommandBuffer(commandbuffer, &beginInfo);
    vkCmdPushConstants(commandbuffer, pipelineLayout, VK_SHADER_STAGE_ALL, 0, sizeof(shaderglobals), &shaderglobals);
    vkEndCommandBuffer(commandbuffer);

    VkSubmitInfo submitInfo = {};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &commandbuffer;

    vkQueueSubmit(devicequeue[0], 1, &submitInfo, fence);
}

I now have a rectangle that flashes on and off based on the current time, which is fed in through a shader uniform structure. Now at 1500 lines of code.
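The GetManagedCommandBuffer() implementation is not shown here, but a minimal sketch of the idea might look like this, assuming each pooled buffer is paired with a fence and the command pool was created with VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT; the ManagedCommandBuffer type is hypothetical:

#include <vulkan/vulkan.h>
#include <vector>

// Each command buffer is paired with a fence; the buffer is only reused once
// its fence has been signaled, meaning the previous submission has finished.
struct ManagedCommandBuffer
{
    VkCommandBuffer commandbuffer;
    VkFence fence;
};

bool GetManagedCommandBuffer(VkDevice device, std::vector<ManagedCommandBuffer>& pool,
                             VkCommandBuffer& commandbuffer, VkFence& fence)
{
    for (auto& item : pool)
    {
        // VK_SUCCESS means the GPU is done with this buffer's last submission
        if (vkGetFenceStatus(device, item.fence) == VK_SUCCESS)
        {
            vkResetFences(device, 1, &item.fence);
            vkResetCommandBuffer(item.commandbuffer, 0);
            commandbuffer = item.commandbuffer;
            fence = item.fence;
            return true;
        }
    }
    return false; // no free buffer; a real implementation would allocate a new one
}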
    You can download my command buffer manager code at the Leadwerks Github page:
  3. Josh
    I am surprised at how quickly Vulkan development is coming together. The API is ridiculously verbose, but at the same time it eliminates a lot of hidden states and implicit behavior that made OpenGL difficult to work with. I have vertex buffers working now. Vertices in the new engine will always use this layout:
struct VkVertex
{
    float position[3];
    float normal[3];
    float texcoords0[2];
    float texcoords1[2];
    float tangent[3];
    unsigned char color[4];
    unsigned char boneweights[4];
    unsigned char boneindices[4];
};

Note there are no longer vertex binormals, as these are calculated in the vertex shader, with the assumption that the texture coordinates have no shearing. There are two sets of UV coordinates available to use. Up to 256 bones per mesh are supported.
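For illustration, a sketch of how this layout might be described to Vulkan, with attribute locations matching VertexLayout.glsl; offsets assume the struct is tightly packed:

// Binding and attribute descriptions matching the VkVertex struct above.
VkVertexInputBindingDescription binding = {};
binding.binding = 0;
binding.stride = sizeof(VkVertex);
binding.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;

// location, binding, format, offset
VkVertexInputAttributeDescription attributes[] = {
    { 0, 0, VK_FORMAT_R32G32B32_SFLOAT, offsetof(VkVertex, position) },
    { 1, 0, VK_FORMAT_R32G32B32_SFLOAT, offsetof(VkVertex, normal) },
    { 2, 0, VK_FORMAT_R32G32_SFLOAT,    offsetof(VkVertex, texcoords0) },
    { 3, 0, VK_FORMAT_R32G32_SFLOAT,    offsetof(VkVertex, texcoords1) },
    { 4, 0, VK_FORMAT_R32G32B32_SFLOAT, offsetof(VkVertex, tangent) },
    { 5, 0, VK_FORMAT_R8G8B8A8_UNORM,   offsetof(VkVertex, color) },
    { 6, 0, VK_FORMAT_R8G8B8A8_UNORM,   offsetof(VkVertex, boneweights) },
    { 7, 0, VK_FORMAT_R8G8B8A8_UINT,    offsetof(VkVertex, boneindices) },
};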
    I am creating a few internal classes for Vulkan, out of necessity, and the structure of the new renderer is forming. It's very interesting stuff:
class VkMesh
{
public:
    Vk* environment;
    VkBuffer vertexBuffer;
    VmaAllocation allocation;
    VkBuffer indexBuffer;
    VmaAllocation indexallocation;

    VkMesh();
    ~VkMesh();
};

I have hit the memory management part of Vulkan. Something that used to be neatly done for you is now pushed onto the developer for no apparent reason. I think this is really pointless because we're all going to end up using a bunch of open-source helper libraries anyway. It's like they are partially open-sourcing the driver.

    You can't just allocate memory buffers as you wish. From vulkan-tutorial.com:
    Nvidia explains it visually. It is better to allocate a smaller number of memory blocks and buffers and split them up:

    I added the Vulkan Memory Allocator library and it works. I honestly have no idea what it is doing, but I am able to delete the Vulkan instance with no errors so that's good.
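For illustration, a minimal sketch of creating the vertex buffer in VkMesh through VMA, assuming a VmaAllocator named allocator has already been created for the device and vertexcount is known:

#include "vk_mem_alloc.h"

// VMA groups many small allocations into large memory blocks behind the scenes.
VkBufferCreateInfo bufferInfo = {};
bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
bufferInfo.size = sizeof(VkVertex) * vertexcount;
bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY; // device-local memory

VkBuffer vertexBuffer = VK_NULL_HANDLE;
VmaAllocation allocation = nullptr;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &vertexBuffer, &allocation, nullptr);

// ...later, in ~VkMesh():
// vmaDestroyBuffer(allocator, vertexBuffer, allocation);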
Shared contexts are also working, so we can have multiple windows, just like in the OpenGL renderer:

  4. Josh
When a window in Vulkan resizes, you have to manually delete about a dozen objects and then recreate them with the new size. It's unbelievably complicated. They've pushed all the driver complexity onto the application in an effort to simplify the job of writing drivers. I can see the advantage to this, because OpenGL drivers in the past were always inconsistent, but it is still shocking how many little details they expose in Vulkan. Just resizing a window and swapping the screen buffer involves fences, semaphores, and all kinds of issues.
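For illustration, a rough sketch of that recreation sequence is below; the object names are illustrative and the exact set of resources depends on the renderer:

// Wait until the GPU is done with the old resources before destroying them
vkDeviceWaitIdle(device);

for (auto fb : framebuffers) vkDestroyFramebuffer(device, fb, nullptr);
for (auto view : swapchainImageViews) vkDestroyImageView(device, view, nullptr);
vkDestroySwapchainKHR(device, swapchain, nullptr);

// Query the new surface size and recreate everything with it
VkSurfaceCapabilitiesKHR caps;
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicaldevice, surface, &caps);
// CreateSwapchain(caps.currentExtent); CreateImageViews(); CreateFramebuffers(); ...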
    So it is with great pride that I present my RESIZABLE VULKAN WINDOW OF AWESOME POWER!
    vktest2.zip
    Now I will drink CHOSEN ONE ENERGY DRINK!

    In spite of its issues, I kind of like Vulkan, and I don't think it will take long to get the renderer running completely on this new API. I guess Nvidia's raytracing extensions can be used with VK pretty easily too, so that will be interesting in the future.
    The next step is to get multiple shared contexts working.
  5. Josh
    Two days and 823 lines of code later, I present to you the Vulkan triangle of awesomeness, running in our engine:

    Here are my thoughts on Vulkan:
It's ridiculously verbose. You have to specify every little detail of the rasterizer, there are a million classes to create, and every little variable has to be exactly right. There's really no reason for this, because 90% of the code is just something you copy and paste.
Shaders can use GLSL, which seems very weird, but it makes things easier for us. The GLSL validator tool is used to precompile shaders into SPIR-V code that works across all hardware, which is very nice. This means shader code can finally be made closed-source, if you want to. Shaders are compiled into binary .spv files.
    After walking through the steps of setting rendering up in Vulkan, it is more clear to me what is going on under the hood inside the graphics drivers for OpenGL.
    I tried running the app on both Nvidia and Intel graphics, and got the same exact results each time. Vulkan should provide more consistent results across different vendor hardware, and might not require cross-hardware testing. That's a big plus.
    Khronos could have easily supplied a middle layer of C++ code to standardize this. The fact you have to look up all this stuff on a bunch of github repositories and third-party tutorials is awful. It pretty much guarantees that everyone who uses Vulkan is going to do it through a commercial game engine like ours.
    You can download my test application here:
    vktest.zip
The application will display a lot of errors when you close the window because resources are not being freed correctly yet. The program will probably crash if you resize or maximize the window because this has not been accounted for yet. I think it is still a big challenge to build a full Vulkan renderer that does everything our OpenGL renderer does, but the fact that we can reuse all our GLSL code helps a lot. Stay tuned!
  6. Josh
    The latest design of my OpenGL renderer using bindless textures has some problems, and although these can be resolved, I think I have hit the limit on how useful an initial OpenGL implementation will be for the new engine. I decided it was time to dive into the Vulkan API. This is sort of scary, because I feel like it sets me back quite a lot, but at the same time the work I do with this will carry forward much better. A Vulkan-based renderer can run on Windows, Linux, Mac, iOS, Android, PS4, and Nintendo Switch.
    So far my impressions of the API are pretty good. Although it is very verbose, it gives you a lot of control over things that were previously undefined or vendor-specific hacks. Below is code that initializes Vulkan and chooses a rendering device, with a preference for discrete GPUs over integrated graphics.
VkInstance inst;
VkResult res;
VkDevice device;

VkApplicationInfo appInfo = {};
appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
appInfo.pApplicationName = "MyGame";
appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
appInfo.pEngineName = "TurboEngine";
appInfo.engineVersion = VK_MAKE_VERSION(1, 0, 0);
appInfo.apiVersion = VK_API_VERSION_1_0;

// Get extensions
uint32_t extensionCount = 0;
vkEnumerateInstanceExtensionProperties(nullptr, &extensionCount, nullptr);
std::vector<VkExtensionProperties> availableExtensions(extensionCount);
vkEnumerateInstanceExtensionProperties(nullptr, &extensionCount, availableExtensions.data());
std::vector<const char*> extensions;

VkInstanceCreateInfo createInfo = {};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.pApplicationInfo = &appInfo;
createInfo.enabledExtensionCount = (uint32_t)extensions.size();
createInfo.ppEnabledExtensionNames = extensions.data();
#ifdef DEBUG
createInfo.enabledLayerCount = 1;
const char* DEBUG_LAYER = "VK_LAYER_LUNARG_standard_validation";
createInfo.ppEnabledLayerNames = &DEBUG_LAYER;
#endif

res = vkCreateInstance(&createInfo, NULL, &inst);
if (res == VK_ERROR_INCOMPATIBLE_DRIVER)
{
    std::cout << "cannot find a compatible Vulkan ICD\n";
    exit(-1);
}
else if (res)
{
    std::cout << "unknown error\n";
    exit(-1);
}

//Enumerate devices
uint32_t gpu_count = 1;
std::vector<VkPhysicalDevice> devices;
res = vkEnumeratePhysicalDevices(inst, &gpu_count, NULL);
if (gpu_count > 0)
{
    devices.resize(gpu_count);
    res = vkEnumeratePhysicalDevices(inst, &gpu_count, &devices[0]);
    assert(!res && gpu_count >= 1);
}

//Sort list with discrete GPUs at the beginning
std::vector<VkPhysicalDevice> sorteddevices;
for (int n = 0; n < devices.size(); n++)
{
    VkPhysicalDeviceProperties deviceprops = VkPhysicalDeviceProperties{};
    vkGetPhysicalDeviceProperties(devices[n], &deviceprops);
    if (deviceprops.deviceType == VkPhysicalDeviceType::VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
    {
        sorteddevices.insert(sorteddevices.begin(), devices[n]);
    }
    else
    {
        sorteddevices.push_back(devices[n]);
    }
}
devices = sorteddevices;

VkDeviceQueueCreateInfo queue_info = {};
unsigned int queue_family_count;
for (int n = 0; n < devices.size(); ++n)
{
    vkGetPhysicalDeviceQueueFamilyProperties(devices[n], &queue_family_count, NULL);
    if (queue_family_count >= 1)
    {
        std::vector<VkQueueFamilyProperties> queue_props;
        queue_props.resize(queue_family_count);
        vkGetPhysicalDeviceQueueFamilyProperties(devices[n], &queue_family_count, queue_props.data());
        if (queue_family_count >= 1)
        {
            bool found = false;
            for (int i = 0; i < queue_family_count; i++)
            {
                if (queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
                {
                    queue_info.queueFamilyIndex = i;
                    found = true;
                    break;
                }
            }
            if (!found) continue;

            float queue_priorities[1] = { 0.0 };
            queue_info.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
            queue_info.pNext = NULL;
            queue_info.queueCount = 1;
            queue_info.pQueuePriorities = queue_priorities;

            VkDeviceCreateInfo device_info = {};
            device_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
            device_info.pNext = NULL;
            device_info.queueCreateInfoCount = 1;
            device_info.pQueueCreateInfos = &queue_info;
            device_info.enabledExtensionCount = 0;
            device_info.ppEnabledExtensionNames = NULL;
            device_info.enabledLayerCount = 0;
            device_info.ppEnabledLayerNames = NULL;
            device_info.pEnabledFeatures = NULL;

            res = vkCreateDevice(devices[n], &device_info, NULL, &device);
            if (res == VK_SUCCESS)
            {
                VkPhysicalDeviceProperties deviceprops = VkPhysicalDeviceProperties{};
                vkGetPhysicalDeviceProperties(devices[n], &deviceprops);
                std::cout << deviceprops.deviceName;
                vkDestroyDevice(device, NULL);
                break;
            }
        }
    }
}
vkDestroyInstance(inst, NULL);
  7. Josh
    My last NASA project is complete. There's a physics bug in Leadwerks 4.6 that will get resolved this weekend. Starting Monday I am going to focus on the new engine again and move us forward so we can release in 2020. I am really looking forward to getting back in the game.
  8. Josh
    The new game engine needs to roll out with some top-notch examples showing off what it can do. Here's what I want:
First-person shooter
Offroad racing game
Space shoot-em-up side-scroller
Side-scroller platformer similar to the Contra PlayStation game

Now, what I could use your help with is finding good example games on YouTube or Steam that I can start designing these samples around. Post your ideas below!
  9. Josh
    This is something I typed up for some colleagues and I thought it might be useful info for C++ programmers.
    To create an object:
shared_ptr<TypeID> type = make_shared<TypeID>(constructor args…)

This is pretty verbose, so I always do this:
auto type = make_shared<TypeID>(constructor args…)

When all references to the shared pointer are gone, the object is instantly deleted. There are no garbage collection pauses, and deletion is always instant:
auto thing = make_shared<Thing>();
auto second_ref = thing;
thing = NULL;
second_ref = NULL; //poof!

Shared pointers are fast and thread-safe. (Don't ask me how.)
    To get a shared pointer within an object’s method, you need to derive the class from “enable_shared_from_this<SharedObject>”. (You can inherit a class from multiple types, remember):
class SharedObject : public enable_shared_from_this<SharedObject>

And you can implement a Self() method like so, if you want:
shared_ptr<SharedObject> SharedObject::Self()
{
    return shared_from_this();
}

Casting a type is done like this:
auto bird = dynamic_pointer_cast<Bird>(animal);

Dynamic pointer casts will return NULL if the animal is not a bird. Static pointer casts don't have any checks and are a little faster I guess, but there's no reason to ever use them.
    You cannot call shared_from_this() in the constructor, because the shared pointer does not exist yet, and you cannot call it in the destructor, because the shared pointer is already gone!
    Weak pointers can be used to store a value, but will not prevent the object from being deleted:
auto thing = make_shared<Thing>();
weak_ptr<Thing> thingwptr = thing;
shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //creates a new shared pointer to "thing"

auto thing = make_shared<Thing>();
weak_ptr<Thing> thingwptr = thing;
thing = NULL;
shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!

If you want to set a weak pointer's value to NULL without the object going out of scope, just call reset():
auto thing = make_shared<Thing>();
weak_ptr<Thing> thingwptr = thing;
thingwptr.reset();
shared_ptr<Thing> another_ref_to_thing = thingwptr.lock(); //returns NULL!

Because no garbage collection is used, circular references can occur, but they are rare:
auto son = make_shared<Child>();
auto daughter = make_shared<Child>();
son->sister = daughter;
daughter->brother = son;
son = NULL;
daughter = NULL; //nothing is deleted!

The problem above can be solved by making the sister and brother members weak pointers instead of shared pointers, thus removing the circular references.
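A minimal sketch of that fix, using weak pointers for the sibling links:

//The sibling links no longer keep each other alive
class Child
{
public:
    weak_ptr<Child> sister;
    weak_ptr<Child> brother;
};

auto son = make_shared<Child>();
auto daughter = make_shared<Child>();
son->sister = daughter;
daughter->brother = son;
son = NULL;
daughter = NULL; //both objects are deleted now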
    That’s all you need to know!
  10. Josh
    An update is available on the beta branch on Steam with a few bug fixes.
    I'm going to release 4.6 with the current features because a lot of bugs have been fixed since 4.5 and we're overdue for an official release. 4.7 will add a new vehicle system, character crouching physics, and some other things, and will be out later this year.
  11. Josh
    I have not gone in several years because everything we were doing revolved around Steam, and it just didn't seem very important. But this year I had some business to attend to so I spent the last three days in San Francisco.
    I still have a lot of friends in the game industry, and the reaction to my plans for the new engine was very positive. A few years ago people would have groaned at the idea of another engine, but it seems they are now bored with technology and very open to something new. The angle we are taking plays well to the audience. Basically all my predictions about how to sell a new game engine in 2020 were confirmed.
    We do need to have tip-top examples to show off next year, and that starts with good artwork. At this point I am probably only planning to show a first-person shooter, an offroad racing game, and then whatever the NASA team comes up with. And in order to get that done in time, I need to start planning, now!
    I feel very focused on what needs to happen, what is important, and what is not. My idea of what I want is so clear. It doesn't seem that hard to complete.
  12. Josh
    A new update is available on the beta branch on Steam. This adds numerous bug fixes. The Linux build of the editor is compiled with Ubuntu 16.04 and the engine libraries and executables are compiled with Ubuntu 18.04. Linux users, please let me know how this works for you.
  13. Josh
    A new update is available for Leadwerks Game Engine 4.6 beta. This fixes many bugs.
Slow kinematic joint rotation
Heightmap import flipped
Map switching crashes in VR
Project Manager cancel button bug
The Zone DLC map failing to load
Ball joints not working

Take a look at the bug reports forum to see all the recent fixes in the engine and documentation.
    This is a full update, with new builds for Windows and Linux, for C++ and Lua games. You can opt into the beta branch on Steam to get the update.
  14. Josh
    I'm in DC this week helping the folks at NASA wrap up some projects. I'm going to move back to a supportive role and focus on development of Leadwerks 4.6 and the new engine, and I am helping them to hire some programmers to replace me. We found some very talented people who I am confident will do a fantastic job, and I can't wait to see what they create using Leadwerks Game Engine.
    I helped a team using Leadwerks at NASA get through some big milestones and expand. I hope that someday soon we will be able to tell the story of what happened, because it really has been an amazing experience with some really awesome people. I think it would make a nice movie.
I will start working hard on Leadwerks 4.6 next Monday, and during March I will finish up my last projects so I can devote all my time to Leadwerks. I'm actually really looking forward to long days of doing nothing but engine coding. We're also going to hold a game tournament this summer, with prizes. It was worth spending time to help NASA because we picked up a lot of new business customers and it helped focus development of the new engine in a direction everyone seems very happy about, but I need to get back to doing what I love best, which is building great software for you.
  15. Josh
Previously I talked about array textures acting as "bindless" textures, but there is an actual OpenGL extension that allows a shader to access any texture without the stupid texture binding / slot convention, which only guarantees OpenGL 4.0 shaders a minimum of 16 texture units. Implementation was surprisingly easy, although Mac hardware apparently does not support this extension. When combined with the multi-draw commands in OpenGL 4.3, and some other tricks, it is possible to render multiple sets of objects in one single draw call. Below you can see six instances of three different objects, with different materials applied to them, all rendered in one single command, for ultimate performance.
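For illustration, the usage pattern of the ARB_bindless_texture extension looks roughly like this sketch; how the handle reaches the shader (uniform buffer, SSBO, etc.) is up to the engine:

// Instead of binding the texture to a slot, a 64-bit handle is made resident
// and passed to the shader through a buffer.
GLuint64 handle = glGetTextureHandleARB(texture);
glMakeTextureHandleResidentARB(handle);
// ...store "handle" in a buffer the shader reads; with the extension enabled,
// GLSL can declare sampler types inside uniform blocks and use the handle directly.

// Combined with glMultiDrawElementsIndirect (OpenGL 4.3), many objects with
// different materials can be submitted in a single draw call.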
    This is basically the final goal of the whole crazy architecture I've been working on for over a year.
    I will test this a bit more, release an update for the new engine beta, and then at that point I think it will be time to move everything over to Vulkan / Metal.

  16. Josh
    The clustered forward renderer in Leadwerks 5 / Turbo Game Engine required me to implement a texture array to store all shadow maps in. Since all shadow maps are packed into a single 3D texture, the shader can access all required textures outside of the number of available texture units, which only gives 16 guaranteed slots.
    I realized I could use this same technique to pack all scene textures into a few arrays and completely eliminate the overhead of binding different textures. In order to do this I had to introduce some restrictions. The max texture size, by default, is 4096x4096. Only two texture formats, RGBA and DXT5, are supported. Other texture formats will be converted to RGBA during loading. If a texture is smaller than 1024x1024, it will still take up a layer in the 1024x1024 texture array.
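For illustration, a sketch of what packing textures into one array texture looks like in OpenGL, assuming variables like mipcount, layercount, size, layer, and pixels are provided by the loader:

// Every layer shares the same size and format, which is why the restrictions above exist.
GLuint atlas;
glGenTextures(1, &atlas);
glBindTexture(GL_TEXTURE_2D_ARRAY, atlas);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipcount, GL_RGBA8, size, size, layercount);

// Copy each loaded texture into its own layer of the array:
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, size, size, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);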
    This also makes texture lookups in shaders quite a bit more complicated.
    Before:
fragColor *= texture(texture0, texcoords0);

After:

fragColor *= texture(textureatlases[materialdefs[materialid].textureslot[0]], vec3(texcoords0, materialdefs[materialid].texturelayer[0]));

However, making shaders easy to read is not a priority in my design. Performance is. When you have one overarching design goal these decisions are easy to make.
    Materials can now access up to 256 textures. Or however many I decide to allow.
    The real reason for this is it will help support my goal to render the entire scene with all objects in just one or a few passes, thereby completely eliminating all the overhead of the CPU/GPU interaction to reach 100% GPU utilization, for ultra maximum performance, to eliminate VR nausea once and for all.
    Also, I went out walking yesterday and randomly found this item in a small shop.

    There's something bigger at work here:
    Most of my money comes through Steam. Valve invented the technology the HTC Vive is based on. Gabe Newell gave me the Gigabyte Brix mini PC that the new engine architecture was invented on. The new engine makes VR performance 10x faster. If this isn't a clear sign that divine providence is on our side, I don't know what is.
  17. Josh
    Leadwerks 5 / Turbo makes extensive use of multithreading. Consequently, the API is stateless and more explicit. There is no such thing as a "current" world or context. Instead, you explicitly pass these variables to the appropriate commands.
    One interesting aspect of this design is code like that below works perfectly fine. See if you can work through it and understand what's going on:
int main(int argc, const char *argv[])
{
    //Create a model ;)
    auto box = CreateBox(nullptr);

    //Create the world
    auto world = CreateWorld();

    //Create a camera
    auto camera = CreateCamera(world);
    camera->Move(0, 0, -5);

    //Create an instance of the model in the new world
    auto model = box->Instance(world);

    //Create a window
    auto window = CreateWindow();

    //Create a rendering context
    auto context = CreateContext(window);

    while (not window->Closed())
    {
        if (window->KeyDown(KEY_ESCAPE)) window->Close();
        world->Update();
        world->Render(context);
    }
    return 0;
}
  18. Josh
It turns out GLTF is actually three different file formats. Textures can be loaded from external files or embedded in a binary .glb file, but they can also be saved in an ASCII GLTF file using base64 encoding. Having three different ways to store textures is not a good design decision, but at least it's better than the disaster called Collada. (Note to Khronos: If your file format specification has more pages than a Tom Clancy novel, it probably sucks.)
    Our GLTF loader now supports files with textures embedded in the file, in both binary (.glb) and embedded formats. We also support the Microsoft DDS extension, and support for the Microsoft LOD extension is on the way. The new editor will be able to save loaded models as GLTF files, and we can pack our own information away in our own file extensions, while keeping the file loadable by other applications.

    Additionally, the new engine now supports textures loaded from DDS, PNG, JPG, BMP, PCX, PSD, GIF, ICO, TIF, EXR, and HDR formats, as well as Leadwerks TEX files. I'd like to get a new update out soon for the new engine, and then continue working on bug fixes for Leadwerks Game Engine 4.6.

  19. Josh
Some of the Leadwerks Game Engine design was originally developed to run on PC and mobile. In order to support multiple renderers (OpenGL and OpenGL ES) I implemented a system that uses an abstract base class with an API-specific class derived from it:
Texture
OpenGLTexture

All OpenGL code was contained in the OpenGLTexture class. This worked fine, and theoretically it would have allowed us to support multiple renderers within one build, like OpenGL and DirectX. In practice it's a feature that was never used, and it created a lot of complicated class hierarchies, with functionality split between the base and derived classes.
    In the new engine, all rendering code is completely separated in a separate thread, and we have a separate class that is a stripped-down representation of the object the programmer interfaces with:
Texture
RenderTexture

When the programmer calls a command that makes a change to the Texture object, an instruction is added to a queue of commands that is sent to the rendering thread, and the change is then made to that RenderTexture object, although not instantaneously.
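For illustration, a minimal sketch of that pattern, with hypothetical names like QueueRenderCommand; the real engine's queue is more involved:

#include <functional>
#include <memory>
#include <mutex>
#include <vector>

// The rendering thread drains this queue once per frame, so the main thread
// never touches rendering objects directly.
std::mutex renderqueuemutex;
std::vector<std::function<void()>> renderqueue;

void QueueRenderCommand(std::function<void()> command)
{
    std::lock_guard<std::mutex> lock(renderqueuemutex);
    renderqueue.push_back(std::move(command));
}

class RenderTexture { public: int filter = 0; }; // rendering-thread representation

class Texture
{
public:
    std::shared_ptr<RenderTexture> rendertexture = std::make_shared<RenderTexture>();

    void SetFilter(int filter)
    {
        auto rt = rendertexture;
        // The change is applied asynchronously, before the next frame is rendered
        QueueRenderCommand([rt, filter]() { rt->filter = filter; });
    }
};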
Right now I am stripping out the derived classes and turning classes that were previously abstract into full classes. It's quite a big job to restructure a complex program like this, but it needs to be done. Even when we switch over to Vulkan / Metal I don't see us ever supporting multiple APIs within a single build, and I am glad to get rid of this aspect of the engine.
    I'm also doing the same thing for physics. An Entity object in the main thread can have a PhysicsNode object that lives in the physics thread. However, this does not get created unless there is some physics command performed on the entity, like setting the mass or adding a collision shape.
    Other stuff I want to change:
Get rid of the GetClass() / GetClassName() method.
Get rid of the Object::ModelClass etc. constants.
Replace all static class constants with global variables, i.e. WINDOW_FULLSCREEN instead of Window::Fullscreen.

It's actually best to declare constants with enum, because then they get evaluated as a constant and can be used in array declarations and switch statements. The constant only needs to be declared once, in the header, but unlike a macro it stays contained within the namespace it is declared in:
enum { MAX_PHYSICS_THREADS = 32 };

The next step is to create a usable programming SDK with models, lights, scene loading, scripting, and physics. This will allow beta testers to actually start developing games. The lack of a visual editor is a challenge, but at the same time we are now using more standard file formats like DDS and GLTF, which gives us better consistency with the output of various modeling programs. I'd like to start looking at a Lua debugger for Visual Studio Code soon. There seem to be some debuggers out there already, but I have no idea how the communication between the debugger and the game is supposed to work. I invented my own network data protocol in Leadwerks, and there isn't any standard I am aware of.
2D graphics in the new engine are quite different from Leadwerks, which used drawing commands to control what gets displayed on screen. Since the rendering all occurs asynchronously on another thread, this approach does not make sense at all anymore. I also had a problem with the GUI in this design. The GUI system uses a script with a drawing command to redraw each widget, but we don't want any Lua code running in the rendering thread.
    The solution is to make 2D graphical elements persistent objects:
auto window = CreateWindow();
auto context = CreateContext(window);
auto world = CreateWorld();

//Create some 2D graphics
auto rect = CreateRect(context, 10, 10, 200, 75, true);
rect->SetColor(0, 0, 1);

auto line = CreateLine(context, 10, 10, 200, 200);
line->SetColor(1, 0, 0);

auto text = CreateText(context, "Hello!", 0, 0, 200, 75, TEXT_CENTER);
text->SetPosition(10, 10);
text->SetFont(LoadFont("Fonts/arial.ttf", 18));

while (not window->Closed())
{
    world->Render(context);
}

Just like with an entity, you can set the variable to null to stop drawing the element.
while (not window->Closed())
{
    if (window->KeyHit(KEY_SPACE)) rect = nullptr;
    world->Render(context);
}

2D elements can have a hierarchy, so you can create one element that gets drawn on top of another:
auto rect = CreateRect(context, 10, 10, 200, 75, true);
rect->SetColor(0, 0, 1);

auto rect2 = CreateRect(rect, 4, 4, rect->size.x - 8, rect->size.y - 8, true);
rect2->SetColor(1, 1, 1);

We need the GUI working for some VR projects I want to use the new engine in soon. Once the items above are all working, that will give us everything we need to start working on the new editor.
  20. Josh
    Since the GLTF file format can pack textures into a single file with the model, I needed to implement asset loading directly from a stream:
auto stream = ReadFile("image.png");
auto tex = LoadTexture(stream);

This was interesting because I needed to add a check for each supported image type, so the loader can determine the file type from the contents instead of the file path extension. Most file formats include a string or "magic number" at the beginning of the file to indicate what type of file it is:
//BMP check
pos = stream->GetPos();
if (stream->GetSize() - pos >= 2)
{
    if (stream->ReadString(2) == "BM") isbmp = true;
}
stream->Seek(pos);

The TGA file format is weird though because it does not have one of these. It just launches straight into a header of information, already assuming the file is a TGA file. So what you have to do is read some of the values and see if they are reasonable. With a little help from the BlitzMax source code, I was able to do this:
//TGA check
pos = stream->GetPos();
tgahdr hdr;
if (stream->GetSize() - pos >= sizeof(hdr))
{
    const int TGA_NULL = 0;
    const int TGA_MAP = 1;
    const int TGA_RGB = 2;
    const int TGA_MONO = 3;
    const int TGA_RLEMAP = 9;
    const int TGA_RLERGB = 10;
    const int TGA_RLEMONO = 11;
    const int TGA_COMPMAP = 32;
    const int TGA_COMPMAP4 = 33;

    stream->Read(&hdr, sizeof(hdr));
    if (hdr.colourmaptype == 0)
    {
        if (hdr.imgtype == TGA_MAP or hdr.imgtype == TGA_RGB or hdr.imgtype == TGA_RLERGB)
        {
            if (hdr.psize == 15 or hdr.psize == 16 or hdr.psize == 24 or hdr.psize == 32)
            {
                if (hdr.width > 0 and hdr.width <= 163284 * 2)
                {
                    if (hdr.height > 0 and hdr.height <= 163284 * 2) istga = true;
                }
            }
        }
    }
}
stream->Seek(pos);

In fact the whole idea of having a list of loaders that read the file contents to determine if they are able to load the file is an idea I pulled from the design of BlitzMax. It is strange that so many good tech products have fallen away yet we are growing.
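For illustration, a sketch of that loader-list idea, with hypothetical names like textureloaders, CheckHeader, and Load:

//Each registered loader peeks at the start of the stream to decide whether it
//recognizes the format; the first one that does performs the actual load.
shared_ptr<Texture> LoadTexture(shared_ptr<Stream> stream)
{
    for (auto& loader : textureloaders) //registered PNG, BMP, TGA, DDS, ... loaders
    {
        auto pos = stream->GetPos();
        bool recognized = loader->CheckHeader(stream); //e.g. the "BM" or TGA checks above
        stream->Seek(pos); //rewind before the real load
        if (recognized) return loader->Load(stream);
    }
    return nullptr; //unknown format
}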
  21. Josh
    I'm now able to load materials from GLTF files. These can use external textures or they can use textures packed into a GLTF binary file. Because we have a standardized material specification, this means you can download GLTF files from SketchFab or Turbosquid, and your model materials will automatically be loaded, all the time. There's no more generating materials or messing around trying to figure out which texture is the normal or specular map. An extension exists for DDS texture support, fortunately.
    Here are the preliminary results.
     
  22. Josh
    So, most of December was eaten up on some NASA VR projects. There was a conference last week in Seattle that I attended for a couple of days. Then I had meetings in northern California and Arizona.
    Unfortunately, I can't really talk much about what I am doing with those. Rest assured I am working on a plan to grow the company so we can provide better products and support for you. I'm taking a hit on productivity now in order to make a bigger plan happen.
    Today is my first day back home after all that, and I now have time to focus on the software. Thanks for your patience while I get this all sorted out.
  23. Josh
    I realized there are two main ways a plugin is going to be written, either as a Lua script or as a DLL. So I started experimenting with making a JSON file that holds the plugin info and tells the engine where to load it from:
    { "plugin": { "title": "Game Analytics", "description": "Add analytics to your game. Visit www.gameanalytics.com to create your free account.", "author": "© Leadwerks Software. All Rights Reserved.", "url": "https://www.turboengine.com", "library": "GameAnalytics.dll" } } { "plugin": { "title": "Flip Normals", "description": "Reverse faces of model in Model Editor.", "author": "© Leadwerks Software. All Rights Reserved.", "url": "https://www.turboengine.com", "scriptfile": "FlipNormals.lua" } } I like this because the plugin info can be loaded and displayed in the editor without actually loading the plugin.
    I also wanted to try using a JSON file to control script properties. For example, this file "SlidingDoor.json" goes in the same folder as the script and contains all the properties the engine will create when the script is attached to an entity:
    { "script": { "properties": { "enabled": { "label": "Enabled", "type": "boolean", "value": true, "description": "If disabled the door will not move until it is enabled." }, "distance": { "label": "Distance", "type": "float", "value": [1,0,0], "description": "Distance the door should move, in global space." }, "opensound": { "label": "Open Sound", "type": "sound", "value": null, "description": "Sound to play when door opens." }, "closedelay": { "label": "Close Delay", "type": "integer", "value": 2000, "minvalue": 0, "description": "The number of milliseconds a door will stay open before closing again. Set this to 0 to leave open." } } } } I like that it is absolutely explicit, and it allows for more information than the comments allow in Leadwerks 4. There is actually official tools for validating the data. The json data types map very closely to Lua. However, it is more typing than just quickly writing a line of Lua code.
    While we're at it, let's take a look at what a JSON-based scene file format might look like:
    { "scene": { "entities": [ { "type": "Model", "name": "main door", "id": "1c631222-0ec1-11e9-ab14-d663bd873d93", "path": "Models/Doors/door01.gltf", "position": [0,0,0], "euler": [90,0,0], "rotation": [0,0,0,1], "scale": [1,1,1], "matrix": [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]], "mass": 10, "color": [1,1,1,1], "static": false, "scripts": [ { "path": "Scripts/Objects/Doors/SlidingDoor.lua", "properties": { "distance": [1,0,0], "movespeed": 5 } }, { "path": "Scripts/Objects/Effects/Pulse.lua" } ] } ] } }  