Everything posted by Crazycarpet

  1. I get what you're saying, but the two physics libraries are very similar in how you implement them; switching the system out would be a rather simple process. Plus, I would assume Josh uses some kind of higher-level wrapper for the physics APIs that is used in both Turbo and Leadwerks, so changing one would likely be nearly as simple as a copy and paste to the other with some minor changes. (That is a guess; I don't know how Turbo's physics were done.) If Newton does the job I'd say leave it... but it seems like it's causing headaches, and with all the robust, free physics APIs out there these days there is no reason to stick with an under-featured and under-documented one.
  2. I've been using Bullet in a lot of projects lately, and it's really come a long way in the last couple of years. Very fast, more than accurate enough, fully featured (unlike Newton), and open source... The documentation is also decent, especially stacked up beside Newton's. If you are planning to switch, I'd imagine Bullet would be a pretty quick and painless one. PhysX is good too, but honestly I prefer Bullet; it has more options. Performance-wise it leaves Newton in the dust; the only downside is that the rigid body simulations may not be quite as stable, but I'm sure they're more than good enough for Leadwerks' needs. Multi-threading physics simulations with Bullet is also very easy, and the source code comes with tons of examples. (A minimal world setup is sketched below.)
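     For what it's worth, a minimal sketch of standing up a Bullet world; the classes are Bullet's real ones, but the fixed 60 Hz step loop is just an example:

     #include <btBulletDynamicsCommon.h>

     int main()
     {
         // Standard Bullet boilerplate: collision config, dispatcher,
         // broadphase, and constraint solver feed the dynamics world.
         btDefaultCollisionConfiguration config;
         btCollisionDispatcher dispatcher(&config);
         btDbvtBroadphase broadphase;
         btSequentialImpulseConstraintSolver solver;
         btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);

         world.setGravity(btVector3(0.f, -9.81f, 0.f));

         // Step the simulation; Bullet subdivides internally at a fixed rate.
         for (int i = 0; i < 600; ++i)
             world.stepSimulation(1.0f / 60.0f);

         return 0;
     }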
  3. Pretty sure Josh said he's not working on LE4's editor because it's not carrying over to LE5.
  4. The executable in your game, the .exe file, is built from the C++ source code found in "My Documents/Leadwerks/Projects/<My Project>/Source/". As you know, in C++ you have both header files (".h") and cpp files (".cpp"). Header files are generally for declarations: they are where you tell the compiler what functions, classes, and methods you are defining, but generally do not give them a body. C++ files are where the definitions go, along with any code that is "static" (local) to that cpp file. (The static keyword has different implications when used in class declarations; a tutorial can explain this, just don't be confused if you come across it.) To use the functions, classes, and methods that you declared in a header file from some C++ file, you must first include it:

     #include "MyHeaderFile.h"

     Be careful when you're including header files in header files. Sometimes this is good practice (I'm not going to go into detail), but often it can lead to circular dependencies. Again, I'm not going to go into detail on this, but look into forward declarations and when they're allowed. Long story short: assuming you've actually used the code you've added somewhere and brought the file into the project, the next time you build your project in Visual Studio (or whatever IDE you're using) it will be compiled into the executable. Confused about where to start? main.cpp contains the program's entry point, but Josh has projects set up so App.h/App.cpp provide you with a very straightforward entry point for your game. I would recommend looking into some C++ tutorials, because writing it can be frustrating at first, but once you learn about all the tools the language provides it is one of the most powerful languages out there. Interested in communication between C++ and Lua? Check out the Lua C API for tutorials on how to use the stack to communicate between the languages; it can be confusing at first, but I assure you it's quite simple. Shoot me a PM, or add me on Steam, if you need a hand or some examples with this. I also made an interesting tool that can help you take advantage of ToLua++ to automatically expose your C++ classes, variables, and functions to Lua; add me on Steam if you'd like help setting up that program. I'm sorry this post isn't very descriptive; I wanted to explain the process and the most common "gotchas" simply, without getting into a potentially confusing discussion about the language itself. (See the header/source sketch below.)

     Some debugging tips for those pesky linker errors:
     1. Check for any declarations in header files that lack definitions. (Visual Studio's IntelliSense will underline these declarations in green.) A call to an undefined function will produce a linker error, though IntelliSense is usually good at telling you exactly what is wrong in this case.
     2. Check for circular dependencies.
     3. If you're using 3rd-party libraries, check if you forgot to add the ".lib" file to your project's "Linker --> Input" box in "Project Settings".
     Getting an error more-or-less explaining that a symbol is already defined in <>.obj? Look into include guards!
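     To make the header/source split and include guards concrete, here's a minimal sketch (the file and function names are made up for illustration):

     // MyHeaderFile.h -- declarations only, wrapped in an include guard
     // so including the file twice doesn't redefine anything.
     #ifndef MYHEADERFILE_H
     #define MYHEADERFILE_H

     void PrintGreeting(); // declaration: no body here

     #endif

     // MyHeaderFile.cpp -- definitions live here and are compiled once.
     #include "MyHeaderFile.h"
     #include <iostream>

     void PrintGreeting()
     {
         std::cout << "Hello from the header/source split!" << std::endl;
     }

     // main.cpp -- any file that includes the header can call the function.
     #include "MyHeaderFile.h"

     int main()
     {
         PrintGreeting();
         return 0;
     }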
  5. I'm too lazy to write the full code, but you could do a pick downwards to see if there's 30 ft of no collisions (or at least to confirm the actor isn't touching the ground), then set a variable in your character, 'falling', to true and store their current y position... In your UpdateWorld loop for this actor, check if the player is 'falling', subtract the current 'y' position from the 'y' position stored when they were marked as 'falling', and if that difference is 30 ft or over, kill 'em. Edit: You should code an IsOnGround() method for your actor, and use it in the UpdateWorld loop to set 'falling' back to false if they are marked as 'falling' but IsOnGround() is true. (Rough sketch below.)
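     Something like this, as a rough sketch; IsOnGround(), Kill(), and the exact Leadwerks calls are assumptions on my part, so adjust to the real API:

     // Hypothetical actor that tracks fall distance and kills on a 30+ unit drop.
     class FallDamageActor : public Actor
     {
         bool falling = false;
         float fallStartY = 0.0f;

         virtual void UpdateWorld()
         {
             float y = entity->GetPosition(true).y; // world-space y (assumed API)

             if (!falling && !IsOnGround())
             {
                 falling = true;
                 fallStartY = y; // remember where the fall began
             }
             else if (falling && IsOnGround())
             {
                 if (fallStartY - y >= 30.0f) Kill(); // fell 30 ft or more
                 falling = false;
             }
         }

         bool IsOnGround(); // e.g. a short downward pick from the actor's feet
         void Kill();       // hypothetical: apply the death however you like
     };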
  6. This is really impressive Josh, can't wait for the release. Still, I feel like the instanced rendering is doing the heavy lifting here; I'd love to see how much faster LE5 handles animated meshes than LE4. Perhaps a demo of this in the future? Also, I thought you said LE4 has frustum culling when I was complaining about GPU occlusion culling?
  7. The reason you'd want to multi-thread the command process is for situations where big, new, powerful GPUs are bored because the CPU's one thread can't send them commands fast enough to utilize them to the fullest extent. That's not a fair analogy: so long as your GPU can handle it, why would you not want to throw more work at it? Modern GPUs (10 series, etc.) can certainly handle it; a great GPU can handle anything a single core on your CPU can throw at it with ease, so you want to throw more at it. This is the most common bottleneck in games these days with how powerful GPUs are getting. The better your GPU, the more these optimizations will help; it's really planning for the future, because as time goes on you'll see more and more improvement from this type of multi-threading. That's why DX12 and Vulkan moved towards it. Anyways, like I said, it isn't usually necessary, but it would be optimal; just food for thought so you consider this design if you move towards a Vulkan renderer. It'd be a shame to use Vulkan and just move all the rendering to one thread, instead of using all available threads for command buffer generation.
  8. Again, Doom doesn't do multi-threaded command generation... why would its Vulkan renderer be faster than its OpenGL one? They've had years to optimize OpenGL drivers, so of course OpenGL will be at least as fast in a single-threaded environment. It's not magic, it's physics at that point... Vulkan can use multiple threads to generate command buffers, more at a time; OpenGL can only do one at a time. A threaded renderer would indisputably be faster; that's just the reality of it. As time goes on and GPUs get more powerful, a Vulkan renderer that generates command buffers on multiple threads would pull even further ahead, because not only are you sending more work to the GPU due to the threaded command buffer generation, the GPU would also be able to handle any work you throw at it. With high-end cards today you will see big performance gains; where you wouldn't is with integrated cards... but that shouldn't be a priority. Furthermore, in Vulkan you can genuinely submit draw work from multiple threads without the driver funneling it through the main thread; this is one highlight of Vulkan that only DirectX 12 shares. Metal is planning this too; I have not read whether it is already the case in Metal or if it's just a future plan.
  9. Doom doesn't use a multi-threaded renderer. Of course Vulkan isn't going to magically make things faster on its own; it gives you the ability to do it... In OpenGL you don't directly write to command buffers, so you can't split the work up between threads. Vulkan in itself does not do any multi-threading for you; it just gives you the tools to design fast multi-threaded renderers that were not possible before. I'm not saying this is necessary; your design will be great because the game loop does not have to wait for the renderer. I'm just saying that with Vulkan you could get maximum performance, and you could still keep the rendering separate from the game loop too; then you would end up with rendering that is both faster and independent. Just spit-balling ideas, because it sounds like you're trying to make LE as fast as possible, and this new API lets you do what only DX12 could previously, without being locked to Windows. This optimization would indisputably make LE's renderer way faster, which is perfect for VR. The only question is whether it is necessary: is LE fast enough without it in the situations it's designed for? No sense in writing a big, complex renderer if the engine is fast enough as is. Edit: Also keep in mind that Nvidia's OpenGL drivers are extremely fast and complex; AMD's are not. On AMD cards Vulkan does "magically" make things faster just by implementing it, because their driver team went above and beyond on their Vulkan drivers.
  10. The benefit of the multi-threaded APIs is that every thread has its own command pool, and each thread can write to a command buffer, so you can use any available threads to fill command buffers. They are in the end submitted together, yes, but getting to the point where all command buffers are good to go is way faster; that's why they were designed this way. In the end, less time is spent waiting for one CPU thread to write all the command buffers. Nvidia has a great document about this: https://developer.nvidia.com/sites/default/files/akamai/gameworks/blog/munich/mschott_vulkan_multi_threading.pdf (A bare-bones sketch of the per-thread pool idea follows.)
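     To make that concrete, a bare-bones sketch of one-command-pool-per-thread recording; it assumes a created VkDevice, a queue family index, and an active render pass already exist (all that setup is omitted), and RecordDrawCalls is a hypothetical worker function:

     #include <vulkan/vulkan.h>

     // One pool + one secondary command buffer per worker thread. Pools are
     // not thread-safe, which is exactly why each thread gets its own.
     struct ThreadData { VkCommandPool pool; VkCommandBuffer cmd; };

     void SetupThreadData(VkDevice device, uint32_t queueFamily, ThreadData& td)
     {
         VkCommandPoolCreateInfo poolInfo = {};
         poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
         poolInfo.queueFamilyIndex = queueFamily;
         vkCreateCommandPool(device, &poolInfo, nullptr, &td.pool);

         VkCommandBufferAllocateInfo allocInfo = {};
         allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
         allocInfo.commandPool = td.pool;
         allocInfo.level = VK_COMMAND_BUFFER_LEVEL_SECONDARY;
         allocInfo.commandBufferCount = 1;
         vkAllocateCommandBuffers(device, &allocInfo, &td.cmd);
     }

     void RecordDrawCalls(ThreadData& td) // runs on its own thread
     {
         VkCommandBufferInheritanceInfo inherit = {};
         inherit.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
         // inherit.renderPass would point at the active render pass (omitted).

         VkCommandBufferBeginInfo begin = {};
         begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
         begin.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
         begin.pInheritanceInfo = &inherit;

         vkBeginCommandBuffer(td.cmd, &begin);
         // ... vkCmdBindPipeline / vkCmdDraw for this thread's slice of the scene ...
         vkEndCommandBuffer(td.cmd);
     }

     The primary command buffer then gathers the secondaries with vkCmdExecuteCommands inside its render pass, and a single vkQueueSubmit sends everything off; the Nvidia slides linked above walk through this same pattern.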
  11. Very cool, but this is still rendering separately on a thread rather than multi-threaded rendering. No matter how you cut it, in GL the heavy work can't be spread across multiple threads, so your GPU is always bored waiting for the under-used CPU to send it work; although in GL this is as good as it's going to get, which is good enough. Still like your MoltenVK idea the best. Either way, it is neat to be able to control the frame rate of physics and game logic separately from rendering.
  12. Can't wait to see what the future holds for Leadwerks. You will be able to make way better use of the CPU's threads with Vulkan, so that'll be fun (if it happens). Don't forget to always use RenderDoc when you're changing up the renderer. Best tool ever made, I swear... although I'm sure you've used it already.
  13. Nice, I guess it makes sense they both return a function; I just never thought they'd be used together. Good to know. If std::bind is returning a std::function<void()>, you likely don't need to wrap the lambda in a call to std::bind() either. Keep in mind std::bind is mainly useful for binding member functions; with a lambda it's redundant, since the lambda is already callable. (Quick comparison below.)
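     A minimal comparison of the two, with made-up Widget/SetColor names:

     #include <functional>

     struct Widget
     {
         int color = 0;
         void SetColor(int c) { color = c; }
     };

     int main()
     {
         Widget w;
         // std::bind shines for member functions: bind the object and argument.
         std::function<void()> viaBind = std::bind(&Widget::SetColor, &w, 255);
         // A lambda does the same job and needs no extra wrapping.
         std::function<void()> viaLambda = [&w]() { w.SetColor(255); };
         viaBind();
         viaLambda();
         return 0;
     }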
  14. You should just use a lambda expression?

     Color4 color(255, 0, 0, 255); // I forgot what structure LE's colors are.
     AddInstruction( [this, color]() { this->color = color; } );
     AddInstruction( [this]() { this->positiondata->Copy(); } ); // Or whatever you're doing for the function...

     It might pay off to just use lambdas and std::function; this is the most common practice for thread "job pools". It is likely to be more or less exactly the same, and while neither is pretty to read, using a lambda and its capture list along with std::vector<std::function<void()>> to hold these functions gives you a lot more freedom in the long run. It is not going to limit you any more, since in the lambda you can edit public member variables without a setter function; for private members you would still need a setter, but that's obvious. For bind you need a setter regardless of whether the member is public or private. I generally do a design similar to this: https://github.com/SaschaWillems/Vulkan/blob/master/base/threadpool.hpp Sascha released it under the MIT license; you should use it. It is more or less exactly what you need. (A stripped-down sketch of the idea is below.)
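     For the general shape (this is not Sascha's code), a stripped-down sketch of a std::function job queue drained by one worker thread; every name here is made up:

     #include <condition_variable>
     #include <functional>
     #include <mutex>
     #include <queue>
     #include <thread>

     class JobQueue
     {
         std::queue<std::function<void()>> jobs;
         std::mutex mtx;
         std::condition_variable cv;
         bool done = false;
         std::thread worker{ [this]() { Run(); } }; // started last: members above are ready

         void Run()
         {
             for (;;)
             {
                 std::function<void()> job;
                 {
                     std::unique_lock<std::mutex> lock(mtx);
                     cv.wait(lock, [this]() { return done || !jobs.empty(); });
                     if (done && jobs.empty()) return;
                     job = std::move(jobs.front());
                     jobs.pop();
                 }
                 job(); // run outside the lock so producers aren't blocked
             }
         }

     public:
         void AddInstruction(std::function<void()> job)
         {
             { std::lock_guard<std::mutex> lock(mtx); jobs.push(std::move(job)); }
             cv.notify_one();
         }

         ~JobQueue()
         {
             { std::lock_guard<std::mutex> lock(mtx); done = true; }
             cv.notify_one();
             worker.join();
         }
     };

     Anything a lambda can capture can be queued: queue.AddInstruction([this, color]() { this->color = color; }); and so on.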
  15. Lol, CryEngine just has the right idea: if you're only going to offer one of the two, view frustum culling makes more sense. Perfect world? Offer both; it'd take like 20 lines of code to add it. The difference is Unreal Engine provides view frustum culling, so you get most of the performance gains even with occlusion culling disabled. It is an option, not the only solution. I really hope LE users are planning to make a ghost game, or this isn't going to work for them.
  16. Euler angles are obtained from the quaternion like so:

     Vector3 Quaternion::GetEulerAngles() const
     {
         float yy = y * y;
         float t0 = -2.f * (x * z + w * y);
         t0 = t0 > 1.f ? 1.f : t0;
         t0 = t0 < -1.f ? -1.f : t0;
         return Vector3(
             Math::ToDegrees(std::asin(t0)),
             Math::ToDegrees(std::atan2(2.f * (x * y - w * z), -2.f * (yy + z * z) + 1.f)),
             Math::ToDegrees(std::atan2(2.f * (y * z - w * x), -2.f * (x * x + yy) + 1.f))
         );
     }

     So yeah, what Josh said. If you just incremented pitch, yaw, and roll directly, you'd experience gimbal lock.
  17. I mean, it's not in Unity, UE4, or CryEngine; it seems like a big worry. I personally am not worried, because I don't plan on making titles I want to sell with LE, but I feel it would dissuade many people from purchasing the engine... it's kind of a big issue. There are other ways to accomplish culling, and to do it very quickly. I don't think many AAA titles would do this (aside from Wolfenstein II, which was a mess, but they somehow combated this issue, likely with portals), if any, for this reason. Culling is an important optimization, so simply "disabling" it isn't a great option unless you have something like view frustum culling to fall back on at the minimum. Unity uses a complex algorithm; idk how it works, they don't really say... but it's not asynchronous. UE4 simply uses the scene depth and the bounds of an object (so, more or less, view frustum culling; also synchronous). CryEngine simply uses frustum culling. I'm not trying to say you should model LE after those engines; LE is unique and great in its own right. But there's a reason no AAA engines use this method: it's ugly and it doesn't work! You can't call that behavior "working". Why is view frustum culling not favored? I guarantee you it's faster than iterating over every pixel; the bounding boxes and the view frustum already exist, so it's not like you're allocating memory. And sure, what if an object's behind a wall? Well... luckily the depth test will discard its fragments before they do any expensive shading work, so it'll never be an issue performance-wise. Sure beats the current method in the engine. (A sketch of the test is below.)
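     For scale, here's roughly all the per-object work view frustum culling needs, assuming you've already extracted the six planes (normals pointing inward) from the view-projection matrix; the types here are made up:

     #include <array>

     struct Plane { float a, b, c, d; };     // plane equation: ax + by + cz + d = 0
     struct AABB  { float min[3], max[3]; }; // world-space bounding box

     // An AABB is outside the frustum if it lies entirely behind any plane.
     // Test the box corner furthest along each plane's normal (the "p-vertex").
     bool InFrustum(const std::array<Plane, 6>& frustum, const AABB& box)
     {
         for (const Plane& p : frustum)
         {
             float x = p.a >= 0.f ? box.max[0] : box.min[0];
             float y = p.b >= 0.f ? box.max[1] : box.min[1];
             float z = p.c >= 0.f ? box.max[2] : box.min[2];
             if (p.a * x + p.b * y + p.c * z + p.d < 0.f)
                 return false; // fully behind this plane: culled
         }
         return true; // inside or intersecting: draw it
     }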
  18. That sounds very expensive. Unless the culling shader avoids creating new wavefronts because the compiler can flatten the branches and act on many pixels simultaneously, all you're guaranteeing is that it happens in linear time... Either way, even if it is faster, most people are going to have to disable culling because of this ugly situation, so in practice is it really "faster" if most people can't use it because it doesn't suit their needs? Even if you don't change this, it'd be nice to also have view frustum culling so you could quickly reject candidates that don't fit within the view frustum; if they aren't in the frustum, you don't have to go any further with them, because you already know they aren't visible.
  19. 0.o That's kind of ugly; culling isn't a very expensive process... maybe it'd be worth considering doing this on the CPU, on the main thread, in the future. Having to disable culling altogether to prevent situations like this ultimately defeats any performance gains the odd user will see, because most will have to disable it, and you'd only see slightly smaller gains from doing the culling synchronously... I'd bet the CPU culling on a decent-sized scene would be negligible, almost immeasurable (depending on the algorithm, I guess). You could also eliminate things by distance, etc., and make sure hidden entities are skipped; try to "eliminate" an entity from having to go through culling in the first place and you'll probably not notice a difference overall. If it's simply view frustum culling using a matrix, the operation would be crazy fast no matter where you do it. I don't know how complex your culling algorithm is. Are you just doing view frustum culling? Either way, the process should be fairly cheap... why not do it before the renderer draws each entity?
  20. That sounds like a perfect solution. You shouldn't have to do much for smart pointers to work with ToLua++; they are simply a class, and I'd be surprised if ToLua++ couldn't handle them out of the box (assuming you don't have the std:: prefix in the pkg files). http://lua-l.lua.narkive.com/JEUvLxvs/tolua-question Looks like it'd be quite easy to come up with a solution.
  21. Yeah, but it doesn't have nil... so if you do something like access an out-of-range table element, instead of returning nil it will raise an exception. Having nil is also what I rely on for my Lua callback system in my engine (although null might be distinguishable from false on the stack in Squirrel too). Squirrel looks like it's come a long way since I last saw it, so I take back what I said; I'd go with Squirrel over Python. Still, that's a big change for not a big difference. Not to mention I'm not seeing how it would be any easier for an auto-complete feature than Lua: you won't be creating your C++ classes in Squirrel, you'll be exposing them through the stack, so it's not like you can parse the code files for auto-completion. If a switch has to be done though, Squirrel looks sweet.
  22. If you're going Squirrel, you're better off with Python... why settle for Squirrel when it's a worse version of the same thing? (Granted, it is more lightweight.) I think the Squirrel syntax is ugly relative to Lua or Python. I just personally love Lua because it is really, really fast and communication with C is extremely easy. It seems unnecessary to change the engine's scripting language solely for the autocomplete feature. Plus, could you not do a hack where you just execute the Lua file, silently ignoring errors, and then generate auto-complete info from the object's metatable, which contains all its methods and members? I feel like you could find a sloppy way to make this work using a separate Lua state... might not be the easiest solution, but likely easier than changing languages entirely. (A sketch of the metatable walk is below.)
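     A sketch of that metatable walk using the Lua C API from C++; DumpCompletions and the assumption that the object lives in a global are mine:

     #include <cstdio>
     #include <lua.hpp>

     // Print the keys of a global object's metatable -- roughly the member
     // and method names an auto-complete list would want to show.
     void DumpCompletions(lua_State* L, const char* objectName)
     {
         lua_getglobal(L, objectName);            // push the object
         if (lua_getmetatable(L, -1))             // push its metatable, if any
         {
             lua_pushnil(L);                      // first key for lua_next
             while (lua_next(L, -2) != 0)         // pops key, pushes key + value
             {
                 if (lua_type(L, -2) == LUA_TSTRING) // only convert real strings
                     printf("%s.%s\n", objectName, lua_tostring(L, -2));
                 lua_pop(L, 1);                   // pop value, keep key for next
             }
             lua_pop(L, 1);                       // pop the metatable
         }
         lua_pop(L, 1);                           // pop the object
     }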
  23. It would actually be really great to be able to draw widgets in the editor... maybe leave a little blank space for 'em. Although I feel like this could be done regardless, so long as the editor uses LE's UI/window system. Or if the widget sidebar was a scrollable panel, and you could add elements to it?