
Crazycarpet
Members · Posts: 283 · Joined · Last visited

Everything posted by Crazycarpet

  1. I thought this was the case too, as the 0 ms iteration time seemed odd; however, I used n each time it was assigned (via print, etc.) and this is still the case. Keep in mind the STL will generally be fast, as it takes advantage of tricks and ideas that have been thought of by many people before us over the years. If we think of an optimization technique, odds are it's been thought of by someone before and implemented in the C++ standard library.
  2. Bad news! I found out that for some reason optimizations were disabled in Release mode. When I went back and enabled compiler optimizations, the std::list took the lead once again. I had a feeling something was off... the standard library containers are very, very fast for what they are. All in all though, good linked list implementation! You can certainly continue to make it faster while maintaining the added safety too.

     With <int>:
     STL Insertion: 68
     LW Insertion: 69
     STL Iteration: 0
     LW Iteration: 15

     With <MyStruct>:
     STL Insertion: 84
     LW Insertion: 100
     STL Iteration: 0
     LW Iteration: 16

     The change of caching auto end = stllist.end(); in place of using stllist.end() directly in the loop can be reverted, because the compiler hoists that call automatically when optimizations are enabled.
  3. STL iteration is faster if you do: auto _end = stllist.end(); (obviously BEFORE the Time::GetCurrent() call), then make your loop look like this: for (auto it = stllist.begin(); it != _end; it++). Otherwise you're creating a new end iterator every iteration, which normally wouldn't matter, but 1 million iterator creations add up. This is where your version implicitly has the advantage; I bet if you made the test case use a smaller sample, the STL container done the original way would be faster. The STL container is faster because it doesn't do your error checking, which is why it's almost immeasurably faster. So far I can't poke holes in the insertion performance improvements. There are a lot of memory leaks to be fixed though, if I'm reading the code correctly. Edit: Note that when compiler optimizations are enabled for speed (O2), the whole auto _end = myList.end() thing is done by the compiler automatically.
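     The two loop shapes can be sketched like this (a minimal, self-contained example; the stllist name and the timing harness from the thread are not reproduced here, only the iterator-caching pattern):

     ```cpp
     #include <cassert>
     #include <list>

     int main() {
         std::list<int> stllist{1, 2, 3, 4, 5};

         // Plain form: in an unoptimized build, end() is re-evaluated
         // on every pass, constructing a fresh iterator each time.
         long sum_a = 0;
         for (auto it = stllist.begin(); it != stllist.end(); ++it)
             sum_a += *it;

         // Hoisted form: the end iterator is computed once, before the loop.
         // With -O2 most compilers perform this hoist automatically.
         long sum_b = 0;
         const auto _end = stllist.end();
         for (auto it = stllist.begin(); it != _end; ++it)
             sum_b += *it;

         assert(sum_a == sum_b && sum_a == 15);
         return 0;
     }
     ```

     In a Debug build the hoisted form avoids the repeated iterator construction; in Release the two forms typically benchmark identically, which matches the results above.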
  4. I'd assume it's because the list of this type doesn't exist at the time; the concept itself is flawed. You'd need a list member, then in the constructor create an iterator for the list member. Edit: This is because of the template nature. Edit #2: Same issue with the constructor, I don't know what I was thinking... this will be hard to implement; look at how the standard does it using the auto keyword. I'd really re-think your design before trying to make a custom container type to fight against the standard. C++ containers are the way they are for a VERY good reason. How about using (the now deprecated) std::iterator? http://en.cppreference.com/w/cpp/iterator/iterator See the Range example and how they create a custom iterator type.
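     For reference, a hand-rolled iterator in the spirit of that cppreference Range example might look like this (a sketch only; the Range name and its members are illustrative, not something from the standard or from the code being discussed):

     ```cpp
     #include <cassert>

     // A minimal integer range with its own nested iterator type.
     // Because begin()/end() return a type the compiler can see,
     // range-based for loops work on it directly.
     class Range {
         int from_, to_;
     public:
         Range(int from, int to) : from_(from), to_(to) {}

         class iterator {
             int value_;
         public:
             explicit iterator(int value) : value_(value) {}
             int operator*() const { return value_; }
             iterator& operator++() { ++value_; return *this; }
             bool operator!=(const iterator& other) const {
                 return value_ != other.value_;
             }
         };

         iterator begin() const { return iterator(from_); }
         iterator end() const { return iterator(to_); }
     };

     int main() {
         int sum = 0;
         for (int v : Range(1, 5)) // visits 1, 2, 3, 4
             sum += v;
         assert(sum == 10);
         return 0;
     }
     ```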
  5. Josh is using a std::list; in linked lists the shift doesn't occur. In vectors it occurs to keep the data contiguous; lists do not guarantee contiguity in any way. This is why he's using a list instead of a vector: he's not worried about fast access or iteration. He's interested in the constant time for removal in the middle of the list, whereas removal in the middle of a vector, as you said, causes copies. +1 for the "There is no 'the fastest'" comment. Truer words have never been spoken. Josh is comparing the difference between the underlying std::list::erase and std::list::remove... remove has higher complexity, as it traverses the whole list removing every element that matches, whereas erase uses an iterator position (or an index, in a vector) to remove at an already-known location. remove is so much slower because the location of the item isn't known. Consider: std::list<Item*> myList; The function declarations look like: myList::remove(Item* pItem); // It has to find which elements == pItem... whereas with myList::erase(iter<Item*> pIter); // The location is already known!! (p.s.: I know iter isn't the underlying type; pseudocode.) Note that std::vector doesn't have a member that uses the underlying std::remove algorithm, although it'd be easy to implement. This method would likely be faster in a vector due to the contiguous nature allowing for range-based loops without iterators.
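     The complexity difference between the two member functions can be shown directly (a minimal sketch using int instead of Item* for simplicity):

     ```cpp
     #include <cassert>
     #include <list>

     int main() {
         std::list<int> myList{10, 20, 30, 40};

         // erase(): the position is already known via an iterator,
         // so unlinking the node is O(1) -- no search is needed.
         auto it = myList.begin();
         ++it;                  // points at 20
         it = myList.erase(it); // removes 20 in constant time

         // remove(): only the value is known, so the list is traversed
         // and every element equal to the argument is removed -- O(n).
         myList.remove(40);

         assert(myList.size() == 2);
         assert(myList.front() == 10 && myList.back() == 30);
         return 0;
     }
     ```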
  6. This looks pretty much done! Are you planning to update the "Learn" button in the near future so it takes you to the new API reference? It would also be nice if Search were at least updated to prioritize classes. For instance, if you search "Asset" the Asset class is shown 3rd instead of first, which is kind of ugly. I personally think a quick-search would be nice that didn't take tutorials into account but searched first for classes, then methods, etc. Or sort the results into tutorials, classes, methods, variables. Overall great work though, huge improvement.
  7. So I know this is like the 3rd one I've written... but I use ToLua++ in over 4 of my projects (1 not related to LE) and writing/updating pkg files is a tremendous headache, so this 2nd attempt at a ToLua++ binding generator finally solves all my issues. https://bitbucket.org/Codeblockz/toluapkggenerator2 Same as my first one, except it has many naive flaws fixed: it detects the ends of class/namespace scopes to prevent some past bugs caused by the generator not knowing when a class declaration ends. It also adds support for nested namespaces, i.e.:

     namespace NS1 {
         namespace NS2 {
             namespace NS3 {
                 void DoStuff(); //lua
             }
         }
     }

     The Lua usage would be: NS1.NS2.NS3.DoStuff()

     It also adds support for, and improves the handling of, many misc situations.

     Enums (remember that enums ignore namespaces). Example:

     namespace SomeNamespace {
         enum MyEnum { //lua
             EnumOne,
             EnumTwo
         };
     }

     Lua usage:

     MyEnum.EnumOne
     MyEnum.EnumTwo

     Note how the namespace is ignored. (NOTE: I don't THINK this would work on the Leadwerks base code until I apply the ToLua++ bug workaround that I did to ToLuaPkgGenerator 1.)
  8. Yeah this big flaw plagues Unity engine as well, I think if Josh came up with a solution it'd be a nice leg up over Unity in that regard.
  9. Off topic, but where can I find ICE physics engine? The one from the PEEL 1.1 version. Edit: Nvm, found it... lol
  10. I would imagine they have it configured to destroy the host using the Lua garbage collector... What you can try doing is printing out the "enet" table and seeing whether it has "enet.host_destroy", as that would follow the general style they're going for, since the C++ version would be like: enet_host_destroy(components.host); If it's not there, you could definitely expose it yourself (push it to the "enet" module) and then manually handle deletion... or even not use lua-enet altogether and generate your own bindings with ToLua++ and a pkg generator like the one I or Josh released on here (I think it would require adding //lua comments to the enet header files). Then you'd have to ensure you manually destroyed the host though, as it wouldn't be configured to work with the GC. But yeah, if all references to the host become nil it should delete itself:

      local host = enet.host_create("localhost:6789")
      host = nil -- the host will now be garbage collected
  11. Well "The shadows checkbox in terrain/vegetation was not checked" certainly suggests he's using the terrain tool, so it's a safe bet that's most likely his problem.
  12. When he talks about the incorrect version of PhysX, I think he's referring to the change in 3.4 where they completely redid how PhysX handles rigid-body physics. This was done not long ago and differs from earlier builds of PhysX 3.4, as described here: The version in PEEL 1.01 was (I'm fairly sure) a preview version that handled rigid bodies differently and, as described in the video, it was not a very good design. The PhysX 3.4 in PEEL 1.1 is extremely different from the one in PEEL 1.01. @NewtonDynamics Any chance you could post your Newton implementation for PEEL, or fork PEEL so those who are curious can try out your version even if it's not on the official branch? (Using the PhysX version from PEEL 1.1.)
  13. Thanks, didn't even think about using a pick operation.
  14. FileSystem::StripDir shows "Failed to load page data." Camera::SetFOV through Camera::UnProject is still wrong as well, as Tumira already said.
  15. As the title states, is there a way to detect the path or name of the material the character controller is standing on in Leadwerks?
  16. That really would be nice, as no matter how hard you try, the solutions implemented most likely aren't always going to be the "best" possible solution for each physics engine. I don't think anyone is capable of knowing all the tricks and details of every physics engine. Is this planned at all? The reality is that for a test to be fair it has to be open to everyone in every way, so the results can be independently verified or falsified. But again, I think intentional bias is unlikely.
  17. Intent aside, Newton performs great in PEEL and in many cases is more accurate than PhysX, and on my computer that lacks a discrete GPU it's faster as well... I can certainly post the Excel files if anyone isn't getting the same results. What I'm more skeptical about is Bullet. I haven't worked at all with Newton so I can't speak on it, but I've done quite a lot of work with Bullet and I've never seen the funky behavior it has in some of the PEEL demos. Nonetheless, I don't think PEEL intentionally aims to make any physics engine look bad; however, it does seem to have a bit of a bias towards PhysX, as merely tweaking a few values can make the other engines provide the same results (well, at least in Bullet). That being said, it's also not fair to dismiss the results of PEEL, as it's probably the most accurate test out there that compares such a wide array of physics engines. I feel like Pierre is most experienced with PhysX (correct me if I'm wrong), which would kind of implicitly cause a slight bias in the results. But I also think he makes this clear in his statements with comments like: It can't be all bad; I mean, I personally disliked Newton for all the wrong reasons until I saw it in action in PEEL, and I'm sure others who haven't heard about Newton before would feel the same way. That being said, it would be nice to see an up-to-date comparison between PhysX 3.4 and Newton 3.14 that everyone can agree on as being "fair". Maybe in the near future this could happen?
  18. First off, amazing tool; I've been playing with it for days on end, and now with the new version. I know there's lots of speculation on performance regarding PhysX when it's "CPU bound", but I get results that suggest that's very true when using PEEL... any chance you could shed light on this? When I use my computer with a dedicated GPU, PhysX generally outperforms them all (in the average case). However, when I use my laptop that has an integrated chipset, Newton generally outperforms them all (again, in the average case), and by quite a large margin too. This seems to be the case for the vast majority of tests as well. (My last post was wrong; there are only a few demos where PhysX performs best on my laptop.) I'm just curious as to why PhysX generally performs so much better on my high-end machine, whereas Newton, although it also performs better on my high-end machine, doesn't take an immense jump in performance like PhysX does... Is PhysX that much more taxing on a low-end CPU? PhysX also simulates rigid bodies on the GPU, according to the GDC presentation. Would this not suggest that a CPU-bound PhysX would be slower? Is it fair to say PhysX performs significantly better on computers with discrete GPUs rather than ones with integrated chipsets, since it uses GPU simulation? Maybe the term "CPU bound" is inaccurate for what I'm describing.
  19. Is this on the beta branch? The demo works fine on the main branch.
  20. Oh for sure, I wasn't disputing whether or not he knows his sh*t; I mean, clearly he's extremely talented. I was just saying that I wouldn't take a test like that at face value, as I didn't realize what PEEL was at the time; I thought it was a self-made demo. Certainly not trying to say anything negative about a programmer who's capable of creating such a robust physics engine. Neat video though.
  21. Well, can't argue with that; PEEL shows via those demos that Newton truly is the best in those demos, on all of my computers, which have very different hardware... The Bullet physics results in most tests are disappointing. However, overall you've got to hand it to PhysX: in most scenarios it blows everything else out of the water... even on my computer without a dedicated GPU. I've got to say I never thought Newton was THAT good of a physics engine; very underrated, I must admit. You do tend to pay in performance for Newton's accuracy in most demos, though. The results I got from running PEEL showed that:

     - Overall, Newton gives PhysX a run for its money in accuracy in most cases, although there are still quite a few where PhysX surpasses it. (Both great physics engines.) (Will link the tests PhysX wins in a sec.)
     - When it comes to accuracy of simple rigid-body collisions, Newton is always the most accurate by far.
     - Overall, PhysX performs the best.
     - Bullet gets blown out of the water when it comes to accuracy, but slightly outperforms Newton in most cases. (I'd take accuracy over performance personally, and it's not like the difference is big.)
     - I still can't reproduce the failing soft-body chain on any machine; it works for all engines on all my PCs in PEEL.

     Of course this is a big generalization, but overall this is what I noticed... those demos from Julio are correct, can't argue with that. Another thing I found interesting about this demo is the fact that on my computer with an on-board chip, operations like raycasting are EXTREMELY expensive and PhysX is by far the slowest... but on my computer with a powerful dedicated GPU, these same operations are by far the fastest on PhysX. Will definitely be using Newton in my next project.
  22. Thanks, will check it out I'm curious now. This has been updated to the newest Newton version right?
  23. I know; I'm trying to make the point though that the tests are biased... and that video really does show it, like lol... neither Newton nor PhysX fails in the situations it's shown failing in in that video. One thing I will blindly concede, though, is the "high mass ratio" test, at least when it comes to Bullet physics; this situation is handled poorly by Bullet. But on the flip side, how often will you have a situation like that? It doesn't make a lot of sense to have in a real-time simulation. I'd rather see a test of an object of huge mass accelerating into an object of lower mass. This is the behavior you'd see in real time, and it'd be an interesting result to see! The video also makes reference to "their own test"; this implies that Bullet/Nvidia had a part in programming the implementations for these tests, and I'm fairly sure they didn't. This video shows it better: The stack doesn't simply collapse after it's activated! Even missing half its pieces, it maintains its integrity... many of these tests have source code in the description too, and you can see that in many cases the physics isn't actually disabled at any point. The "fluidness" you see isn't the structure collapsing because it was just activated; it's the force transferring from the heavy balls through each rigid body, causing the structure to collapse in a quite physically accurate manner. Like I said, those tests show that Newton is great with rigid bodies (and experimental soft bodies), nothing more... to make judgements about Bullet or PhysX from them is just inaccurate.
  24. This is simply not true, Josh... Anyone who's ever worked with Bullet or PhysX can attest to this fact. If the physics engine weren't capable of that, it'd be garbage. And how will you argue with the soft body that seems to (and does) work flawlessly in a VERY old version of Bullet, yet in Julio's test with a newer version somehow fails from a slight tug on the soft-body chain? That's just not correct... I do agree with your point that they may have physics disabled to a point, sure, but the second that ball starts moving, that stack is active; and if it's not, there are still hundreds of examples where no collision occurs and they stay balanced. I can find them for you if you'd like. You can't spit on all these AAA physics engines with proven track records because of one video that is undeniably biased.
  25. It's fairly hard to find comparisons against Newton, I'll give you that, but you can find comparisons between almost every other physics engine except Newton, because they're more popular. I'm not taking away from Newton or saying it's not great... it is. I'm saying it's naive to say it's the best because of a demo that was released by Newton Dynamics itself. Am I saying Newton isn't more stable in many scenarios? No... am I saying it's not more stable in ALL scenarios? Of course it's not. What I'm saying is that this test proves nothing in that regard, other than the fact that Julio has made a great, competitive and stable physics engine. The only tests I can find between PhysX and Newton were carried out by Newton developers, and the source code is not provided; there's nothing scientific about making a statement like that when it's nothing more than hearsay. I'm not trying to say either way what the results of a more "fair" test would be; I'm simply saying this isn't it, and if it is, provide people with the source code so they can see for themselves that this is the case. As you can see in any demo ever made like this one or the next one, or countless others: (The last video is a soft-body stress test; the chain holds up!) For the things Julio's test shows Bullet and PhysX failing at, hundreds of other demos show the opposite. The tests fail in Julio's implementation, not because of the underlying physics engine. I'm no expert, but those Keva planks look fairly stable and balanced to me, and they're stacked much higher than in the other demo.