
Pierre (Members, 27 posts)

Everything posted by Pierre

  1. You lost me entirely. I need to sleep.
  2. Julio I am sorry for the LOLs and WTFs and maybe losing my sh*t a bit. John tells me you're a nice guy in the real world. Maybe this would all work a lot better if we'd meet face to face. I have nothing against you or Newton. But seriously man, I don't understand you.
  3. > glad you are coming about, we are making progress here LOL.

I am in the Twilight Zone. You posted a link to your own forum where I already wrote I would do that. I never denied it. I told you about it before doing it, as we already discussed in this very thread.

> you added the enum but then you went to the newton library and deleted the support code and replace with a tessellated convex hull.

LOL. No. http://www.codercorner.com/blog/wp-content/uploads/2017/05/wtf.png Newton does use a proper cylinder.
  4. The 3_4 files are for PhysX 3.4. The CL_XXXXX indicates the changelist used to build this specific binary. You were entirely right in a previous post that both PhysX and Newton are moving targets, that two binaries with the same name can be separated by years, and this does create issues and ambiguities here and there. I think you actually wrote we should rename the DLLs: that's exactly what we did. PEEL 1.1 contains a single PhysX 3.4 binary (the state of the trunk at time of release), but at home I support dozens of different "PhysX 3.4" versions, captured at different points in time. This is to track performance regressions, etc. They all have a different changelist number. I don't know if this answers your question.
  5. You are not off the hook. I request that we get to the bottom of your previous claim before looking at anything new. In fact here's a deal: if you can actually prove that my changes to your integration code made Newton slower or behave worse, I will take the time it takes to put back that old scene of yours with extra shapes.
  6. > it was until you removed the features you did not like. People can play my archive an see the primitive demos. they sure show cylinders and convex hulls. it was you who removed the cylinders from the archive I gave you, alone with all other shapes and joint types.

Yes, I did remove the extra shapes and extra joints. We went over this N times. I don't have time to maintain a feature that would only be exposed in Newton. This would be best demonstrated in Newton's samples. In any case this is not what I am asking for. You are now giving me a test that only exists in your version. I am looking for a test that exists in both, and performs or behaves better in your version. You cannot name one because there aren't any. There aren't any because contrary to what you claim, my changes to your integration code did not affect the performance or behavior of Newton.

> this is what I, and I think I am not wrong assuming any other honest person, would also do. -I would add a primitive type cylinder,

OMG did you even look at the code? I already have one! See the cylinder enum, struct and cap bit here: https://github.com/Pierre-Terdiman/PEEL/blob/master/PEEL/Physics/Pint.h

> -them in the caps since Physx do not do cylinder I would emulate them with a tessellated convex hull of any degree.

I already emulate cylinders with tessellated convexes for PhysX.

> -let other engines represent cylinder the best way they can.

This is already what happens!

> But that is not what you do

Yes it is!!!!!! PhysX uses a tessellated convex, Newton uses an implicit shape. This is a penalty FOR PHYSX !!!!!

> WTF you see that If you did that then PhysX will look bad both in quality and performance so you answer is to make every one else carry a penalty. Ridiculous.

The performance penalty is for PhysX. There are also scenes where Newton looks better than PhysX (e.g. the high mass ratio one) and I certainly didn't remove them or try to hide them. They are still here in both the old and the new archives. Why do you think people in this thread, who looked at both, chose Newton? This accusation makes no sense. Also, these scenes are useful for us. They show where our weaknesses are, and what we should improve. See the InitialPenetration scene for example: it used to explode in PhysX, and we did NOT hide it. But we kept an eye on that, worked out a solution, and now it does not explode anymore. I do not remove all the tests where PhysX does not "shine".

> Now cylinder stacks, tires and anything requiring a round cylinder of high quality has to pay the same high penalty than PhysX does.

This is complete and utter bullsh*t. The cylinder stack uses a cheap implicit shape for Newton, a highly tessellated convex for PhysX. The penalty is for PhysX.

> This is not just one test, they are many more alike mr Pierre.

I'm still waiting for you to name a single one. Keep a copy of your old version. Overwriting it would be a convenient way to erase the fact that none of the tests in there actually behaves better than in the new one.
  7. We already discussed this test on your forum. You know all the reasons why the cylinder test is like this. This test did not exist in your PEEL-master branch, so it cannot be an example of how my changes to your integration code made things worse. Try again. (This isn't even true anyway: if you click the checkbox in the UI, everybody uses a highly tessellated convex. Wrong again!)
  8. Oooook I give up guys. Records will show that I did try my best, and that none of this is on me. Ciao.
  9. I never ever presented myself as the "smartest person". Please show me a single example of that. You often make random statements like this without ever presenting proof. (Dirk is still waiting for your answer at the end of this thread: http://newtondynamics.com/forum/viewtopic.php?f=9&t=8836&start=15) I'm sure I would figure out GitHub if I had the time or motivation to look at it. I just have neither of these. If you keep deflecting the discussion into wild random claims like most of the Internet does, instead of simply pointing out one single test that would prove your claims, I will have to conclude that I called your bluff and that your statements were in fact incorrect. Please prove me wrong. Please name one test. I am not asking for much. Or anybody else maybe?
  10. Sigh. Can anybody else explain this to me? Was my previous post unclear? I already gave you an answer to that. Maybe I can complete it here:

a) This is not the root of the problem. I used GitHub like I could have simply posted PEEL on my blog or something. I did not "manipulate" the files to change the results so this is not the thing we should fix. This whole thing is a red herring.

b) I am not familiar with GitHub used as a source control tool. If this "commit right" means that anybody can submit anything without a review and the ability for me to accept / reject the changes, this is not going to work. People would break things all the time. It already happens at work, I don't want the same sh*t in my personal project. For example they would break the Havok plugins because they didn't bother compiling them. I have about 40 different PINT plugins, nobody's going to compile / test them all before submitting their breaking changes. It takes a lot of work and a lot of effort to keep everything working for all plugins, even for me.

c) If this "commit right" allows me to review / accept / reject changes, it is not different from sending me the files directly. On the other hand if I take too long to review a change, people will get upset. I don't have a lot of free time and I sometimes don't touch PEEL for a year (as you probably saw yourself, it took forever for your changes to appear in a new PEEL version). If I do a review and find something questionable, I certainly don't want to start never-ending threads about it on GitHub. I don't have time for this overhead.

d) The "trunk" in GitHub is not the real trunk. It misses a bunch of other engines. I would have to constantly fetch changes from GitHub and manually merge them to the master version on my home PC (which is not even connected to the internet). I don't have time for this extra step. It is a lot easier to apply the changes to the master version only, and then upload that to GitHub once every year.
e) You are the only one complaining about this. Other people did their own branch, submitted their changes, and it worked. I am not eager to change anything before we explore other avenues and try to resolve this another way, which I think would be easier.

I also do not understand your point about compiled libraries that you have to delete. I am not sure which libraries you are talking about, or why they are a problem. In any case, forget this GitHub stuff: it is simply not going to happen. But I am very eager to get to the bottom of this problem: I did just that, and I don't see that the tests run "much better" in your archive. Can anybody else reproduce this? Seriously, let's stop fighting here a bit and let's fix this one. If there is a difference in performance or behavior, I don't see it, and I can guarantee that it does not come from the changes I made (which were just removing unused functions, and extra shapes / joints that PEEL did not support). So please let's focus on one problematic test and let's look at this in detail.
  11. I would like to get to the bottom of this one next. With all readers here as neutral witnesses, I will apologize to you if my changes to the Newton plugin integration code you sent created any performance or behavioral regression. And I will then immediately fix the problem in the published PEEL on GitHub. Deal? So, first, please tell me which test I should be looking at.
  12. One thing at a time, guys, or we'll never see the end of it. There. Good. Excellent. So do we agree that my initial statement, which was: ...was factually correct? Do you understand why your initial answer to this statement, which was: ...is thus hard to comprehend? My initial statement was simple. We do not need to bring a whole "chronology" of past events, or PhysX 3.3.4, into it. I only wanted to point out that the PhysX 3.4 used in your videos was the trunk from 2 years ago. A lot of CPU optimizations were added since then (which is usual, it happens all the time), but also a full pipeline rewrite for the GPU bodies (which is unusual, we only did that once in the whole lifetime of PhysX 3.x). It was legitimate to point out that public videos posted in December 2016 showing "PhysX 3.4" are not actually using the PhysX 3.4 official release that we did just a month later. Anyway I think we are on the same page now: that statement was correct, I was not "confused". May I move to the next item?
  13. You are scary. You seem to have built a complete delusional story that fits your narrative without any regard for facts. I know alternative facts are all the rage these days, but still.... Maybe let's try another approach and debunk them one by one, depth-first rather than breadth-first, randomly starting with this one: What I said was: So I mention two things: 1) the initial videos posted in this thread, and 2) your PEEL master branch.

1) Let's have a look at the video first, for example the first one:
a) Do you agree that this video does show PhysX 3.4?
b) Do you agree that the PhysX 3.4 used in this video is not the one released in PEEL in January 2017?
c) Do you agree that there is no PhysX 3.3.4 in this video?

2) Let's have a look at your PEEL master branch.
a) Do you agree that it does contain PhysX 3.4 DLLs?
b) Do you agree that they are not the ones released in PEEL in January 2017?
c) Do you agree that there are no PhysX 3.3.4 DLLs in this branch?

Please answer just "yes" or "no" for each one. I know I should just let it go but it's like a social experiment now. I'd like to see if we can agree on easily provable facts, or if even this is going to be controversial.
  14. > The one constant here is that all the mistakes happens to favor physx and penalized every one else

I am not sure which "mistakes" we are talking about here, but people only notice the bad stuff anyway. They ignore all the things that are done properly. So of course, if you only ever mention the problems, you end up with a biased view. Much like with the media only reporting the one plane that crashed, never the thousands of planes that did not.

> you were presented with correction in more than one occasion

I did use the "correction". I have no idea why you keep denying this. If I compare the code from your own PEEL-master branch and the one I have in PEEL 1.1, I even see optimizations that I added to make Newton perform better:

    // PT: added this to discard kinematic-kinematic pairs. Without this the
    // engine's perf & memory usage explodes in some kinematic scenes.
    {
        const PinkRigidBodyCookie* BodyCookie0 = (const PinkRigidBodyCookie*)NewtonBodyGetUserData(body0);
        const PinkRigidBodyCookie* BodyCookie1 = (const PinkRigidBodyCookie*)NewtonBodyGetUserData(body1);
        if(BodyCookie0->mIsKinematic && BodyCookie1->mIsKinematic)
            return 0;
    }

Why aren't you ever mentioning that?

> and you said you were going to integrated my implementation

And I did. That's why the Newton plugin's code is very different between PEEL 1.0 and PEEL 1.1.

> them you silently closed the topic as if it never happened.

I closed the topic because it was resolved. I did use your latest version. If what you have in your PEEL-master branch is actually not the latest version, then I don't know where to grab it. I never got anything more than this.

> Not only you closed the topic you went over my integration and changed in a way that any user would think I was the one who made the changes to my own integration.

No. We already discussed this on your own forum. You just ignore my explanations and constantly come back to this. You are the only one thinking that somehow "//JULIO" is worse than "//PIERRE". I used this comment to mark the places where I was removing the code that was not compiling or not called anymore. I never thought for a second that anybody could interpret it the way you did. And even if people think that you are the one who commented out these bits, I still have no idea why it matters. They would not be used even if they were put back. I have no idea why you're upset about this.

> you when out of your way to removed the changes you did not want and you place my name of them, can you explain why would I add a comment to remove functionality?

I already explained everything several times. It is still right there on your own forum: http://newtondynamics.com/forum/viewtopic.php?f=9&t=8839 This part:

> the new shapes should be implemented for all engines supporting them (Havok, Bullet, etc). I don't know how much work that is, so I may remove this for now and keep that for a later version. Same for the new joints.

I told you exactly what I would change before doing it. I invite people to read the whole thread there and see for themselves.

> to me that kind of calibration look like voodoo magic, but maybe you have a more technical term for it.

First, this is a trick done entirely on the user's side. This is not something built in PhysX itself. Nobody is forcing people to use this trick. Most PhysX games do not. PEEL 1.0 did not do any of that, and it didn't prevent PhysX from working. Thus, using this example to show that PhysX uses "magic numbers" is wrong. Second, as I already explained on your own forum, this is simply an alternative way to increase the number of solver iterations. The number of solver iterations is not a "magic number", it is a legitimate and reasonable parameter coming from, well, iterative solvers. At its core, this is simply an optimization. The direct way to do this would be to increase the number of solver iterations.
But in PhysX it affects all objects contained in a "simulation island", which is bad for cases where a jointed object is in the middle of a large simulation island. In that case, the trick I presented gives better performance, because it limits the increased number of iterations to the jointed object itself, without affecting the debris / rocks around it. It allowed us to run the scene on the GPU with good performance. You can call this "voodoo magic" if you want, but this isn't really more questionable than a solver iteration count.

> for example in the toroid case FixedJointsTorusMultipleConstraints and FixedJointsTorus you applied same hacks to all other engines that you apply to physx.

I have again no idea why this is a problem. The FixedJointsTorus scene does not use the "hack", it is the same as before in PEEL 1.0. It does not penalize other engines. Then FixedJointsTorusMultipleConstraints uses the hack, and the test description is pretty clear about it and nicely explains what this is about. Of course I run this in "all other engines", since that's the whole point of PEEL. This tool is made to investigate how different engines react to the same setup. And PEEL shows that it also works fine in Bullet, for example. It is perfectly legitimate and justified to try this in all engines.

> don't you find a least disingenuous that a test that is supposed to improve quality act just the opposite?

It works in some engines, it doesn't work in others. What am I supposed to do about this? Remove the entire idea and tests from PEEL just because Newton reacts badly to it? With all due respect, that doesn't make any sense.

> why are we obligated to support the PhysX hacks.

You are not "obligated". I did not support Newton in PEEL originally (see e.g. the list here: http://www.codercorner.com/blog/?p=748). You complained about that. I only added Newton because you complained. But if you prefer, I can now remove Newton from PEEL for the same reasons. Beyond that, the "PhysX hacks" are useful in more than just PhysX (I even got some of them from the Havok samples) so it is perfectly reasonable to try them in all engines.

> so your solution Is that other engine must resolve four, eight and some time even 32 times the same joints load so that PhysX is still faster.

All the engines have to resolve N times the same joints in these tests, yes. That's the purpose of the tests: to see how each engine reacts to this approach. This is not to make PhysX "look faster" (?), this is to show how each engine reacts to a given strategy. If an engine like Newton does not need to use this approach (e.g. because it uses a direct solver for joints), then this is going to be visible in the version of the test that does not use the hack, or when unchecking the appropriate checkbox in the per-test UI. Your users are going to see this, they are not dumb. In this very thread you have one guy who played with PEEL (both your version and the latest one), and concluded that Newton was best for his use case. PEEL does not push people away from Newton. This very thread we're in shows you the opposite. I suggest you focus on this proven, real benefit from PEEL, rather than focusing on imaginary drawbacks.

> you do same thing for other demos where every one is using a normal initial mass matrix you hard code PhysX to have the Inertia artificially multiplied by ten and still PhysX is less stable. There should be a cap bit for the inertia tweak. Engines not supporting the feature should not be able to run the test.

Inertia tweaks are part of the bread and butter of game physics. It is legitimate to test these things. In any case if "PhysX is less stable", I'm not sure why you're complaining. That does not penalize Newton, does it?

> credit for what for spending a generation misrrepsenting every body. you call me aggressive, but we you objects is to be exposed in you hypocrisy.

What?

> Am I the only one who thinks that 2013 come before 2015?

What?
> Beside the archive that is on the download links Newton 3.13, so in any case in comparing and older version of newton to all versions of PhysX up to 3.4 (work in progress)

What?

> would you admit that in am case if better that all version at that time? but you can't admit it can you. you excuse is that I am using a lees that latest version, but you have been said the same since physx 2.8 haven't you?

What? I'm afraid you lost me again. I was only pointing out that your video uses an old version of PhysX 3.4. I have no idea how what you answered relates to this.

> that would be a fool errand, I already provided tow versions. one you have in your source control, and one I poste in your GitHub and you ignored it. why would I do the same all over?

I did not ignore it. And if you don't want to submit improvements, don't complain about the state of the code.

> Like I said before if you were honest, you would allow for people to be contributor to the GitHub source control.

And as I said, you're already a contributor. I have no idea about the "GitHub source control" stuff, I only use GitHub as a way to share the source code with the world. Never used GitHub before. The PEEL trunk is not on GitHub, it's on my desktop PC at home. That's why the GitHub depot has only 6 commits. I wouldn't even know how to "allow people to be contributor to the GitHub source control". But it probably wouldn't work anyway because half of PEEL is not actually on GitHub (the Havok binaries, other engines that I'm not supposed to release publicly, etc). I have to make all contributions somehow "work" with these other non-public bits as well. That does not prevent people from sending contributions and improvements (as Erwin did for Bullet).

> Instead ask for people to give you contribution so that you can pick and shoes what to present.

Nonsense. That's exactly why I added the //JULIO comments. I could have silently removed these pieces of code, but I left everything as-is to show what was left out. No matter what I do, you complain.

> wrong again, with newton 3.12 and 3.13 I briefly experiment with iterative solver but I realized that was a big mistake so I went back to the exact solver only solution

If Newton 3.12 and 3.13 did use iterative solvers, then my comment was actually correct. How am I supposed to know that things are changing in 3.14? It's not released yet. And there are no release notes anyway. Besides....

> while leaving the contact to the iterative solver.

...you still have an iterative solver for contacts anyway (!). So what I was saying still holds. I don't know why you feel the need to refute absolutely everything I say, no matter how ridiculous it sounds.

> but I understand enough to figure out those two statements contradict each other.

Errr. No they don't (?). Your PEEL-master branch does contain the PhysX 3.4 DLLs from PEEL 1.0. But it does not contain the PhysX 3.4 plugin source code (or their PINT binaries). So when you run the EXE you are only presented with PhysX 3.3. As for your video, you don't say how you created it, and the source code is not available AFAIK. But the name of the PhysX 3.4 plugin ("PhysX 3.4 trunk") shows that this is not the one that was released in PEEL 1.1. It is thus very likely the one available in your own PEEL-master branch. Hence, my statements do not contradict each other.

> when you released Peel in GitHub two years later, I downloaded to get the latest but I was not able to use any of the new stuff because Peel comes with a set of compiled libraries that prevent any end users from extending it to make new test

What compiled libraries? I don't know what you're talking about. Many users downloaded it, added plugins for their own engine, and created new tests. Dirk Gregorius at Valve did a plugin for his Rubikon physics engine. Havok guys did, for their latest (non public) versions. You yourself sent me new tests that I added to the latest release (like the gyroscopic test). If there is some issue with the latest release preventing people from adding their own tests, that's the first time I hear about it.

> because that was a dll you put there the same day.

I have no idea what you're talking about. What dll?

> It also debunks the statement that you never use Newton because you not understand it, I mean you are saying you looked at it hundreds of time, how could this be?

Looking at the PEEL results comparing Newton & PhysX in that branch does not magically teach me how to use your API.

> I do not trust a word coming out from you

(shrug) That's unfortunate but I am not going to lose sleep over it. I tried to work with you, I failed, and now I'm sick and tired of this constant BS. (Readers: don't worry, this has been going on since at least 2007, see e.g. http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?p=&f=&t=1334). Decide for yourself who should be trusted. While I'm at it, I'll share with you something I never showed you: what one of your own users on your own forum sent me in private:

> Hi, I Have been reading the latest posts on the PEEL integration issues, and I feel like I should say something publically, but I dare not enrage the Beast any further ;P so I just wanted to let you know you were amazing there! Julio is, in my experience, a bit difficult to deal with in the forums (I don't know him personally). It seems to me he misinterprets things quite easily, and his writing is sometimes 'less than clear', but I had never seen him go so full ballistic over anything... And yet you remained a true gentleman as long as you could (I would have snapped or quit much earlier). Anyway, I'm sorry it ended up like this, but I think you gave it all you could, and hope Julio can improve his attitude in the future (for the sake of all...). Well done, and cheers!

I am not the problem. In any case this is going nowhere so I'll stop here. Good luck to your users.
  15. Ok, so it is not a bug but a feature. It's an easy mistake to make since:
- as you pointed out, it's not the first time it happens. This stuff was broken in the past, before you fixed it.
- the API to control the sleeping behavior is still there unchanged. But when upgrading from 3.13 to 3.14 the behavior changes a lot in that respect, and it looks like the call has no effect anymore.
In any case the net result is that I cannot really test the performance of Newton against the others like I did before.
  16. Linear-time solvers, i.e. iterative solvers, as opposed to the old-school exact solvers like the LCP-based ones from Baraff. I am not assuming anything. Bullet, PhysX, Havok, and recent versions of Newton all use iterative solvers. Newton still has an option to enable the exact one, but this is not the default IIRC. GS = iterative = what I meant.

Can you please stop? Your own API had parameters for the number of iterations, etc.

What? That's not at all what I said.
  17. I encourage people to read the full thing and decide for themselves if it is comparable. (It is not) That is just ridiculous. I never got any 3.12 integration from you, to start with. I did that one myself from the 3.13 plugin you provided, long after the 3.13 one was done. I wanted to see if there was any difference. And regardless, I am not supposed to speak about Newton in all my posts, am I? And even if I "forgot", so what? Are you mentioning the tests where PhysX is faster in discussions involving two other unrelated engines? What the hell? I never claimed such a thing. You keep repeating this but you never provide any source. I just never claimed that only PhysX could do X or Y. Especially when Havok usually performs just as well or better. I'm afraid you lost me again. I provide the source code for everything, no idea what you mean. You are already a contributor. I am not the one who wrote the Newton plugins. I only removed offensive comments about the "abomination of PhysX materials" (sigh) and bits that were not supported by other engines. I did not affect the Newton performance or memory usage. You can tell me what to change or submit new versions of the code, as I wrote a zillion times. You can also grab the whole thing on GitHub and do your own version, as you (and others) did. Not sure what more you want.
  18. Because I released a preview of PhysX 3.4 as part of PEEL 1.01, on April 8, 2015. See here: https://github.com/Pierre-Terdiman/PEEL/releases If you download your own PEEL-master, you will see that it contains the same DLLs, from the same day. As I was saying, this is an old version of PhysX 3.4 that contained regressions (as noted in the PEEL readme at the time), and was also very different from the one released in January 2017. The whole pipeline has been revisited and turned upside-down for the GPU rigid bodies since then. Thus, I'm afraid the confusion is not on my side. I suggest you read John's email again, copy-pasted in this very thread. It said right there that Newton was better for some things. So I have no idea what you are complaining about. (For readers: this has been a recurrent pattern in my discussions with Julio, most of the time I'm lost, I don't understand what he wants or what he says). In any case if I run your own PEEL-master branch that I just downloaded again, and compare Newton 3.13 to PhysX 3.3 (the most recent versions in there), I see the same as the hundred times I've looked at it, and the same as what people already reported in this thread: in some ways Newton is better, in some ways it's PhysX. As I explained before in this thread again, there is no "best" engine, it's a multi-dimensional thing. I am not sure what more you want me to say. This is just incorrect. I know nothing about Newton. The only time I tried to write a Newton plugin for PEEL myself, I couldn't even get the collision detection to work (sorry but there's no doc and the API is not super clear to me). As a result, so far, all Newton plugins came directly from you. It is your own integration. As I wrote N times in the past, if you don't like it, just send me a new one. 
As explained by John Ratcliff on GitHub in a link that you posted yourself (https://github.com/Pierre-Terdiman/PEEL/issues/3), the Newton integration provided in PEEL 1.0 was the last one I ever got. You complained about it (see that link). I went to your forum, you sent a new one, released in PEEL 1.1. And you complained again anyway, for reasons that I never fully understood. As for my blog post, it was written before the first Newton integration in PEEL happened so I have no idea why you bring that up. It was mainly an answer to the online claims that "PhysX is crippled". It has absolutely nothing to do with Newton. This is weird and surreal to me: what would you want me to have written in that blog post, before I even used Newton for the first time? It is irrelevant anyway: am I not allowed to claim that PhysX is the champion if I want to, in exactly the same way you constantly claim Newton is? I truly don't get it. Because as I showed above, you are indeed using an old version. And the words are important: I did not "accuse" you of it, I just pointed out this fact. Not looking for conflicts here. I certainly encourage people to read that one, for some context and background. I don't think I said anything bad in there, but I will let people be the judges of that. As I wrote N times on your own forum, including in a thread that became so bad it got deleted, I suggest you just send me a new one instead of insulting me all the time. If this is too much to ask, please politely point out one thing that I should change in the current code.
  19. Please look at the Bullet plugin's code and suggest improvements. I am certainly not an expert with Bullet, and what is provided in PEEL is a "best effort". I asked Erwin (Bullet's main author) to send improvements but he only contributed some minor tweaks that got integrated but didn't really change the results much. So far my main problem with Bullet is that when I change the settings to make a particular test behave better, it makes things worse in some other tests. Indeed, if there are some issues and mistakes they're not intentional. No matter what people claim. It would be a weird strategy to make things intentionally bad for engine X, and then open source the whole thing. Yes, that was what I saw as well for a few of them. But as mentioned above, the tweaks I could find for a given test were usually different from the tweaks needed in another scene, making each change questionable. In any case, please suggest any improvements to the Bullet plugin, they're certainly needed. Ah yes, if you didn't know, I am an Nvidia employee and I wrote some parts of PhysX. I also wrote half of the NovodeX SDK back in the days. PEEL is certainly slightly biased in that respect, since I know PhysX a lot more than the other engines. However, this is the reason why it's open source: please suggest improvements for the other engines. They are very welcome. I can only hope that Julio will pay attention to this bit and give me some credit. I tried multiple times, but no matter what I do Julio is increasingly unhappy and aggressive. I kind of gave up now. I was last considering removing Newton from PEEL entirely, since I only ever got insults and unfounded accusations out of it. I suspect this is going to happen again in this thread, unfortunately.
  20. I am not sure I entirely follow your questions, but let me try to answer them.

First, just so we're clear: by default (if you just run PEEL and don't tweak anything), everything uses the CPU. Nothing runs on the GPU (except the basic OpenGL graphics, of course). In that mode, as far as I saw, there is no major difference between running on a desktop PC and running on a laptop. The relative performance of each engine should remain the same. If you do see a difference, then this is something I didn't see myself and cannot explain. Some things that come to mind nonetheless:

- Make sure both the desktop and the laptop use the same "power plan" (in the control panel). There is usually a "high performance" plan which is better for running benchmarks, as opposed to e.g. a "power saving" plan.
- Make sure you are not using more worker threads than the number of available physical cores. For example, if the desktop PC has 4 cores and the laptop has 2, running with 4 threads in PhysX might create issues on the laptop.
- Try to run one engine at a time instead of both at the same time (or use the Fx keys to disable an engine). Maybe the first one takes a huge number of cache misses compared to the second. Alternatively, there is a checkbox to randomize the order in which engines are used.
- Try to disable rendering, just in case.

If the difference remains, then I don't know how to explain it, and I would need to use a profiler on your laptop to get to the bottom of it. Another thing that comes to mind here is that maybe the desktop PC is Intel and the laptop isn't. We only really test the performance on Intel processors. You can use the CPU-Z tool to dump your processor's characteristics to a file, and I can have a look if you want.

Now, the GPU stuff. At time of writing, only PhysX 3.4 uses the GPU in PEEL, and this only happens if you click the "use GPU" checkbox in the PhysX plugin's UI. If you do that, the rigid body simulation will then run on the GPU.
Things like articulations and scene queries (raycasts, etc.) stay on the CPU (they haven't been ported to the GPU yet). The GPU code is written in CUDA, so it will only run on an Nvidia graphics card. If your "integrated chipset" is not Nvidia, there should be an error message in the DOS window telling you that the software reverted to the CPU pipeline. It should not affect performance compared to not selecting the "use GPU" checkbox, but who knows, maybe there's a bug in that specific scenario.

If your integrated chipset is an Nvidia card and you do run the physics on the GPU there, it might be slow simply because the GPU is too old. There is a break-even point with the GPU stuff, beyond which the GPU codepath is faster than the CPU codepath, but it is not always reached. Simple scenes will typically run faster on the CPU no matter what. And for large scenes, "old" GPUs may also not run the simulation faster than the CPU. You need a heavy scene and a relatively recent GPU for the GPU codepath to be a win. If needed, CPU-Z will tell you what exact GPU you have, and I can have a look.

If you are running on the GPU, the performance might also depend on the driver version. Downloading the latest drivers might help here, if you're not up-to-date. That's about all I can think of; I hope it answers the questions a bit.
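The worker-threads advice above can be sketched in code. This is a hypothetical helper of my own, not part of PEEL; the name `pick_worker_threads` and the "half the logical cores" heuristic are illustrative assumptions (`os.cpu_count()` reports logical cores, and with Hyper-Threading the physical count is typically half of that, but not always):

```python
import os

def pick_worker_threads(requested, physical_cores=None):
    """Cap the physics worker thread count at the number of physical
    cores, so benchmarks are not skewed by oversubscription."""
    if physical_cores is None:
        # os.cpu_count() reports *logical* cores; with Hyper-Threading
        # enabled, a rough physical estimate is half of that.
        physical_cores = max(1, (os.cpu_count() or 1) // 2)
    return max(1, min(requested, physical_cores))
```

With the 4-core desktop vs. 2-core laptop example above, requesting 4 threads would be capped at 2 on the laptop instead of oversubscribing it.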
  21. Beware of "well-known" PhysX "facts" here again. Most of them are plain BS really. All these engines are based on the same "real" equations of motion. They just use different ways to solve them, with different trade-offs depending on the target audience and/or customer requests. There aren't more "magic numbers" in PhysX than in the others (unless you're talking about something specific that I am not aware of?). And I'd say they all use "approximations" as soon as they rely on iterative solvers that don't necessarily converge to the exact solution.
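To make the "same equations, different solvers" point concrete, here is a minimal semi-implicit (symplectic) Euler step, a common baseline integrator in rigid body engines. This is a toy illustration of mine, not code from PEEL or from any of the engines discussed:

```python
def step(x, v, force, mass, dt):
    """One semi-implicit Euler step: update velocity from the force
    first, then update position using the *new* velocity."""
    a = force / mass        # Newton's second law: a = F / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Free fall from rest under gravity, simulated at 60 Hz for one second:
x, v = 0.0, 0.0
for _ in range(60):
    x, v = step(x, v, force=-9.81, mass=1.0, dt=1.0 / 60.0)
```

All the engines integrate something equivalent to this; the differences show up in how they compute the constraint and contact forces fed into it, not in the underlying equations of motion.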
  22. FWIW, most of the "common knowledge" I see online about PhysX... is wrong. But you are right that none of these engines is "the best". There's performance, memory usage, stability, accuracy, customer support, supported platforms, the API, the documentation, the features, the community, the source code quality, the API stability, and dozens of other aspects, each of them with shades of grey anyway. These engines have pros & cons, strengths & weaknesses, etc. Anybody telling you "mine is the best" is either trying to sell you something, or a fanboy of one particular engine.
  23. I am not sure what you are referring to here. You can improve the stacking behavior in most engines by just increasing the number of solver iterations.

Also, last time I tested it (at the time of the PEEL 1.1 release), Newton 3.14 had an issue with the "sleeping" algorithm (the mechanism that deactivates objects when they are not moving much). By default these mechanisms are disabled in PEEL, so that I can look at the real performance / stability of each engine. That is, nothing ever sleeps, nothing is ever deactivated (so that's not what you'd get in a real game). However, the API for controlling this seemed broken in Newton 3.14, i.e. I couldn't deactivate sleeping anymore (compared to Newton 3.13, for example). That gave Newton 3.14 a bit of an unfair advantage in some scenes IIRC, since it was the only engine for which things went to sleep. Box stacks typically go to sleep immediately, so maybe it could have an effect here.

In my experience all these engines handle stacking more or less equally well for the same CPU time budget.
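For illustration, here is a toy sketch of the kind of deactivation heuristic being discussed: a body goes to sleep once its kinetic energy stays below a threshold for N consecutive frames. The class name, thresholds, and frame counts are all made up for this example; real engines each have their own criteria (and their own API to disable it):

```python
class SleepState:
    """Toy rigid-body deactivation heuristic (illustrative only)."""

    def __init__(self, energy_threshold=0.01, frames_required=10):
        self.energy_threshold = energy_threshold
        self.frames_required = frames_required
        self.low_energy_frames = 0
        self.sleeping = False

    def update(self, speed, mass=1.0):
        # Kinetic energy: E = 1/2 * m * v^2
        energy = 0.5 * mass * speed * speed
        if energy < self.energy_threshold:
            self.low_energy_frames += 1
            if self.low_energy_frames >= self.frames_required:
                self.sleeping = True
        else:
            # Any significant motion wakes the body up immediately.
            self.low_energy_frames = 0
            self.sleeping = False
        return self.sleeping
```

In PEEL's benchmarking philosophy you would configure things so this never triggers (e.g. an effectively infinite `frames_required`), so every engine pays the full simulation cost every frame.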
  24. Is the "soft-body chain" you mention the chain made of tori connected with fixed joints? It's not soft-body; it only looks soft because the engines use linear / approximate solvers that don't converge to the exact solution. It's supposed to be fully rigid in theory. In any case, the behavior here will depend a lot on the number of iterations I previously mentioned. It is possible that Julio's video (which might be from 2013 if it's as old as John's email) used a different number of iterations compared to his latest build. Different Bullet versions also use a different number of iterations by default (IIRC they moved from 5 to 10), and it will also ultimately depend on the number of iterations I used in Bullet's setup code, in PEEL. Generally speaking, I wouldn't trust a video showing something *failing*; more often than not you can make things work by just tweaking the parameters (at least as far as the PEEL scenes are concerned) or using the proper engine feature.
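The "solver didn't converge" point is easy to demonstrate. Below is plain Gauss-Seidel iteration on a tiny linear system; the constraint solvers in these engines are built on projected variants of the same idea. This is my own toy example, not PEEL or engine code:

```python
def gauss_seidel(A, b, iterations):
    """Solve A x = b by Gauss-Seidel iteration, starting from x = 0.
    Truncating the iteration count leaves a visible residual error,
    which is exactly why an under-iterated joint chain looks 'soft'."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # Use the freshest available values for the other unknowns.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

With only a couple of iterations the answer is measurably off (the chain sags); crank the iteration count up and it converges toward the exact, rigid solution.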
  25. Incidentally, I added a test just like that in PEEL 1.1. It's called "HighMassRatioCollisions"; it should be test #16 in the released build.