
LordHippo

Members
  • Posts: 71
  • Joined
  • Last visited

Everything posted by LordHippo

  1. Crysis 2 does it using a textured point light.
  2. Sorry, I didn't know that ' is the comment character in BMax. Yeah, it's working, assuming the NULL entity is expected.
  3. That was the original question, Aily. The entity is not being passed to the material draw callback, so calling any function on that entity results in a crash. In C, the entity is always NULL.
  4. I was talking about MATERIALCALLBACK_DRAW. The wiki says: So I think that if the entity is not passed to the callback, the callback is useless, and I assume this is a bug. Another question: is the entity draw callback called exactly when the entity is being drawn? For example, if I change shader uniforms in this callback, will the changes be applied per entity, or will the last change apply to all entities?
  5. I somewhat disagree. That's known as the "self-occlusion" problem. They didn't fix it in Crysis 2 either (which is a deferred renderer); they've mentioned that they rather like this side effect and did nothing to fix it. Actually, there are some SSAO implementations out there that don't use normals (just the depth buffer) and don't have this problem. The best examples are Battlefield 3's SSAO (not HBAO), Toy Story 3 (one of the best SSAOs I've ever seen!), Uncharted 2, and many more.
  6. Yeah. I meant "should NOT be dependent on the GPU".
  7. But the culling time should be dependent on the GPU, as long as occlusion culling is not used.
  8. I have the same problem. I'm using this in C, and the entity is indeed always NULL. However, the number of times the callback is called is correct. Has anyone ever used this callback?
  9. I've released my implementation of Crysis1 SSAO. http://www.leadwerks.com/werkspace/files/file/309-crysis1-ssao/ Hope you find it useful.
  10. I think Josh already answered it: BTW, is this high number of objects really a requirement for you?
  11. Rick is right. You have to pass the extra parameter as a string. So if you want to send an int, you have to convert it to a string (e.g. with itoa or snprintf), then pass that string to SendEntityMessage. Lua does this kind of conversion automatically, but in C you have to do it yourself.
  12. My bad, I meant "advice". I have a really bad headache and used the wrong word! I'll write the post as soon as I get better.
  13. Thank you all for your support. I've decided to follow my boss's orders. The reason is that the game development community in Iran is really small, so if I start a fight and get a bad reputation, I could be in trouble someday. But I'll follow Dozz's suggestion and write a blog post about this SSAO. I think it would be really easy for Josh, or anyone with some shader knowledge, to write it themselves.
  14. Hello everyone. Sorry for the delay. To make it short: my plan was to improve and share the shader, along with many other things I've done in the past months. But our producer told me that I don't have permission to release these shaders, paid or free. I tried to convince him that they are my own work, not the company's, and that I can use them any way I want, but I had no success. I really don't like these kinds of restrictions; they only slow down development, because everybody has to do everything themselves. Maybe someday, when I leave this company, I can release them as a package for Leadwerks. I promise I'll do my best to convince him to release these shaders to the community, so wish me luck. PS: I'm really sorry for this behavior; it's not my own preference. I hope you understand.
  15. The main problem is that I'm not an artist, so I can't make good use of the shader. Also, I'm not allowed to publish any shots from the game we're working on.
  16. As I said, the difference in the cave shot is hard to notice, because the model has AO baked into its textures.
  17. So I've tried my best to enable everything. Note that the effect is hard to notice side by side; the best way is to have a layered Photoshop image and switch the layers on and off. Also, the effect of the SSAO is hard to notice because the models' diffuse textures have baked-in AO.
  18. Which model do you mean? I've already posted that comparison on the cave model.
  19. Yeah, mathematically it's not a normal. But in the picture you can see the errors for the different lengths as blue lines, so as you can see, longer is not actually more accurate. The red arrow shows the best length, the one with the minimal error.
  20. No, it's not necessary. It can be normalized in every shader that reads the normal from the gBuffer; I think most of LE's shaders already do this.
  21. Hello everyone, I've implemented a really handy method, developed by Crytek, for storing normal values in the gBuffer. The main idea is simple. The default way of storing normalized normals in the gBuffer only uses a small part of the available RGB values. With 8-bit buffers there are 256*256*256 = 16,777,216 possible RGB values, but normalized normals only ever land on 289,880 of them, so only about 1.7% of the range is used. As you can see in the top image, there are some artifacts in the reflected image. This is not because 8-bit channels are insufficient for storing normals; it's because we're wasting most of their range. The simple fix is to use 16 bits for the normal buffer, but that uses twice the memory and is slower. This method instead fits the normals into 8 bits with the quality of a 16-bit normal buffer: it finds the best length for a given normal direction, such that when the normal is scaled by it and stored in 8 bits, the result is as close as possible to the original normal's direction. The length scales are pre-calculated and saved in a cubemap, so when storing normals in the gBuffer, all you have to do is look up the normal direction in the cubemap and scale the normal by the value you read. As you can see, the second image is much better and has almost no artifacts. The cubemap lookup is then turned into a 2D texture lookup with only one channel (TEXTURE_ALPHA8), which is really fast. In my framework I load and bind the lookup texture at the beginning of gBuffer generation, so bandwidth is saved. With this method, GPU memory, bandwidth, and time are all saved with NO drawbacks B) Note: the attached texture is provided by Crytek. NormalsFittingTexture_2.rar
  22. Thank you so much for the model. It really helped me a lot in development
  23. Thanks Josh. No, I'm just measuring the frame time with and without the whole SSAO pass (copying buffers, rendering the SSAO, blurring it) and subtracting the two. Can you tell me more about "GPU performance query"? I've googled it and found no results for OpenGL. The time was the same in every scene I've tested, though, so I think it's accurate enough. Correct me if I'm wrong.
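For reference, the OpenGL name for this is a timer query (ARB_timer_query, core since GL 3.3). A minimal sketch, assuming a current GL context; it is not runnable standalone:

```c
/* Requires a current OpenGL 3.3+ context (or ARB_timer_query). */
GLuint query;
GLuint64 elapsed_ns;

glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
/* ... issue the SSAO draw calls here ... */
glEndQuery(GL_TIME_ELAPSED);

/* This blocks until the GPU finishes; in production, poll
   GL_QUERY_RESULT_AVAILABLE a frame or two later to avoid a stall. */
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
printf("SSAO GPU time: %.3f ms\n", elapsed_ns / 1.0e6);
```

Unlike CPU-side frame-time subtraction, this measures only the GPU work between the begin/end pair, independent of CPU overhead.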
  24. Thanks Scott. I've never heard of "injectors" before and have no idea how to make one, but it would be interesting to try, and to test my shaders in AAA games! I will test the quarter-res depth buffer, but I think there will be some halos around objects. Currently, running the SSAO with a half-res depth buffer at 75% resolution takes about 1.5ms at 720p on a GeForce GTS450, which is really good. About the AA method (SMAA): I don't think the performance is good. It runs in 1.8ms on a GTX295 (which is really powerful) with a 4x memory footprint. So in my opinion, MLAA is the best post-process AA technique for the current generation of hardware. I'll try to implement all the AA methods, such as MLAA and SMAA, and compare them in terms of quality, performance, and memory footprint.
  25. Hi all, here is my SSAO implementation. I've taken some inspiration from HBAO, but this is a completely different method and is MUCH more optimized. I've also implemented the Crysis 1 SSAO to be able to compare. The time values you see on the image are SSAO render time only. Screen resolution is 1280*720, and the SSAO is rendered at full resolution on a GeForce GTS450 card. ( FPS = 1000 / time ) Model from Ywa by Harry ( http://harrysite.net/work/index.php?x=browse ) I've also implemented a method for false-occlusion removal: it's a combination of a method Crytek used in Crysis 1 and my own method, and it still needs more optimization. Another improvement I've made is using a half-resolution depth buffer for the SSAO rendering, a method used in Uncharted 2. As you can see in the picture, this almost doubles the speed of the SSAO with almost no visual artifacts. From a technical point of view, the main bottleneck of SSAO algorithms is their texture fetches: because of their random sampling, they cause heavy GPU cache thrashing, and cache thrashing decreases shader performance dramatically. So any method that reduces cache thrashing speeds up the SSAO. One of the main benefits of a half-resolution depth buffer is that the texture sampling points get closer together, so the amount of cache thrashing is reduced, and, as the results show, there is no noticeable difference or artifact. So with this technique you gain much more performance at no visual cost! That's the magic of OPTIMIZATION. I will share it with the community when I'm done with the false-occlusion removal optimizations. Also check the album page for more shots. BTW, I'll be happy if anyone has any visual or technical suggestions, because I want to "get rid of it, once and for all"