Everything posted by nick.ace

  1. Status Report

    The asset manager feature is a great idea! Good luck at the event!
  2. The terrain sort of counts for occlusion. Whenever the terrain (or anything, really) covers something else, some of those surfaces get rejected earlier in the graphics pipeline, so there's less to process. Hardware occlusion culling, in my experience, has limited effectiveness: sometimes it works well, sometimes it doesn't; it just depends on the scene. Outdoor areas are probably the worst places for it, though. For normal map generation, there are some tools that use retopology, such as 3D-Coat, but they can be expensive. It's harder with animated meshes because you have to preserve edge flow (basically, how a triangle stretches during animation). Unfortunately, LOD isn't built in, so you'll need to write a script (see the sketch below). The vegetation system has a billboard system, so you could possibly use that for some repeating objects.
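    Here's a minimal sketch of what such an LOD script could look like. Everything in it is my own convention rather than a built-in API: it assumes your camera or player script publishes itself in a global named playerCamera, and that you assign the two detail meshes in the editor.

        Script.highDetail = nil --entity "High detail"
        Script.lowDetail = nil --entity "Low detail"
        Script.switchDistance = 20.0 --float "Switch distance"

        function Script:UpdateWorld()
            -- Bail out until another script has set the camera global
            if playerCamera == nil then return end

            -- Plain Euclidean distance between this entity and the camera
            local p1 = self.entity:GetPosition(true)
            local p2 = playerCamera:GetPosition(true)
            local dx, dy, dz = p1.x - p2.x, p1.y - p2.y, p1.z - p2.z
            local dist = math.sqrt(dx * dx + dy * dy + dz * dz)

            -- Swap which version is visible based on the threshold
            if dist > self.switchDistance then
                self.highDetail:Hide()
                self.lowDetail:Show()
            else
                self.lowDetail:Hide()
                self.highDetail:Show()
            end
        end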
  3. 13K is not much by today's standards; you can definitely push more. I've personally used higher-res meshes than that with decent performance. One of the biggest costs for me was the shadows for animated objects (much more than the mesh itself). Infamous: Second Son came out two years ago on the PS4 (not exactly a top-of-the-line PC) and its characters were 120K triangles, so this shouldn't be an anomaly in today's games with today's GPUs: http://suckerpunch.playstation.com/images/stories/GDC14_infamous_second_son_engine_postmortem.pdf

    You definitely should use LODs, though. The problem is that the fill rate gets way too high for dense objects (in terms of screen-space occupation and number of triangles). Small triangles can cause rasterization performance problems because of the algorithms used to fill triangles (it has to do with bounds checking and overdraw), and the rasterizer on your graphics card is fixed-function hardware, unlike the programmable cores. The other issue is that rasterization is parallelized across the screen, so if one part of the screen is much denser than another, you get issues with fragment discarding. I don't know if that's your bottleneck, but it could be. You may want to try tessellation, since it works fairly well with characters (although it's kind of difficult to learn). With 4K resolution becoming the norm, 10K characters are going to stick out even more. TL;DR: use tessellation and/or LODs for characters farther away.

    Another thing you should try to do is break up the line of sight more. You'll see a lot of open-world designers do this to avoid problems with draw distances (and to help with streaming). It also lets you hide objects that are closer to the camera.
  4. I think we need to see a little more of the project. It sounds like you are creating an object in the editor and expecting it to be available in a local script, but the cube is outside the scope of the ball's script. There are a few ways you can handle this. The most direct way would be to give the cube its own script and store the entity in a global variable there; then you can access cube as in the sketch below. It matters where you call that distance code, though, because if the cube global hasn't been set yet, you will be accessing an invalid object.
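    A minimal sketch of that setup (the global name cube is just my own convention). In the script attached to the cube:

        function Script:Start()
            -- No "local" keyword, so this lands in Lua's global table
            cube = self.entity
        end

    Then the ball's script can use it, guarding against the global not having been set yet:

        function Script:UpdateWorld()
            if cube ~= nil then
                -- Assuming Entity:GetDistance from the API reference
                local d = self.entity:GetDistance(cube)
                -- ...do something with the distance...
            end
        end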
  5. If you don't use C++, you need to make a pivot and attach a spawning script to it. In your spawning script, use commands such as Prefab::Load: http://www.leadwerks.com/werkspace/page/api-reference/_/prefab/prefabload-r622 You need to make monster prefabs for this to work, though. Something like the rough sketch below.
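    A rough sketch of such a spawning script (the prefab path and the interval are my own placeholders, not anything built in):

        Script.spawnInterval = 5000 -- milliseconds between spawns

        function Script:Start()
            self.lastSpawn = 0
        end

        function Script:UpdateWorld()
            local now = Time:GetCurrent()
            if now - self.lastSpawn > self.spawnInterval then
                self.lastSpawn = now
                -- Load a fresh copy of the prefab and move it to this pivot
                local monster = Prefab:Load("Prefabs/Monster.pfb") -- your prefab path here
                if monster ~= nil then
                    monster:SetPosition(self.entity:GetPosition(true), true)
                end
            end
        end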
  6. Water

    He may be using forward rendering for that part. Forward rendering (or depth peeling) is actually the proper way to do transparency, but it can get expensive, which is why the transparency shaders take some shortcuts that wouldn't work well with water. I think a post-processing effect might be the best bet, though.
  7. Water

    I don't really think that water looked all that different except for the geometry changing; maybe it's just me. The problem is that you either have to do multiple render passes or use a post-processing effect to get true transparency for water.
  8. Lighting is different because you need a direction (unless you want ambient light). That is sort of how lighting already works: lights have a range, and AABB intersections are used to apply the lighting to nearby objects. In an optimized setup for static scenes, you might even use KD-trees (although the AABB approach allows for some movement). It's an interesting idea, but lights also need to be physically plausible, and shaping the lighting in arbitrary ways breaks the physics of light. What you suggest is probably more appropriate for volumetric fog and environment probes. For volumetric light you can do this somewhat, but you still have a light source and the rays can't be interrupted; you would just need to make a mesh around the light. I personally don't see why that Havok demo makes sense, though. I guess it helps if you really need a custom area to be lit, but that seems unrealistic to me.
  9. Take a look at the non-animated transparent shader. In the fragment shader part, you will see a line that looks like this: if (icoord.x%2 == icoord.y%2) discard; This line is what makes the mesh look transparent: it discards fragments in a checkerboard pattern. I think I saw some randomization factor to allow for overlapping transparent surfaces, but I don't remember it, and I don't think it was enabled anyway.
  10. Sorry, I saw the S2 video and thought that's what you were getting at. Correct me if I'm wrong here (which I might very well be, since I'm not an expert in shadow techniques), but there isn't a way to prematurely discard fragments like that, because you won't know where a fragment is until after rasterization, which is far down the pipeline (for either shadow maps or shadow volumes). You still run through pretty much the same rendering pipeline with shadow maps as you do when rendering everything else, so there's not much you're going to save. You could possibly save some fragment calculations, but even that's not guaranteed, because fragments are computed in warps (Nvidia) or wavefronts (AMD). Baking would be a cool feature, but that's an entirely different process. The environment probe area suggestion seems interesting.
  11. There are a bunch of these operators, but they are part of the C standard, so they are not exclusive to shaders. I may make a tutorial on them because you can do some neat tricks with them, but there may already be a lot of material out there. https://en.wikipedia.org/wiki/Bitwise_operations_in_C The && is a boolean operator, and that makes a difference in many situations. You can get away with the bitwise operators if you are only dealing with booleans, but for any other data type you will get incorrect results for certain inputs (for example, in C, 3 && 4 evaluates to true, while 3 & 4 evaluates to 0).
  12. Why can't you just parent a plane to a spotlight? It would give the same effect unless they are doing something special that I don't know about.
  13. Yes, you can do this in the vertex shader of the vegetation shader. Just scale each vertex position by a certain amount before any transformations.
  14. The interior lighting and design look great! If loading cost becomes prohibitive, keep a pooling system and only unload mesh and material objects that you don't need to reuse (see the rough sketch below). Of course, then you would need to implement your own map-loading logic and only have one map, since the map file format isn't documented; that seems like a lot of work. I think a streaming system would be beneficial here, so that you can load things as you get close.
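    To make the pooling idea concrete, here's a very rough sketch. None of these function names are Leadwerks API (they're my own), and it assumes Entity:Instance() for cheap copies that share mesh data:

        local pool = {} -- one hidden master copy per model path

        function SpawnPooled(path, position)
            if pool[path] == nil then
                pool[path] = Model:Load(path) -- only the first request hits the disk
                pool[path]:Hide() -- keep the master copy invisible
            end
            local copy = pool[path]:Instance()
            copy:Show()
            copy:SetPosition(position, true)
            return copy
        end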
  15. Well, && isn't a bit operation, but were you setting a variable with &? I wish the code for the model shaders used bitwise operations instead of addition, because I think it would make more sense. What I was saying about subtracting was mainly for when the total of the flags is less than ten. For example (using 8 bits instead of a 32-bit int): 2 - 10 = -8, which in binary is 00000010 - 00001010 = 11111000. Now you've accidentally set a bunch of flags, because the lighting shader uses bitwise operations to check for flags rather than subtraction. So far there isn't a wiki page on this, but I can make one. Also, if you or anyone else wants me to make tutorials, I can do that. I don't know how useful people find them; they do take time to make, but I like doing it. I'm just not sure how good they are because I haven't gotten a lot of feedback. Edit: Just created a wiki page: http://leadwerks.wikidot.com/wiki:shader-specification
  16. Use the normal recalculation option in the model editor (Tools > Calculate Normals) and select the angular threshold lighting option. I find that this makes objects with sharp faces look more realistic. http://www.leadwerks.com/werkspace/page/tutorials/_/models-and-animation-r8#section4
  17. Ok, my bad. The material flag has to contain a 2 to be selected. Don't subtract 10, though; you should be using bit operations, or else the value can go negative and cause a bunch of flags to appear set by accident. You can just set the alpha channel to 0, but I would look at what the example shaders do, since they include a bunch of other useful flags. Don't comment it out, though, because then the lighting gets screwed up: the normals are needed for good lighting.
  18. Yeah, no problem! I don't actually show how to alter the lighting shader in a tutorial, but I mentioned it in a video because it took me forever to figure out why everything was red when I was overriding Leadwerks lighting lol. That's a very Leadwerks-specific thing though.
  19. Oh I see, my bad. I thought you meant that Leadwerks would have a few compiled versions and that you could set up your program using CMake.
  20. Why? CMake just sets up projects; it doesn't compile anything. Just link the Leadwerks DLLs and you should be all set. http://stackoverflow.com/questions/17225121/how-to-use-external-dlls-in-cmake-project
  21. fragData1 is the normal buffer. The alpha channel of the normal buffer is used to store flags such as selection state and decal information. Are there any other writes to fragData1? I thought that, by default, if you didn't write to it, the unselected flag wasn't set (so the object would render as selected), because in my lighting tutorial I remember having to modify the lighting shader to prevent this from happening. Maybe that changed, though.
  22. Without looking at the GLSL code, it's probably not setting the material flag correctly. Look at the example model shaders. Without this flag explicitly set, the lighting shader will by default assume that the object is "selected." This is actually how objects are colored when you select them in the editor.
  23. I've used both, and I honestly don't prefer one over the other too much, but GCC is easier to use in my opinion. Porting code might be an issue if a developer tries to support both Linux and Windows. From a quick search:
    http://stackoverflow.com/questions/31529327/c-is-it-worth-using-gcc-over-msvc-on-windows
    http://stackoverflow.com/questions/21134279/difference-in-performance-between-msvc-and-gcc-for-highly-optimized-matrix-multp
    http://stackoverflow.com/questions/8029092/gcc-worth-using-on-windows-to-replace-msvc
  24. Yes, that's the only way for now.