Posts posted by nick.ace

  1. No, there is no streaming. It's a feature I really want too. You can't implement it yourself because it requires specific OpenGL functions. You'll have to break up your maps, but you can keep a pool of objects in VRAM so you don't have to load everything from scratch each time you enter a new region; anything new still has to be loaded, and anything no longer needed gets unloaded. A sketch of the pooling idea is below.
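
    A minimal sketch of that pooling idea, assuming Leadwerks' Lua API (Model:Load, Show, Hide); the pool table and function names are just illustrative:

    -- Keep loaded models resident and hide them instead of freeing them,
    -- so re-entering a region doesn't reload everything from disk.
    local pool = {}  -- path -> model kept in VRAM

    function GetPooledModel(path)
        if pool[path] == nil then
            pool[path] = Model:Load(path)  -- first use: actually load it
        end
        pool[path]:Show()
        return pool[path]
    end

    function ReleaseModel(path)
        if pool[path] ~= nil then
            pool[path]:Hide()  -- keep it resident for next time
        end
    end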

  2. I agree with gamecreator. If there aren't going to be profiling tools (and there really should be), then there needs to be extensive documentation for this. The only one who knows how the engine is designed is Josh, so he's the only person who can say definitively where the engine performs well and where it doesn't. I really shouldn't be expected to upload a project every time I encounter a performance issue.

    • Upvote 3
  3. And 10k is too much for characters 100m away, and 1k is way too much for a character 1000m away. That game was one of the first on the PS4, came out three years ago, and uses forward rendering, and even then its 3rd LOD level is 10k.

     

    Check out this paper:

    http://graphics.stanford.edu/papers/fragmerging/shade_sig10.pdf

     

    And this discussion (to help prove that I'm not making these numbers up):

    https://www.reddit.com/r/gamedev/comments/26fpq1/polycount_and_system_requirements/

     

    The paper covers overdraw and a way to change GPUs to better render small triangles. This is relevant because the OP's problem isn't related only to vertex counts for the characters. In the same way that 40k characters are inappropriate at certain distances, 10k characters are as well. Rendering small triangles puts stress on the rasterizer, and it also adds more fragments to be computed.

     

    The point is that 13k is not high-end in today's games, and that 40-60k should be reasonable as long as you use LODs (which you should be using for 10k as well because of the overdraw problem).

  4. This is ridiculously awesome!!! It should help make AI soooo much easier to program and much more complex. If people start making AI functions and uploading them to the Workshop, I can't even imagine the possibilities! It's looking great so far! Really excited to see where this goes!!!

  5. Another way you could do it is to just subtract the positions and then calculate the new spot from there:

     

    -- direction from the player to the enemy
    dx = enemy.x - player.x
    dz = enemy.z - player.z
    len = math.sqrt(dx * dx + dz * dz)
    normalized_dx = dx / len
    normalized_dz = dz / len
    -- place the enemy 'distance' units from the player along that direction
    final_x = normalized_dx * distance + player.x
    final_z = normalized_dz * distance + player.z
    

     

    And then move the enemy to that position. This way, you wouldn't have to deal with pivots.

    • Upvote 1
  6. Shadmar made most (if not all?) of the post-processing shaders in the pack on the Workshop, so he would know best. I think you just organize them by number (lowest number being the first, etc.). But Thirsty Panther is right about experimenting because I've sometimes felt that different orders worked better.

     

    A few tips though:

    -Use SSAO as one of the first shaders

    -Use fog after SSAO but before everything else

    -Use SSLR as one of the last shaders

    -Use color-based shaders last (e.g., grayscale)

    -Use depth of field last

     

    IDK where to put bloom and a few others though. In script form, the ordering would look something like the sketch below.
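
    A minimal sketch of that ordering, assuming Leadwerks' Camera:AddPostEffect call; the shader paths are placeholders for wherever your Workshop shaders actually live:

    -- Post effects run in the order they are attached (lowest slot first)
    camera:AddPostEffect("Shaders/PostEffects/ssao.shader")      -- SSAO first
    camera:AddPostEffect("Shaders/PostEffects/fog.shader")       -- fog right after SSAO
    camera:AddPostEffect("Shaders/PostEffects/sslr.shader")      -- SSLR near the end
    camera:AddPostEffect("Shaders/PostEffects/grayscale.shader") -- color-based effects last
    camera:AddPostEffect("Shaders/PostEffects/dof.shader")       -- depth of field very last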

    • Upvote 3
  7. If you look at the slides, they say they draw over 11 million triangles regularly. This isn't the only game to do this either (I'm sure you can find more data by searching):

     

    http://kotaku.com/just-how-more-detailed-are-ps4-characters-over-ps3-char-507749539

     

    BTW, the PS4's GPU is roughly between the GTX 750 Ti and the GTX 760 in terms of floating-point operations per second and core count, and there's no integrated GPU better than that (two of Intel's newest, top-end GPUs do compete with it, though). Since the GTX 750 Ti is at the lower end (look at the Steam system requirements for some of the newer AAA non-VR games), I don't think 30k is unreasonable for characters, but LODs would certainly help.

  8. I have the GTX 750 Ti, and it was great for a while, but I'm now struggling to play some of the latest games on low settings. I think that might not be a bad low end (or even the 280X, depending on when you plan to release) when you consider how the market will look in the future. It seems like almost half of the GPUs on Steam have 2 GB of VRAM or more, and I think the GTX 750 Ti is on the lower end of the 2 GB cards, but I might be wrong.

     

    The good news is that GPU throughput should continue to increase rapidly, unlike the CPU market.

  9. You can just edit the vegetation shader. I'll update this answer once I find the right line.

     

    Edit: So what you have to do is this:

    1. Go to the materials for your vegetation, open the shader tab, and press the pencil button next to the "Vegetation" shader
    2. In the screen that pops up, change the stage to "Vertex"
    3. Find the line that sets the scale; that scaling equation is what you need to change (and you need to do this for every one of your vegetation materials)
    4. Make two new float variables for your range (e.g., "float small", "float large")
    5. Replace the .x scale-range variable with "small" and the .y scale-range variable with "large"

     

    Now your vegetation will scale within those ranges.

    • Upvote 2
  10. The terrain sort of counts as occlusion. Whenever the terrain (or anything, really) covers something else, some of those hidden surfaces get rejected early by the graphics card, so there's less to process. Hardware occlusion culling has had limited effectiveness in my experience: sometimes it works well, sometimes it doesn't. It just depends on the scene. Outdoor areas are probably the worst places for it, though.

     

    For normal map generation, there are some tools that use retopology such as 3D-Coat, but they can be expensive. It's harder with animated meshes because you have to preserve edge-flow (basically how a triangle stretches during animation).

     

    Unfortunately, LOD isn't built in, so you'll need to write a script (see the sketch below). The vegetation system has a billboard system, so you could possibly use that for some repeating objects.
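
    Here's a minimal distance-based LOD sketch, assuming Leadwerks' Lua script conventions (Script:UpdateWorld, Entity:GetDistance, Hide/Show); the two-level setup, the 'camera' global, and the 30 m threshold are all just placeholders:

    -- Attach to a pivot; point these properties at your two versions of the mesh
    Script.highDetail = nil --entity "High detail"
    Script.lowDetail = nil --entity "Low detail"
    Script.lodDistance = 30.0 --float "LOD distance"

    function Script:UpdateWorld()
        -- 'camera' is assumed to be set as a global by your camera/player script
        local d = self.entity:GetDistance(camera)
        if d > self.lodDistance then
            self.highDetail:Hide()  -- too far: swap to the cheap mesh
            self.lowDetail:Show()
        else
            self.highDetail:Show()
            self.lowDetail:Hide()
        end
    end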

  11. 13K is not much by today's standards; you can definitely push more. I've personally used higher res meshes than that with decent performance. One of the biggest things for me was the shadows for animated objects (much more than the mesh itself).

     

    This came out two years ago on the PS4 (not exactly a top-of-the-line PC), and its characters were 120k. This shouldn't be an anomaly in today's games with today's GPUs:

    http://suckerpunch.playstation.com/images/stories/GDC14_infamous_second_son_engine_postmortem.pdf

     

    You definitely should use LODs, though. The problem is that the fill-rate cost gets way too high for dense objects (in terms of screen-space coverage and triangle count). Small triangles cause rasterization performance problems because of the algorithms used to fill triangles (it comes down to bounds checking and overdraw), and the rasterizer on your graphics card is fixed-function hardware, unlike the processing cores. The other issue is that rasterization is parallelized, so if one part of the screen is much denser than another, you get issues with fragment discarding. IDK if that's the bottleneck here, but it could be. You may want to try tessellation, since it works fairly well with characters (although it's kind of difficult to learn). With 4K resolution becoming the norm, 10k characters are going to stick out even more. TL;DR: use tessellation and/or LODs for characters farther away.

     

    Another thing you should try is breaking up the line of sight more. You'll see a lot of open-world designers do this to avoid problems with draw distances (and to help with streaming), since it lets you hide objects even when they're fairly close to the camera.

  12. I think we need to see a little more of the project. It sounds like you are making an object in the editor and expecting it to be available in a local script. The cube variable isn't in scope inside the ball's script.

     

    There are a few ways you can do this. The most direct is to make a script for the cube that assigns it to a global variable; then you can access cube from anywhere (see the sketch below). It matters where you call that distance code, though, because if the cube global hasn't been set yet, you'll be accessing an invalid object.
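
    A minimal sketch, assuming a Lua script on each entity; the global name "cube" is just an example:

    -- Script attached to the cube: publish the entity as a global
    function Script:Start()
        cube = self.entity  -- no 'local', so every script can see it
    end

    -- Script attached to the ball: only use the global once it exists
    function Script:UpdateWorld()
        if cube ~= nil then
            local d = self.entity:GetDistance(cube)
        end
    end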

    • Upvote 1
  13. He may be using forward rendering for that part. Forward rendering (or depth peeling) is actually the proper way to do transparency, but it can get expensive, which is why the transparency shaders take shortcuts that wouldn't work well with water. I think a post-processing effect might be the best bet, though.

  14. I don't really think that water looked that different except for the geometry changing, but maybe it's just me. The problem is that you either have to do multiple render passes or use a post-processing effect to get true transparency for water.

  15. Lighting is different because you need a direction (unless you want ambient light). That is sort of how lighting already works: lights have a range, and AABB intersection tests are used to apply each light only to nearby objects (a sketch of that test is below). An optimized setup for static scenes might even use k-d trees (which still allow some movement). It's an interesting idea, but lights also need to be physically plausible, and shaping the lighting in arbitrary ways breaks the physics of light. What you suggest is probably more appropriate for volumetric fog and environment probes.
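
    For what I mean by the AABB test, here's a minimal sketch in plain Lua; the box layout (min/max tables per axis) is just illustrative:

    -- True if two axis-aligned boxes overlap; each box is a table like
    -- { min = {x=0,y=0,z=0}, max = {x=1,y=1,z=1} }
    function AABBIntersects(a, b)
        return a.min.x <= b.max.x and a.max.x >= b.min.x
           and a.min.y <= b.max.y and a.max.y >= b.min.y
           and a.min.z <= b.max.z and a.max.z >= b.min.z
    end

    -- A light with a range would build its box from position +/- range and
    -- only shade objects whose bounds pass this test.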

     

    Now, for volumetric light, you can do this somewhat, but you still have a light source, and the rays can't be interrupted; you would just need to make a mesh around the light. I personally don't see why that Havok demo makes sense, though. I guess it's useful if you really need a custom area to be lit, but that seems unrealistic to me.

  16. Sorry, I saw the S2 video and thought that's what you were getting at. Correct me if I'm wrong here (which I might very well be, since I'm not an expert in shadow techniques), but there isn't a way to prematurely discard fragments like that, because you don't know where a fragment is until after rasterization, which is far down the pipeline (for either shadow maps or shadow volumes). With shadow maps you still run through pretty much the same rendering pipeline as when rendering everything else, so there's not much you'd be saving. You could possibly save some fragment calculations, but even that's not guaranteed, because fragments are computed in warps (Nvidia) or wavefronts (AMD).

     

    Baking would be a cool feature, but that's an entirely different process. The environmental probe area suggestion seems interesting.

    • Upvote 1
  17. There are a bunch of operators, but they are part of the C standard, so they are not exclusive to shaders. I may make a tutorial on all of them because you can do some neat tricks with them, but there may already be a lot of material out there.

     

    https://en.wikipedia.org/wiki/Bitwise_operations_in_C

     

    The && is a boolean operator, and that makes a difference in many situations. You can get away with the bitwise operators if you are only dealing with booleans (0 or 1), but for any other data types you will get incorrect results for certain inputs. For example, in C, 1 && 2 evaluates to true, but 1 & 2 evaluates to 0 because the two values share no set bits.
