Posts posted by nick.ace

  1. @gamecreator It looks noisy because it casts a massive number of rays for the raycasts. People often use cone tracing with a mipmap chain instead of scattering rays, because of the cache thrashing and the ridiculous number of memory lookups. "GPU Pro 5" has a nice chapter about this called "Hi-Z Screen-Space Cone-Traced Reflections." (There's a rough sketch of the idea at the end of this post.)

    Josh's post is a pretty interesting demo though.
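
    For illustration, here's a minimal GLSL sketch of the core idea, assuming a prefiltered (mipmapped) scene color buffer and that a Hi-Z (or even linear) ray march has already found the hit point elsewhere; the names, the roughness-to-cone-angle mapping, and the single-lookup simplification are my assumptions, not code from the chapter or from Josh's demo:

    // One filtered lookup from a mipmapped color buffer stands in for many
    // scattered rays: rougher surface -> wider cone -> larger footprint at the
    // hit -> higher (blurrier) mip level.
    uniform sampler2D colorbuffer;   // assumed: scene color with a full mip chain

    vec3 ConeSampleReflection(vec2 hitUV, float hitDist, float roughness)
    {
        float coneTan = tan(roughness * 0.5);            // assumed roughness->cone-angle mapping
        float footprintPx = hitDist * coneTan * float(textureSize(colorbuffer, 0).x);
        float mip = log2(max(footprintPx, 1.0));
        return textureLod(colorbuffer, hitUV, mip).rgb;  // one lookup instead of many raycasts
    }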

  2. @Josh Yeah, I know, but I meant that the OP could place instances (models) by himself for destructible vegetation.

    @tournamentdan I don't remember seeing any geometry shaders for the vegetation last time I checked. I doubt that would be a good idea though. I think I remember Josh trying that, and he said something about it not having great performance.

  3. You'll need to modify that script so they don't chase the player. There should be a "GoToPoint()" or "Follow()" command within that script. You need to change that so it targets a hiding spot instead.

    5 minutes ago, karmacomposer said:

    I was able to lower the amount of hurt to 1 and I assume a value of 0 will not hurt the player at all - but the rats (and spiders) just stop after - never to attack again.  I want them to stop for a moment or two and then come after you if you are close enough for them to 'see' you.

    I thought that you wanted them to move away from the player (according to your original post)?

  4. The map size isn't the only factor here. Many linear games use streaming, and those map sizes aren't huge. I've had issues in the past with performance after filling out a 2k map with a decent number of unique entities. Of course, the terrain itself plays a role, and I've seen a few topics complaining about memory issues with 4k maps. Unless those get solved, having maps that are 16k, 32k, etc., won't make a difference in increasing the number of open-world/larger games in Leadwerks.

    It largely depends on how many unique objects you have close to each other, and that includes the terrain, since the terrain is a totally unique object (you can't "instance" tiles of the terrain). Streaming also helps with loading screens (it lets you do something while your game loads, such as play a startup animation or let the player play a mini-game).

  5. You have two practical options IMO:

    Option 1

    Use one character controller per group of rats.

    Option 2

    Create your own pathfinding system just for the rats. Obviously, this requires some setup, but depending on what you want to accomplish, this might be an option.

     

    With option 1, you would basically add a bunch of rats to a character controller. If you look at the AI script that comes with Leadwerks, you should see that it follows the player.

    Instead, you will need to set up a bunch of points that will be "hiding" spots for the rats. When the player gets close enough to the rats ("GetDistance()"), they need to use "GoToPoint()" to go to one of those hiding spots. Which hiding spot they go to depends on how you want the AI to behave (is the closest point OK, or do the rats need to get as far as possible from the player?).

  6. Destructible vegetation is a very complex topic. I doubt the vegetation system as-is can support it, because it uses transform feedback to do GPU culling; if vegetation could essentially move in any way, that vastly complicates how the culling is handled (and may even make it impossible in some cases).

    That being said, you could place objects and take advantage of instancing (which is automatic in Leadwerks) to make destructible vegetation. You would need to place vegetation by hand. What did you have in mind exactly?

  7. Leadwerks allows up to 16 (single-layer) textures to be bound to a material. You can bind your own OpenGL texture array to one of these units, giving you at least 256 textures (I think that's the minimum in the specification). They all must be the same resolution, have the same number of mipmap levels, and be the same format though.
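
    As a rough illustration of the shader side (not actual Leadwerks shader code; the uniform and varying names are assumptions), sampling such an array looks like this:

    #version 400
    // One texture unit, many layers: the third texcoord component selects the layer.
    // Filtering and mipmapping work per layer, which a texture atlas can't give you.
    uniform sampler2DArray texturearray;   // assumed name; bound to one of the 16 units
    in vec2 ex_texcoords0;
    flat in float ex_layer;                // assumed: layer index passed from the vertex stage
    out vec4 fragData0;

    void main()
    {
        fragData0 = texture(texturearray, vec3(ex_texcoords0, ex_layer));
    }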

  8. @Rick This won't help much with performance (and can actually worsen it). You aren't saving the CPU processing caused by draw calls/state changes, and you still have to go through vertex shading, tessellation, geometry shading (unless you discard there, but then you pay geometry shader overhead), and rasterization. If you discard in the fragment shader (because the terrain quads are too large or something), then you compromise early-z, a GPU hardware optimization that prevents fragment shaders from running on occluded pixels. In fact, if you are rendering the terrain last, it can be faster to let it render normally than to try to discard it in a shader.
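
    To illustrate the last point (a hypothetical sketch, not engine code): the mere presence of discard in a fragment shader disables early-z for that draw call, so occluded terrain pixels still pay for the shader before being thrown away.

    #version 400
    uniform sampler2D maskmap;        // assumed: coverage mask (1 = keep, 0 = cut away)
    in vec2 ex_texcoords0;
    out vec4 fragData0;

    void main()
    {
        // This branch is what breaks early-z: depth rejection can no longer happen
        // before the shader runs, because the shader itself decides pixel survival.
        if (texture(maskmap, ex_texcoords0).r < 0.5) discard;
        fragData0 = vec4(0.5, 0.5, 0.5, 1.0);   // stand-in for the real terrain shading
    }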

  9. I'm not sure if you got anyone to work on this, but I would suggest replacing one of Leadwerks' texture unit bindings with your own binding. This way you can support texture arrays, which allow you to have 256+ textures on a single texture unit. You also get sampling modes and mipmapping to work correctly (which you can't get with a texture atlas).

  10. I switched the two around:

    tangent = normalize(vec3(2.0, center_r - center_l, 0.0));
    bitangent = normalize(vec3(0.0, center_d - center_u, 2.0));
    normal = cross(tangent, bitangent);
    

     

    Also, I put 2.0 for the distance since I wasn't sure what scale your heightmap was at.
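
    In case it's useful, here's a hedged sketch of where the center_* samples could come from in GLSL; the heightmap uniform name and the assumption that neighboring texels are 1.0 world unit apart (hence the 2.0) are mine, not from the original shader:

    uniform sampler2D heightmap;   // assumed name

    vec3 HeightmapNormal(vec2 uv)
    {
        float center_l = textureOffset(heightmap, uv, ivec2(-1,  0)).r;
        float center_r = textureOffset(heightmap, uv, ivec2( 1,  0)).r;
        float center_u = textureOffset(heightmap, uv, ivec2( 0, -1)).r;
        float center_d = textureOffset(heightmap, uv, ivec2( 0,  1)).r;
        vec3 tangent   = normalize(vec3(2.0, center_r - center_l, 0.0));
        vec3 bitangent = normalize(vec3(0.0, center_d - center_u, 2.0));
        // Depending on your handedness/UV origin, the result may need to be negated.
        return normalize(cross(tangent, bitangent));
    }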

  11. There's an error in the geometry shader where you define N. You should make the last component 0.0 instead of 1.0, because with 1.0 the homogeneous component ends up wrong (which throws the projection off). I didn't have any issues with moving the camera around, just the mesh, but changing the homogeneous coordinate fixes that.
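
    Concretely, something along these lines (a hedged sketch with assumed variable names, not your actual shader):

    // A direction gets w = 0.0 so the matrix's translation is ignored; a position
    // gets w = 1.0. Using 1.0 for N makes the sum's w component wrong and skews
    // the projection.
    vec4 N = vec4(normalize(faceNormal), 0.0);   // direction
    vec4 P = vec4(vertexPosition, 1.0);          // position
    gl_Position = projectionMatrix * viewMatrix * (P + N * extrudeDistance);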

     

    Just FYI, you should be careful with outputting more primitives than you input. GPUs have internal buffers to store geometry shader results, but these tend to fill up pretty easily, especially with higher primitive outputs (hence why tessellation shaders became a thing). So you may experience performance issues.

  12. Why can't you read in the emission texture and do blending manually?

     

    Actually, why do you even have an emission color at all? It usually doesn't make sense to store it separately, since materials generally should have the same emission and diffuse color. I haven't seen other deferred renderers do this. You would also save 3 channels here: two of them could then be used for a velocity buffer for motion blur, and the last one would be free for something else (perhaps emission power?). Or better yet, swap the emission power and the roughness, since roughness probably has a larger impact on shading quality.
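
    For example, here's a hedged sketch of the kind of G-buffer packing I mean (the channel assignments and names are my assumptions, not the engine's actual layout):

    #version 400
    in vec2 ex_texcoords0;
    in vec3 ex_normal;
    in vec2 ex_velocity;                 // assumed: screen-space motion vector for motion blur
    uniform sampler2D diffusemap;        // assumed names
    uniform float emissionpower;
    uniform float materialroughness;
    layout(location = 0) out vec4 fragData0;
    layout(location = 1) out vec4 fragData1;
    layout(location = 2) out vec4 fragData2;

    void main()
    {
        vec4 diffuse = texture(diffusemap, ex_texcoords0);
        // Emission reuses the diffuse color, so only a scalar power is stored.
        fragData0 = vec4(diffuse.rgb, materialroughness);
        fragData1 = vec4(normalize(ex_normal) * 0.5 + 0.5, emissionpower);
        fragData2 = vec4(ex_velocity, 0.0, 0.0);   // two freed channels -> velocity buffer
    }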

  13. I like Marty's suggestion if you're on a budget. AMD products do tend to be less energy efficient if that's a concern, although APUs should do better there. I would still upgrade your CPU if you can, though. Aside from the low clock speed (2.5 GHz) and having only two cores, your current CPU is built on a much larger transistor (process) size than current CPUs. This is a good, albeit very technical, answer on why that matters:

    http://superuser.com/questions/808776/whats-the-difference-between-mobile-and-desktop-processors

     

    Also, keep in mind that modern CPUs are very complex, and the architectural changes in each manufacturing generation can be very large.

     

    The only suggestion I have is to get an Nvidia card over AMD because of the issues AMD users have had for what seems like forever now.

     

    I disagree with this. If AMD drivers are problematic, then by that logic wouldn't it be better to develop your game on AMD hardware so that you can find bugs before your users do?

  14. I doubt Shadowplay records at a variable framerate; it probably drops to a fixed lower one. So if you're getting around 45 FPS, as in the other thread you commented on, it likely samples at 30, and the recording doesn't show the stuttering you would normally get with V-sync enabled. Turn off V-sync if you don't want that stuttering.

  15. Actually, volumetric shadows (or lighting) do something like this by deforming meshes using vertex shaders. However, that's way too complicated for this.

     

    You'll want to avoid casting rays from the player. Evenly spaced angles will generate too many rays for regions where there's no target. Do the inverse instead: make some sort of box around the targets, and then cast picks at various intervals between the top and the bottom of the box. That will give you a decent estimate; otherwise, there's an infinite number of rays you would need to check.

     

    Pick doesn't stop at the first object. Internally it generates a list of objects since it is calculated out of order. You'll have to use multiple picks no matter what.

  16. I didn't realize the end of the hall was a wall. You have to make all of the walls have a certain amount of thickness. Overlapping walls won't fix it unless the walls themselves have thickness.

  17. ^lol thought that comment was talking about the edge in the picture XD

     

    Have you tried increasing the lighting quality even more? Does it only happen when you move away from the edge, or does it happen anywhere? Also, do you have a directional light as well?

  18. You can't really start loading assets when you get within a range (that's how streaming is usually used) without causing some sort of noticeable lag, but unloading assets should be pretty much instantaneous. There's a command to load a world, but it doesn't work in the background. That being said, one user got it somewhat working, but I haven't really heard much about it. Either way, it's impossible right now to send the models to the GPU in the background (streaming).
