Everything posted by nick.ace

  1. The probes aren't dynamic, so you won't be able to see the player. The SSLR shader won't work because the back of the player will be culled in a third-person game and hidden entirely in a first-person game. Basically, you can't use that shader for anything that isn't currently displayed on camera.
  2. It's because of the problem in this tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/ You add a bias to avoid shadow acne, but the trade-off is that there's nothing you can really do about it other than increase the lighting quality, like gamecreator said. A rough sketch of the biased comparison is below.
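     A minimal CPU-side sketch of the comparison the shadow shader performs; the names and the bias value here are illustrative, not Leadwerks API:

     ```cpp
     // Depth comparison with a bias term, as in the linked tutorial.
     // Without the bias, fragmentDepth and shadowMapDepth are nearly equal on
     // lit surfaces, and floating-point error produces "shadow acne" stripes.
     // With it, shadows detach slightly near contact points ("peter-panning"),
     // which is the trade-off mentioned above.
     float ShadowTest(float fragmentDepth, float shadowMapDepth)
     {
         const float bias = 0.005f; // hypothetical tuning value
         if (fragmentDepth - bias > shadowMapDepth)
             return 0.0f; // in shadow
         return 1.0f;     // lit
     }
     ```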
  3. Idea Fell Apart

     Each line segment is made up of two vertices (pixels) right now, so you have 8 vertices. If you share the vertices between line segments, you will never have gaps. You need to treat the vertex positions (such as (0,9)) as continuous values rather than discrete values. One way to simplify this is to offset the coordinates by (.5,.5), as this is what GPUs have done in the past. The blue rectangle shows the offset coordinates; the yellow fill indicates that a pixel is covered by the top/left edge, and the red indicates the bottom/right. It's not super important for 3D, but for 2D UI it makes more sense.

     Rules: take the floor of the filled-in pixel value, except at the end of the line segment; if the line segment is the bottom or the right, take the ceiling. It's mathematically impossible for gaps or overlaps to form with this set of rules. For rasterizing 3D triangles, barycentric coordinates are used to generalize these rules, but the same idea applies. Either way, you get crisp, accurate edges. A sketch of this fill rule is below.
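     A minimal sketch of the fill rule for a single row of pixels, assuming pixel centers sit at x + 0.5 (the half-pixel offset mentioned above); FillSpan and its arguments are hypothetical names:

     ```cpp
     #include <cmath>
     #include <cstdio>

     // A pixel is filled if its center lies inside [x0, x1): closed on the
     // left/top edge, open on the right/bottom edge. Two spans sharing an
     // endpoint therefore never overlap and never leave a gap.
     void FillSpan(float x0, float x1)
     {
         int first = (int)std::ceil(x0 - 0.5f); // first pixel with center >= x0
         int last  = (int)std::ceil(x1 - 0.5f); // first pixel with center >= x1 (excluded)
         for (int x = first; x < last; x++)
             std::printf("pixel %d\n", x);
     }

     int main()
     {
         FillSpan(0.0f, 9.0f);  // fills pixels 0..8
         FillSpan(9.0f, 12.0f); // continues with 9..11 -- no gap, no double fill
         return 0;
     }
     ```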
  4. Idea Fell Apart

     If you choose to go the non-global route, why not just use an abstract-coordinate shared vertex between line segments? Then you would only have 4 vertices instead of 8, and this issue would be avoided. Then you just offset the top/left by one, or the bottom/right by one. A sketch of the shared-vertex layout is below.
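     A tiny sketch of the shared-vertex layout, with hypothetical names:

     ```cpp
     #include <vector>

     struct Point { float x, y; };

     // Segment i is (points[i], points[i+1]); adjacent segments reference the
     // exact same vertex, so they can never drift apart when scaled.
     std::vector<Point> points = { {0,0}, {9,0}, {9,9}, {0,9} }; // 4 vertices

     // For a closed rectangle outline, the 4th segment reuses points[0]:
     // (points[3], points[0]) -- no duplicated endpoints, no gaps.
     ```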
  5. Yeah, your method is how I would do it as well. The double rendering is too expensive, and you can customize object appearances better. Generally, you want things on your minimap to be highlighted (enemies, objects, players, etc.), which double rendering wouldn't be great at. I don't think I've ever seen an actual minimap done with the render-to-texture method, but I could be wrong. One thing you may want to think about, though, is avoiding too many image draw calls. I don't imagine it being an issue, but if you have hundreds or thousands of images in your minimap, you're going to be doing a ton of overdraw and have performance issues there as well; even then, I would think it's still better than double rendering. In those extreme cases, you would probably use a 2D array with some type of custom shader, but for most games I don't see that being an issue. A sketch of the world-to-minimap mapping is below.
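     A rough sketch of the icon approach, assuming a top-down game and a circular minimap centered on the player; all names here are hypothetical, not engine API:

     ```cpp
     struct MapPoint { int x, y; bool visible; };

     // Project an entity's world XZ position into minimap pixel coordinates.
     MapPoint WorldToMinimap(float worldX, float worldZ,
                             float playerX, float playerZ,
                             int mapSize, float worldRadius)
     {
         float dx = (worldX - playerX) / worldRadius; // -1..1 inside the map radius
         float dz = (worldZ - playerZ) / worldRadius;
         MapPoint p;
         p.visible = dx * dx + dz * dz <= 1.0f;     // cull icons outside the circle
         p.x = (int)((dx * 0.5f + 0.5f) * mapSize); // remap to 0..mapSize pixels
         p.y = (int)((dz * 0.5f + 0.5f) * mapSize);
         return p;
     }
     // Each visible entity then costs one small icon draw call at (p.x, p.y).
     ```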
  6. Idea Fell Apart

     What does the math look like? GPU rasterizers implement techniques specifically to avoid this type of problem, since you would otherwise end up seeing gaps between triangles under the same kind of scaling.
  7. You don't actually hide the entity. You just use a garbage shader that tosses out the fragments on the model, but use a legit shader for the shadows in the main world. In the first-person model world, you would just render the mesh with shadows again (or without, but you may need self-shadowing). You would have to render the model twice (once in each world), but it shouldn't be a huge deal.
  8. Yes. Examples:
     - Traffic tools
     - UI builder tools
     - AI waypoint tools
     - AI behavior trees
     - Animation blending editor
     - Road tools
     - City generation
     - LOD tools
     - New importers/exporters
     - Terrain generation
     - Mesh editors
     - etc.

     Some of these were an issue for me specifically, unless you want to use pivots for all of them. I couldn't even use my own traffic system at a certain point once I had 500 pivots lying around and had to connect them all through the script parameters! Besides, if you depend on a plugin, you know the risks. Also, right now you have a bunch of UI libraries because there's not one great way to do it (let me clarify: they're all great, it's just that they lack an interface in the editor, so some creativity was involved). Wouldn't it be cool if all of those libraries were consolidated?

     The thing is, the map file format doesn't need to be different. The biggest problem is the interface for the things I listed. That's why Workshop code could exist, so you can fix anything that needs to be changed. Or perhaps have a separate repository for all plugins so that anyone can submit pull requests. Not every tool is so complicated that it would break, either. Why would you have to maintain your section of MP3 code? That format doesn't change, so the decoder doesn't change. The playback shouldn't change either.
  9. Why don't you just render the first-person model with a material that has shadows but discards the fragments of the model in the normal shader? Then, you get the best of both worlds. Personally, I would do what Josh suggested though. If you decide to use environmental probes, you're going to be in trouble.
  10. I would have written my Trafficwerks code as an extension if it was supported. Right now, the only thing you can do to mimic extensions is to use pivots (e.g., Aggror's GUI tool). I mean people upload to the Workshop for free all the time, and there are many downloads there. Plus you can make some cool tech demos with extensions.
  11. Ah ok, I didn't know there were no hooks or anything for leaving an entity. Why not program a box intersection test yourself, though? Then you'd have as much control as you would need. You would need to scan through all entities at startup and calculate the highest +x, -x, +y, -y, +z, and -z values for the vertices of a mesh. You can remove elements from a vector: just swap the element you want to remove with the last element and use pop_back(). Don't use the erase() method, though. A vector is just an array behind the scenes, whereas a list is not. A sketch of both ideas is below.
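     A minimal sketch of both suggestions, assuming a simple Vec3 stand-in rather than the engine's vector type:

     ```cpp
     #include <algorithm>
     #include <cstddef>
     #include <vector>

     struct Vec3 { float x, y, z; };

     // Startup scan: compute the axis-aligned extents (+x/-x, +y/-y, +z/-z)
     // from a mesh's vertex positions.
     void ComputeAABB(const std::vector<Vec3>& verts, Vec3& mn, Vec3& mx)
     {
         mn = mx = verts[0];
         for (const Vec3& v : verts)
         {
             mn.x = std::min(mn.x, v.x); mx.x = std::max(mx.x, v.x);
             mn.y = std::min(mn.y, v.y); mx.y = std::max(mx.y, v.y);
             mn.z = std::min(mn.z, v.z); mx.z = std::max(mx.z, v.z);
         }
     }

     // Swap-and-pop removal: O(1), no element shifting like erase(); the
     // trade-off is that element order is not preserved, which is fine for
     // an unordered entity list.
     template <typename T>
     void SwapAndPop(std::vector<T>& v, std::size_t index)
     {
         v[index] = v.back();
         v.pop_back();
     }
     ```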
  12. Isn't the AABB supposed to not be used for collisions? There's nothing really you can do about that. Personally, I would just use collisions for triggers. Otherwise, use some quick rectangular-prism intersection test. In terms of the loop efficiency, it's hard to determine whether that would be a good setup or not depending on the application, but there are probably better techniques to use. You could use a KD-tree so that you don't need to test every entity; IDK how costly it is to set up each frame compared to your list approach, though. I wouldn't use the "list" data structure. Use "vector" instead for the kind of sequential scan you're doing: with "list" you are building a chain of pointers, so you will have fragmented memory and get more cache misses, and your loop will be less efficient. Also, I would avoid using GetDistance() and just square the second term; implement your own distance function without the square root, as in the sketch below.
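     A sketch of both tests; Vec3 and the min/max-corner parameters are illustrative stand-ins, not a particular engine's types:

     ```cpp
     struct Vec3 { float x, y, z; };

     // Distance check without sqrt: compare the squared length against the
     // squared range ("square the second term").
     bool WithinDistance(const Vec3& a, const Vec3& b, float range)
     {
         float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
         return dx * dx + dy * dy + dz * dz <= range * range;
     }

     // Rectangular-prism (AABB) intersection test: boxes overlap only if
     // their intervals overlap on all three axes.
     bool AABBOverlap(const Vec3& minA, const Vec3& maxA,
                      const Vec3& minB, const Vec3& maxB)
     {
         return minA.x <= maxB.x && maxA.x >= minB.x &&
                minA.y <= maxB.y && maxA.y >= minB.y &&
                minA.z <= maxB.z && maxA.z >= minB.z;
     }
     ```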
  13. I think he saw the Site license as being the only one without any restrictions. I'm kind of curious, though, what the difference between the commercial license and the PC license is. Is it just support and extra platforms?
  14. Yes, but you would need to use the C++ edition and implement the networking yourself. A popular library people here use is RakNet. Apparently, it was acquired recently by Oculus (so I guess Facebook?).
  15. I think Roland's referring to the pseudo-PBR stuff that's been going on recently with Leadwerks. I don't have Mari, but try BRDF. I think that's the lighting model that PBR uses.
  16. Is that a resolution supported by your video card/monitor? Use these commands: System::CountGraphicsModes and System::GetGraphicsMode. Basically, the first one gives you a count. You would then use a for-loop to iterate through and print out all of the supported graphics modes, as in the sketch below.
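     A sketch of that loop against the Leadwerks C++ API; the exact return type of System::GetGraphicsMode (assumed here to be an iVec2 width/height pair) is worth verifying against the documentation:

     ```cpp
     #include "Leadwerks.h"
     using namespace Leadwerks;

     int main()
     {
         // CountGraphicsModes gives the number of supported modes; each
         // index can then be queried and printed.
         int count = System::CountGraphicsModes();
         for (int i = 0; i < count; i++)
         {
             iVec2 mode = System::GetGraphicsMode(i); // assumed width/height pair
             System::Print(String(mode.x) + " x " + String(mode.y));
         }
         return 0;
     }
     ```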
  17. No, it won't. In fact, it won't be distorted at different aspect ratios either. The only thing you have to worry about is the UI, if you make one and don't apply scaling.
  18. Yeah, sorry; on second thought, your example wasn't the best one. I was more or less thinking of just silhouettes, and with another buffer you could modify SSAO to ignore changes in depth. Anyway, the heat-haze example would be more appropriate. Yes, that's what I want.
  19. Yeah, but that's slow. You have to do up to twice the transformations on the CPU. You also have to send up to twice as much data from the CPU to the GPU. Also, you don't get to take advantage of the GPU discarding nearby fragments, and you still end up using two buffers (actually, probably all of the buffers, because you have to render the second world). If lights are in the scene, you would have to render all of the lights twice. Basically, it's not fast, which is why transparency has a high performance hit in deferred renderers.
  20. I think the material flags are in the normal buffer if I'm not mistaken, so you can still write a shader to handle a roughness map. It would still be nice to have it built in though.
  21. Yeah, but those are made after everything is rendered (during postprocessing). I want to write to custom buffers from the fragment shader of the models. For heat haze, for example, you would want the effect to be occluded, since it's a depth-based effect; the fragments that are still visible at the end should have the special effect applied to them. Currently you can kind of do this by overloading the alpha channel of the normal buffer if you're careful (because it's used for other things as well, such as decals and selection state for the editor). So, I want to be able to write to FragData3, FragData4, and so on. Another example is your cool x-ray effect: you have to do two passes, right? Well, if you could write to an x-ray buffer instead, you wouldn't have to. You would definitely need another buffer to do this in one pass, because the occluded fragments get discarded by the GPU (unless you mess with the depth buffer, but then that messes up a ton of stuff). A sketch of the buffer setup this implies is below.
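     A sketch of the C++/OpenGL side of adding such a buffer; the engine's actual framebuffer code isn't public, so this only illustrates the standard GL calls involved, and xrayTex is a hypothetical texture holding the custom x-ray/heat-haze data:

     ```cpp
     #include <GL/glew.h>

     // Attach an extra color target so the material shader's fourth output
     // (gl_FragData[3] / FragData3) has somewhere to land.
     void AttachCustomBuffer(GLuint framebuffer, GLuint xrayTex)
     {
         glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3,
                                GL_TEXTURE_2D, xrayTex, 0);

         // List all four outputs so the new attachment is actually written.
         const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                                    GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3 };
         glDrawBuffers(4, buffers);
     }
     ```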
  22. Would it be possible for developers to specify additional custom buffers? It would make effects like heat haze and x-ray vision easy to do.
  23. One thing I really like about Leadwerks materials is how clean and direct the material editor is. Keep up the good work!
  24. Suggestions:
      - You definitely NEED SSAO. It's really a must for any indoor environment (but it's great outdoors too).
      - Add decals all over the place. Otherwise, the environment won't have character.
      - Add bloom to make the lights shine more, but don't turn it up too high.
      - Like others said, make the ambient light darker. I would also avoid making it a grayscale color such as (15,15,15), but that's probably just my personal taste.
      - Add particle effects, unless it's supposed to be super clean.
      - (Optional) DOF. I love it, but it's technically not realistic; it's more of a cinematic effect, really.