Everything posted by Rastar

  1. What are the exact steps to manually set an AABB? Since my mesh patches are displaced in the shaders, the automatically calculated ones will be wrong. I was thinking along these lines:

    local aabb = mesh:GetAABB()
    aabb.min = Vec3(minx, miny, minz)
    aabb.max = Vec3(maxx, maxy, maxz)
    aabb:Update()

Would that work? Do I have to call any of those entity:UpdateAABB() methods afterwards, or will this destroy my manual setting?
  2. Yes, and for seeing procedurally generated stuff in the editor. I am creating meshes in scripts, and 1) I see the result only when running the game, and 2) only see it shaded, but I would prefer wireframe mode to see if the structure is correct. There are also many applications I'm thinking of where I would like to be able to draw a path in the editor using some gizmos, and then create something along that path, e.g. extruded geometry (roads), or make something follow that path (e.g. a camera).
  3. If I check the "Normal map" flag in the texture editor - is the normal map then encoded in some way? For example, with the xy components stored in two channels (green/alpha, maybe) and the third reconstructed as sqrt(1 - g*g - a*a)?
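To illustrate the encoding I mean: if the xy components really were stored in the green/alpha channels (pure speculation on my part, not confirmed Leadwerks behavior), the reconstruction in GLSL would look roughly like this; all names are made up:

    // sketch: reconstruct a unit normal from a two-channel (g/a) encoding
    vec3 decodeNormalGA(sampler2D normalMap, vec2 uv)
    {
        vec2 nxy = texture(normalMap, uv).ga * 2.0 - 1.0; // unpack [0,1] -> [-1,1]
        float nz = sqrt(max(0.0, 1.0 - dot(nxy, nxy)));   // z from |n| = 1
        return normalize(vec3(nxy, nz));
    }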
  4. It has been mentioned before, but better twice than never: it would be fantastic if scripts could be executed in the editor. A simple annotation like "RunInEditor" or so would do. That would open up a plethora of use cases. In addition, the screenshot rendering should then take those scripts into account (right now it seems to be a plain copy of the editor view).
  5. I am scaling planar (i.e. y = 0) mesh patches for a simple terrain using heightmap displacement in the vertex shader. Is there a performance difference between uniform scaling, mesh:SetScale(scale, scale, scale), and non-uniform scaling, mesh:SetScale(scale, 1, scale)? I remember reading somewhere (non-Leadwerks-related) that uniform scaling would perform better, but can't actually see why.
  6. Hi shadmar, thanks for the link. I actually stumbled across that site a couple of weeks ago - pretty nice stuff! Is it really WebGL, though? I thought that is comparable in functionality to OpenGL ES 2? The shader-based terrain generator is pretty interesting as well.
  7. Yes, the terrain height will be set in the shader (in the final version in the tessellation evaluation shader). I've stumbled across a blog where the author is basically sneering at all the geo clipmap implementations doing vertex displacement every frame in the shaders rather than deforming the terrain once on the CPU (see http://nolimitsdesigns.com/tag/terrain-rendering/). This was confusing for me - if you displace the patches on the CPU, aren't they then different vertex buffers? Anyways, I'll try the shader displacement approach first.
  8. Thanks, but don't cheer too soon - I will most certainly need help with a couple of shaders... :-)
  9. Leadwerks has a very nice terrain system that allows terrains of up to 4096x4096 units (in version 3.1; 3.0 goes up to 1024x1024), performs very well, and has a pretty unique texturing method. For most game applications, this is more than enough - filling such an area with objects and interesting gameplay is already quite a challenge. But some game ideas still require larger terrains. In my case, I am pondering something similar to a racing game, taking place in real-world areas, where a single level features tracks of between 50 and 200 km. My testbed is the Mediterranean island of Mallorca, which is about 80x80 km in size. The image above shows its general topology, with its digital elevation data loaded into World Machine, a very nice terrain generator (more on this in another post). Rendering something like this with sufficient close-view detail, while still providing panoramic views (e.g. down a mountain pass) at acceptable frame rates, is quite a challenge. Fortunately, a couple of algorithms have been developed for just that. I'll list a few that I know of, and would of course be interested to hear of others.

Chunked LOD

Pretty old but still in heavy use is the Chunked LOD algorithm of T. Ulrich (see http://tulrich.com/geekstuff/chunklod.html, where you can also find C++ sources of an implementation). In its typical incarnation it works off a heightmap and generates a quadtree of meshes from it, based on a simple level-of-detail calculation. At the root of this tree is a very coarse representation of the complete terrain, which is divided into 4 child nodes of the same basic patch size (e.g. 256x256) and correspondingly higher level of detail. Each of those child patches again has four children, and so on. These patches are created at design time and are tessellated according to the underlying terrain structure - fewer vertices for flat planes, more for rugged mountains. At run time, the Chunked LOD algorithm selects levels from the quadtree based on a simple screen-space error calculation. This leads to patches of different detail levels lying next to each other, and correspondingly to T-junctions - vertices of one patch lying on an edge of the next level. The resulting visible cracks in the surface have to be covered, usually by adding a small, angled skirt around every patch. In addition, to avoid sudden popping of vertices when switching to lower- or higher-detail patches, vertices are slowly "morphed" to their final position.

The Chunked LOD method, though pretty old, still has a few advantages:
- the meshes are optimally tessellated, so you don't create vertices where you don't need them (flat areas)
- it works on legacy hardware, even mobile devices (though it's of course questionable whether rendering such a large terrain on a phone really makes sense). Most other algorithms displace a heightmap in the vertex shader, and the corresponding GLSL functions (texture or textureLod) aren't available in the vertex shader on lower-end devices.

But, as always, there are also significant disadvantages:
- the preprocessing step (creating the mesh quadtree) is quite time-consuming
- a lot of data has to be stored on disk (all three coordinates of the vertices plus additional data for the morphing algorithm)
- all those meshes have to be transferred from the CPU to the GPU

Geo Clipmapping

Among the algorithms using heightmap displacement in the vertex shader, the geometry clipmaps approach of Losasso and Hoppe is a particularly interesting one (see http://research.microsoft.com/en-us/um/people/hoppe/geomclipmap.pdf or their GPU Gems 2 article http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter02.html). They construct a set of base mesh patches arranged in concentric rings around the viewer, with higher vertex density close to the viewer and fewer vertices farther away. Those base patches are reused, being only scaled and moved, so very few vertex buffers actually have to be sent to the GPU. Together with an algorithm for updating the heightmap used for vertex displacement (thus taking care of viewer movement), this is a very efficient rendering method, since it uses only a few static vertex and index buffers and most work is done on the GPU (a minimal sketch of the displacement idea follows at the end of this post). Among the six different patch types used there is also a strip of degenerate triangles (i.e. triangles whose three vertices lie on a line). Their job is to cover T-junctions between adjacent rings. Remaining discontinuities in geometry (and texturing) are covered in the shaders by smoothly blending between values in a transition region. In their paper, the authors also describe an interesting method for efficiently compressing and reconstructing heightmaps, which can be used for any heightmap-based approach and which I'll cover in a separate post. As an example, they managed to compress the USGS height data for the complete USA (a dataset of 40 GB) to just 355 MB. As a side note, if I remember correctly, Josh once mentioned that the new "virtual megatextures" for the Leadwerks terrain use a similar approach.

Advantages:
- pretty fast rendering of large terrains
- minimal preprocessing
- small data size - just the heightmap, plus only very few and small static buffers on the GPU
- good visual continuity

Disadvantages:
- doesn't run on legacy hardware (Shader Model 2.0 and below)
- fixed amount of triangles, so a flat plane might be rendered using thousands of triangles while a rugged mountain might be represented by large, stretched triangles

CDLOD

Then there is the "Continuous Distance-Dependent Level of Detail" approach (see http://www.vertexasylum.com/downloads/cdlod/cdlod_latest.pdf). I must confess I haven't looked too deeply into this. As I understand it, it is somewhat of a mixture between the quadtree structure of Chunked LOD and the heightmap displacement of geo clipmapping.

Real-time tessellation on the GPU

Finally we're coming to the title of this blog... Tessellation on the GPU can also be used for terrain rendering. While not a "large scale" approach in itself, it can of course help to improve other algorithms by doing an optimal tessellation at run time, based on both camera position and terrain structure. In addition, the generated mesh can be made water-tight, so you don't have to deal with cracks and T-junctions. An example of such an algorithm was described by N. Tatarchuk in the ShaderX 7 book ("Dynamic Terrain Rendering on GPUs using Real-Time Tessellation"). They start with a coarse, regular mesh and then render the terrain in two passes: first they render the heightmap as a point cloud to a buffer, then use that information in a second pass to optimally tessellate the terrain.

OK - what's next?

What I'm going to try over the next couple of blog posts is a mixture of two approaches: a geometry clipmapping approach, but with a simpler patch structure and coarser grid. Tessellation will then be used to make the mesh water-tight and create an optimally structured mesh. I will move slowly, because many of the required techniques are still new to me, but if this is of interest to you - stay tuned!
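And here is the promised sketch: a minimal heightmap-displacement vertex shader in the clipmap style - a flat base grid scaled/moved into place and displaced by a heightmap lookup. All uniform and attribute names are illustrative, not the actual Leadwerks conventions:

    #version 400
    // Minimal heightmap-displacement vertex shader (sketch, assumed names).
    uniform sampler2D heightMap;          // terrain heights in [0,1]
    uniform mat4 projectionViewMatrix;    // illustrative name
    uniform mat4 entityMatrix;            // per-patch scale/translation of the shared flat grid
    uniform float heightScale;            // world-space height range

    in vec3 vertex_position;              // flat (y = 0) grid vertex

    void main()
    {
        // scale/translate the shared base patch into place
        vec4 worldPos = entityMatrix * vec4(vertex_position, 1.0);

        // derive heightmap coordinates from the world-space xz position
        vec2 uv = worldPos.xz / vec2(textureSize(heightMap, 0));

        // displace; textureLod is required since the vertex stage has no
        // derivatives for automatic mip selection
        worldPos.y = textureLod(heightMap, uv, 0.0).r * heightScale;

        gl_Position = projectionViewMatrix * worldPos;
    }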
  10. I am not aware of any restrictions, but the code you see and edit in the shader editor isn't the whole story. For one, the version information is kept elsewhere (actually, if you open the *.shader files in a text editor, you'll see that there are several shader variants for different OpenGL versions). But more importantly, Leadwerks 3.1 uses a deferred renderer, while most shader examples you'll find in books or on the web are written for a forward renderer. While the GLSL syntax and basic principles are of course the same, there are some differences. Most notably, where at the end of a forward shader you write a value to gl_FragColor, thereby defining the final color of every fragment, in a Leadwerks 3.1 shader you fill four buffers (fragData0 to fragData3; for their contents see that "shader uniforms" thread). After that, Leadwerks will do the lighting calculations etc. and the final shading using those buffers.
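To illustrate the difference, a minimal sketch of such a deferred fragment shader. The buffer layout shown here (diffuse, normals, emission, flags) is only an assumed example for illustration - see the "shader uniforms" thread for the actual Leadwerks 3.1 contents:

    #version 400
    uniform sampler2D diffuseMap;

    in vec2 texcoord;
    in vec3 worldNormal;

    // instead of one gl_FragColor, four G-buffer targets are filled
    out vec4 fragData0;   // e.g. diffuse color (assumed layout)
    out vec4 fragData1;   // e.g. packed normal, something in alpha (assumed)
    out vec4 fragData2;   // e.g. emission (assumed)
    out vec4 fragData3;   // e.g. material flags (assumed)

    void main()
    {
        fragData0 = texture(diffuseMap, texcoord);
        fragData1 = vec4(normalize(worldNormal) * 0.5 + 0.5, 0.0);
        fragData2 = vec4(0.0);
        fragData3 = vec4(0.0);
    }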
  11. Most code for older OpenGL versions should still work under OpenGL 4. Control and evaluation are actually the two stages of the tessellation shaders, the first one defining the rules for the tessellation, the second one being executed after the tessellation has been done. Unfortunately there isn't a lot of documentation about shaders in Leadwerks. Look for a recent thread called "Shader uniforms"; there you will find variable names for the values that Leadwerks passes into the shaders. I think it's best to start by looking at existing shader code (like the simple diffuse shader) and modifying it to get a feeling for things. Sorry for the short answer, but I only have limited internet access right now...
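To show what the two stages look like in GLSL, here is a minimal (and deliberately dumb, fixed-level) control/evaluation pair - not Leadwerks code, just the bare OpenGL 4 mechanics:

    // --- tessellation control shader: defines the rules (tess levels) ---
    #version 400
    layout(vertices = 3) out;

    void main()
    {
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
        if (gl_InvocationID == 0) {
            gl_TessLevelInner[0] = 4.0;   // fixed level; real code would
            gl_TessLevelOuter[0] = 4.0;   // compute this from distance,
            gl_TessLevelOuter[1] = 4.0;   // screen-space error etc.
            gl_TessLevelOuter[2] = 4.0;
        }
    }

    // --- tessellation evaluation shader: runs after the tessellation ---
    #version 400
    layout(triangles, equal_spacing, ccw) in;

    void main()
    {
        // interpolate the new vertex from the patch corners
        gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                    + gl_TessCoord.y * gl_in[1].gl_Position
                    + gl_TessCoord.z * gl_in[2].gl_Position;
    }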
  12. Yes, or put differently, you create textures from the high-poly model that make the low-poly one still look detailed. This is especially interesting for the normal map. Edit: For the baking there is also http://www.xnormal.net/1.aspx, which is free, but you need the low-poly model first.
  13. Be aware that the Turbosquid model isn't animated, only rigged, so you'd have to do that on your own.
  14. It seems shaders currently *have* to be inside the "Shaders" folder. I would consider this a bug, too.
  15. Most mobile devices run OpenGL ES 2, with only the very latest generation (iPhone 5s, some Android devices running 4.3 and upwards) running OpenGL ES 3. I think the latter is somewhat comparable to desktop OpenGL 3.3, but it's definitely less powerful than OpenGL 4 (e.g. no tessellation or geometry shaders). Also, I think Josh mentioned he won't be charging for the mobile add-ons in Leadwerks 3.1 because they are basically the 3.0 add-ons now running in 3.1. The interesting question is what happens afterwards, e.g. if Josh develops a deferred renderer for mobile, and then how many of the desktop shaders might also run on mobile.
  16. If I remember correctly, there will be a custom post-processing pipeline that is defined on your camera, both in the editor (the label is already there under the camera properties) and in code.
  17. I don't think a full-fledged, node-based editor should be a high priority. And it shouldn't replace manual shader coding, because you would lose a lot of functionality. What would be nice, though, is something like what Unity (and the UDK editor as well) does, where you can link shader uniforms to material properties and then manipulate those in the material editor. Sliders are especially nice for this, because it's a lot easier to move a slider until the effect looks right in the material view than to edit raw floats.
  18. Ah, but that is my NdotH formula from above, isn't it? Great, thanks!
  19. I was hoping you would jump in... Oh, OK, so this means I can only control the relative contributions of the three color channels to a material's specularity, but cannot really control e.g. the size of the specular dot (meaning: whether the material is shiny or matte)?
  20. I am used to a calculation of specular reflection along the lines of pow(saturate(NdotH), specularHardness), maybe multiplied by an additional factor. Yet when I look into the provided shaders, I see nothing of the sort - just a multiplication of the specular texture by the specular color, and then some sort of packing of this into frag1.a. Where is my usual specular reflection?
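For reference, this is the conventional Blinn-Phong term I mean, as a GLSL sketch (note GLSL has clamp rather than HLSL's saturate):

    // N = surface normal, L = direction to light, V = direction to viewer,
    // all normalized. specularHardness controls the highlight size
    // (high = small shiny dot, low = broad matte sheen).
    float blinnPhongSpecular(vec3 N, vec3 L, vec3 V, float specularHardness)
    {
        vec3 H = normalize(L + V);                  // half vector
        float NdotH = clamp(dot(N, H), 0.0, 1.0);   // clamp = saturate
        return pow(NdotH, specularHardness);
    }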
  21. After cluttering the regular Leadwerks forums with irregular posts and screenshots about terrains, tessellation and the like, I thought it might be better to collect those in a blog where I can happily mumble to myself without distracting anyone...

So what's this about? Well, from day one of my (not so long ago) foray into game development I have been interested in creating large outdoor scenes. Now, most game engines don't support the required features for that out of the box, and I guess for good reason: rendering large outdoor areas is difficult and resource-consuming, while playing in them tends to be a bit - boring. But I still find this to be a fascinating subject, and I also have a game in mind that requires large terrains (and hopefully won't be that boring...). And Leadwerks with its open low-level APIs makes it possible to implement some of the needed features myself.

And why the blog's title, "Tessellate This!"? Well, the new 3.1 version of Leadwerks does not only come with a new renderer, it also provides complete access to the additional shader stages of OpenGL 4, namely the tessellation control and evaluation shaders. That is something you won't find in many other engines, and it enables the implementation of some interesting algorithms for rendering large terrains etc.

On my laundry list of things that I'd like to do (and mumble about) are:
- simple shader programming for dummies (like myself)
- algorithms for large terrains (not virtual globes, but scenes of about 100x100 km)
- rendering forests
- atmospheres, clouds
- roads
- water, especially oceans

As I said, I'm still pretty new to game dev, so don't expect any expert advice. But if you like, you might follow me stumbling around, making some mistakes so you don't have to, and maybe producing the odd nice screenshot. So, here I go - mumblemumble...
  22. OK, I searched a bit further, and this seems to be common among game engines: while OpenGL supports 16 bits per channel (e.g. 64-bit RGBA textures), most engines just allow up to 8 bits per channel. I am calculating the normal map for a heightmap-displaced terrain in the fragment shader using central differencing, and for this 8 bits just isn't enough (it produces ugly steps in the terrain). However, World Machine can export heightmaps in a "Povray TGA" format, where the 16 bits are spread over the red and green channels. So I went ahead and used that (a decoding sketch follows below). As a comparison, this is using an 8-bit heightmap, and here I am using the 16-bit/2-channel format. Pretty amazing what a normal map (even a calculated one) can add - the base mesh for this terrain is just 65x65 vertices, no tessellation or anything. Since I'm calculating the normals in the fragment shader - is there a way I could render this to a texture and sample the texture later on? Using the current Lua API?
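This is the decoding sketch mentioned above - reassembling the 16-bit height from the two 8-bit channels. I'm assuming red = high byte, green = low byte here; if the byte order is the other way around, just swap the channels:

    // decode a 16-bit height spread over the red/green channels ("Povray TGA")
    float decodeHeight16(sampler2D heightMap, vec2 uv)
    {
        vec2 rg = texture(heightMap, uv).rg;              // each channel = byte / 255
        return (rg.r * 65280.0 + rg.g * 255.0) / 65535.0; // 65280 = 255 * 256; result in [0,1]
    }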
  23. My problem is not that I can't create a .raw file and import that for the Leadwerks terrain, but rather that I would like to do heightmap displacement myself. So I need to get that texture to the shader, and 8bit resolution isn't enough. I guess I could encode this in two channels, but maybe there's an easier way.
  24. Ah, you're referring to the clipmader.shader, right? This is reading from the red channel - so is this an 8bit or 16bit texture? And if the latter, what do I have to do to import and use one myself? EDIT: Sorry, talking rubbish, of course the heightmap sampling's taking place in terrain.shader. Either wasn't looking or thinking straight...
  25. OK, I see. @dude: What do you mean by .raw? Like raw image data? That doesn't work either. @Josh: I see in the terrain.shader you get the terrain height from the second row of the entity matrix. So I guess this is where the 16-bit raw data from imported heightmaps goes, spread over two channels or so?