Posts posted by nick.ace
-
Well, && is a logical operator, not a bitwise one; were you setting the variable with a single & instead? I wish the code for the model shaders used bitwise operations instead of addition, because I think it would make more sense.
What I was saying about subtracting mainly applies when the total of the flags is less than 10. For example (using 8 bits instead of a 32-bit int):
2 - 10 = -8
00000010 - 00001010 = 11111000
Now you've just set a bunch of flags by accident, and the lighting shader will pick them up, because it checks for flags with bitwise operations rather than subtraction.
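Here's a minimal C++ sketch of the underflow described above. The flag names and values are assumptions for illustration, not the actual Leadwerks flag constants:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical flag values, mirroring the kind of flags stored in the
// normal buffer's alpha channel (names/values are assumptions).
constexpr uint8_t FLAG_SELECTED = 2;   // 00000010
constexpr uint8_t FLAG_DECAL    = 8;   // 00001000

// Wrong: subtracting a flag value that isn't fully set underflows and
// turns on a bunch of unrelated bits.
uint8_t clear_by_subtraction(uint8_t flags, uint8_t value) {
    return flags - value;   // e.g. 2 - 10 wraps to 248 = 11111000
}

// Right: mask the bit(s) off; other flags are untouched either way.
uint8_t clear_by_mask(uint8_t flags, uint8_t value) {
    return flags & ~value;
}

// How a bitwise-style lighting shader would test a flag.
bool has_flag(uint8_t flags, uint8_t flag) {
    return (flags & flag) != 0;
}
```

After the subtraction, a bitwise check sees flags that were never meant to be set; the masked version leaves everything else alone.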
So far, there isn't a wiki page on it, but I can make one. Also, if you or anyone else wants me to make tutorials, I can do that. IDK how useful people find them. They do take time to make, but I like doing it, just not sure how good they are because I haven't gotten a lot of feedback.
Edit:
Just created a wiki page:
-
Use the normal-recalculation option in the model editor and select the angular threshold option (Tools->Calculate Normals). I find that this makes objects with sharp faces look more realistic.
http://www.leadwerks.com/werkspace/page/tutorials/_/models-and-animation-r8#section4
-
Ok, my bad. The material flag has to contain a 2 to be selected. Don't subtract 10, though: use bit operations, or you can go negative and accidentally set a bunch of flags. You could just set the alpha channel to 0, but I would look at what the example shaders do, since they set a bunch of other useful flags too. Don't comment it out, though, because then the lighting gets screwed up; the normals are needed for good lighting.
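For context, the flags survive the trip through the alpha channel because an 8-bit normalized channel round-trips small integers exactly. A sketch of that round trip (the scaling convention here is an assumption about how the shaders encode it):

```cpp
#include <cassert>
#include <cstdint>

// What a model shader effectively does when it writes integer flags
// into a normalized alpha channel (fragData1.a).
float encode_flags(uint8_t flags) {
    return flags / 255.0f;
}

// What the lighting shader effectively does when it reads them back.
uint8_t decode_flags(float alpha) {
    return static_cast<uint8_t>(alpha * 255.0f + 0.5f);  // round to nearest
}
```

So writing a clean bit pattern (like 2 for the material flag) on one side is readable exactly on the other; arithmetic accidents like the subtraction above are also preserved exactly, which is why they corrupt the flag checks.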
-
Yeah, no problem! I don't actually show how to alter the lighting shader in a tutorial, but I mentioned it in a video because it took me forever to figure out why everything was red when I was overriding Leadwerks lighting lol. That's a very Leadwerks-specific thing though.
-
Oh I see, my bad. I thought you meant that Leadwerks would have a few compiled versions and that you could set up your program using CMake.
-
To provide a CMake project, Josh would have to provide the Leadwerks source code; I don't think he wants to do that.
Why? CMake just sets up projects; it doesn't compile anything. Just link the Leadwerks DLLs and you should be all set.
http://stackoverflow.com/questions/17225121/how-to-use-external-dlls-in-cmake-project
-
fragData1 is the normal buffer. The alpha channel of the normal buffer is used to store flags such as the selection state and decal stuff. Are any other writes to fragData1 happening? I thought that by default, if you didn't write to it, the unselected flag wasn't set (so the object would be treated as selected); in my lighting tutorial I remember having to modify the lighting shader to prevent this from happening. Maybe that has changed, though.
-
Without looking at the GLSL code, it's probably not setting the material flag correctly. Look at the examples for model shaders. Without this flag explicitly set, the lighting shader will by default assume that it's "selected." This is actually how objects are colored when you select them in the editor.
-
I've used both, and I honestly don't prefer one over the other much, but GCC is, imo, easier to use. Porting code might be an issue if a developer tries to support both Linux and Windows. From a quick search:
http://stackoverflow.com/questions/31529327/c-is-it-worth-using-gcc-over-msvc-on-windows
http://stackoverflow.com/questions/8029092/gcc-worth-using-on-windows-to-replace-msvc
-
Yes, that's the only way for now.
-
The probes aren't dynamic, so you won't be able to see the player. The SSLR shader won't work because the back of the player will be culled in a third-person game and hidden in a first-person game. Basically, you can't use that shader for anything that isn't currently displayed on camera.
-
It's because of the problem in this tutorial:
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/
You add a bias to avoid shadow acne, but the trade-off is that there's not much you can really do about it beyond increasing the lighting quality, like gamecreator said.
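The depth comparison at the heart of that tutorial can be sketched in a few lines. The numbers below are illustrative, not from any real shadow map; the point is that precision error in the stored depth causes false self-shadowing (acne) without a bias, and the bias absorbs it:

```cpp
#include <cassert>

// Core of a shadow-map test: a fragment is lit if it is no farther from
// the light than the depth recorded in the shadow map, plus a bias.
// Without the bias, a surface can fail against its own (slightly
// imprecise) stored depth and shadow itself: shadow acne.
bool lit(float storedDepth, float fragDepth, float bias) {
    return fragDepth <= storedDepth + bias;
}
```

The flip side of the bias is that shadows detach slightly from their casters ("peter-panning"), which is the part you can't really fix except by raising shadow resolution/quality.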
-
Yeah, your method is how I would do it as well. The double rendering is too expensive, and you can customize object appearances better. Generally, you want things on your minimap to be highlighted (enemies, objects, players, etc.), which double rendering wouldn't be great at. I don't think I've ever seen an actual minimap done with the render-to-texture method, but I could be wrong. One thing you may want to think about, though, is avoiding too many draw calls for images. I don't imagine it being an issue, but if you have hundreds or thousands of images in your minimap, you're going to be doing a ton of overdraw and will have performance issues there as well; still, I'd expect it to beat double rendering. In those extreme cases, you would probably use a 2D array with some kind of custom shader, but for most games I don't see this being an issue.
-
Yes, that's correct.
-
But hidden entities won't cast shadows, will they?
How do you get your shadows from the first world render on top of the render from the second world?
You don't actually hide the entity. You just use a garbage shader that tosses out the fragments of the model, but use a legit shader for the shadows in the main world. In the first-person-model world, you would just render the mesh with shadows again (or without, though you may need self-shadowing). You would have to render the model twice (once in each world), but it shouldn't be a huge deal.
-
Yes. Examples:
- Traffic tools
- UI builder tools
- AI waypoint tools
- AI behavior trees
- Animation blending editor
- Road tools
- City generation
- LOD tools
- New importers/exporters
- Terrain generation
- Mesh editors
- etc.
Some of these were an issue for me specifically, unless you want to use pivots for all of them. I couldn't even use my own traffic system at a certain point, once I had 500 pivots lying around and had to connect them all through the script parameters! Besides, if you depend on a plugin, you know the risks. Also, right now there are a bunch of UI libraries because there's no single great way to do it (let me clarify: they're all great, it's just that they lack an interface in the editor, so some creativity was involved). Wouldn't it be cool if all of those libraries were consolidated? The thing is that the map file format doesn't need to be different; the biggest problem is the interface for the things I listed.
That's why Workshop code could exist, so you can fix anything that needs to be changed. Or perhaps have a separate repository for all plugins so that anyone can submit pull requests. Not every tool is so complicated that it would break, either.
Why would you have to maintain your section of MP3 code? That format doesn't change, so the decoder doesn't change. The playback shouldn't change either.
-
Why don't you just render the first-person model with a material that has shadows but discards the fragments of the model in the normal shader? Then, you get the best of both worlds.
Personally, I would do what Josh suggested though. If you decide to use environmental probes, you're going to be in trouble.
-
I would have written my Trafficwerks code as an extension if it was supported. Right now, the only thing you can do to mimic extensions is to use pivots (e.g., Aggror's GUI tool). I mean people upload to the Workshop for free all the time, and there are many downloads there. Plus you can make some cool tech demos with extensions.
-
Ah ok, I didn't know there were no hooks or anything for leaving an entity. Why not program a box intersection test yourself, though? Then you'd have as much control as you need. You would scan through all entities at startup and calculate the highest +x, -x, +y, -y, +z, and -z values for the vertices of each mesh.
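That startup scan plus the per-frame test might look like this. `Vec3` and `AABB` here are stand-in types for illustration, not Leadwerks classes:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Walk every vertex once at startup and record the extreme
// +x/-x, +y/-y, +z/-z values.
AABB ComputeAABB(const std::vector<Vec3>& verts) {
    AABB box{verts[0], verts[0]};
    for (const Vec3& v : verts) {
        box.min.x = std::min(box.min.x, v.x);
        box.min.y = std::min(box.min.y, v.y);
        box.min.z = std::min(box.min.z, v.z);
        box.max.x = std::max(box.max.x, v.x);
        box.max.y = std::max(box.max.y, v.y);
        box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}

// Per-frame test you control yourself, since there are no
// "entity left" hooks to rely on.
bool Contains(const AABB& b, const Vec3& p) {
    return p.x >= b.min.x && p.x <= b.max.x &&
           p.y >= b.min.y && p.y <= b.max.y &&
           p.z >= b.min.z && p.z <= b.max.z;
}
```

Checking `Contains` each frame and comparing against last frame's result gives you your own enter/leave events.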
You can remove elements from a vector: just swap the element you want to remove with the last element and call pop_back(). Don't use the erase() method, though. A vector is just an array behind the scenes, whereas a list is not.
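The swap-and-pop trick is a one-liner; it's O(1) because nothing gets shifted, at the cost of not preserving element order (erase() in the middle of a vector shifts every later element down):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Remove element i in O(1): overwrite it with the last element,
// then drop the last slot. Order is not preserved.
template <typename T>
void swap_remove(std::vector<T>& v, std::size_t i) {
    std::swap(v[i], v.back());
    v.pop_back();
}
```

For a "loop over everything each frame" use case like yours, order usually doesn't matter, so this is the idiomatic choice.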
-
I don't, but I'd imagine it would be a bad idea. Apparently there are some driver limitations, so there may be less compatibility between apps. You also have to emulate 32-bit applications (i.e., the Leadwerks editor), unlike on the Windows desktop, where they run natively.
-
Isn't the AABB not supposed to be used for collisions? There's nothing you can really do about that. Personally, I would just use collisions for triggers; otherwise, use a quick rectangular-prism intersection test.
In terms of loop efficiency, it's hard to say whether that would be a good setup without knowing the application, but there are probably better techniques. You could use a k-d tree so that you don't need to test every entity; IDK how costly it is to rebuild each frame compared to your list approach, though. I wouldn't use the "list" data structure. Use "vector" instead if you're going to use it the way you are. With "list" you're building a chain of pointers, so you'll have fragmented memory and more cache misses, making your loop less efficient.
Also, I would avoid GetDistance() and just square the second term: implement your own distance check without the square root.
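Squaring the second term instead of taking a square root looks like this (Vec3f is a stand-in type, and GetDistance() is assumed to return the true Euclidean distance):

```cpp
#include <cassert>

struct Vec3f { float x, y, z; };

// Squared distance: same ordering as real distance, no sqrt.
float DistanceSquared(const Vec3f& a, const Vec3f& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Range check: compare against range*range instead of sqrt-ing.
bool WithinRange(const Vec3f& a, const Vec3f& b, float range) {
    return DistanceSquared(a, b) <= range * range;
}
```

Since sqrt is monotonic, comparing squared values gives the same answer as comparing distances, which is all a proximity loop needs.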
-
I think he saw the Site license as the only one without any restrictions. I'm kind of curious, though, what the difference between the commercial license and the PC license is. Is it just support and extra platforms?
-
Yes, but you would need to use the C++ edition and implement the networking yourself. A popular library people here use is RakNet. Apparently, it was acquired recently by Oculus (so I guess Facebook?).
-
I think Roland's referring to the pseudo-PBR stuff that's been going on recently with Leadwerks. I don't have Mari, but try a BRDF; I think that's the lighting model that PBR uses.
Adjusting the heights / scale of vegetation? (posted in Game Artwork)
Yes, you can do this in the vertex shader of the vegetation shader. Just scale each vertex position by a certain amount before any transformations.