Rastar

Posts posted by Rastar

  1. I haven't tried Mixamo stuff in LE, but you might find something in this thread

     

    http://www.leadwerks.com/werkspace/topic/11449-fbx-animations-not-working/page__hl__mixamo

     

    However, I'm pretty sure the crawler animations won't work on your Mixamo characters. Animations are created for a specific bone structure (skeleton), and the bone hierarchies, scales and orientations of the crawler and a Mixamo character are certainly different. You would have to retarget the crawler animations to your character.

  2. I was trying to create some test spheres for shading experiments with no textures assigned, just fixed material colors. It seems they still use textures from some other material. E.g., if I create a new empty material in MyGame and assign it the diffuse+normal shader, it shows a concrete preview in the material editor. When I create a brush sphere in the introduction scene, assign it the material and run the game, it is rendered using the tire material.

  3. Ahh, I see.... I guess we're referring to different things here: you to DCC applications and textures, me to shaders and the G buffer. The material flags I mentioned are stored in the alpha channel of the G buffer's fragData2 - per pixel, not per material. By "0 or 1 anyway" I meant that since I only use 6 bits for metalness, it can only take 64 different values, but that doesn't hurt much since it is usually set to 1 (metal) or 0 (insulator).

  4. "There is no more specular in PBR, only roughness and metalness combinations, if you are attempting some PBR system."

     

    Yes, I know, what I meant is that I can reuse that channel to either store roughness or metalness.

     

    By the way, I finally understood how the best fit normals work, and I guess that means I can't compress those to two channels since the vectors aren't normalized. So I'll probably go ahead and just store the metalness in the upper 6 bits of the material flags channels, since that value is usually 0 or 1 anyways.
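
     A minimal sketch of the packing described above, assuming 6 bits of quantized metalness and 2 flag bits share one 8-bit G buffer channel (the function names here are made up for illustration, not Leadwerks API):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Pack a [0,1] metalness into the upper 6 bits of a byte,
    // keeping the lower 2 bits for material flags.
    uint8_t packMetalnessFlags(float metalness, uint8_t flags)
    {
        uint8_t m = static_cast<uint8_t>(metalness * 63.0f + 0.5f); // quantize to 0..63
        return static_cast<uint8_t>((m << 2) | (flags & 0x3u));
    }

    float unpackMetalness(uint8_t packed)
    {
        return (packed >> 2) / 63.0f;
    }

    uint8_t unpackFlags(uint8_t packed)
    {
        return packed & 0x3u;
    }
    ```

     Since metalness is usually exactly 0 or 1, the quantization error of the 6-bit encoding rarely matters in practice.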

  5. Hej shadmar, thanks for the tip! Yes, that would work, although I actually just need one single extra channel since I can reuse the specularity in the normal's alpha. I got the normal compression working for the non-BFN case (looking OK, though I guess some additional bits in the normal buffer would help), but I'm still having artifacts for BFN and don't understand why - but I won't budge... ;-)

  6. Ahhh, got it... It's better to kick out the y component and store xz, since that is usually the largest component of a normal (since it's mostly pointing upwards) and therefore introduces a large error if stored with one of the smaller ones.
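
     As a sketch, the scheme described above could look like this (assuming a unit-length normal whose y component is non-negative, e.g. terrain normals pointing mostly upwards; a sign bit would be needed otherwise):

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Store only x and z in the two available channels.
    void compressNormal(const Vec3& n, float& cx, float& cz)
    {
        cx = n.x;
        cz = n.z;
    }

    // Reconstruct y from the unit-length constraint: y = sqrt(1 - x^2 - z^2).
    Vec3 decompressNormal(float cx, float cz)
    {
        float y = std::sqrt(std::max(0.0f, 1.0f - cx * cx - cz * cz));
        return { cx, y, cz };
    }
    ```

     Dropping the largest component (here y) keeps the reconstruction error small, since the stored x and z components then carry most of the precision.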

  7. I would like to stuff an additional parameter into the G buffer. The only place I can think of (without adding an additional texture like fragData3) is to compress the normals to two channels and reconstruct them in the lighting shaders. However, I can't seem to get this working without artifacts, and I am unsure if this is due to the compression algorithms. Question: Is the normals texture (fragData1) 8 bits per channel or 16 bits per channel? Could the multisampling somehow get in my way here?

  8. Hi,

     

    for a post effect I need to reconstruct the pixels' camera position from the depth buffer. I am a little confused since Leadwerks seems to be doing some things differently than textbook OpenGL, so a few questions I have:

     

    I find this snippet e.g. in the directionallight.shader (and also a little different in klepto's sky shader)

     

    screencoord = vec3(((gl_FragCoord.x/buffersize.x)-0.5) * 2.0 * (buffersize.x/buffersize.y),((-gl_FragCoord.y/buffersize.y)+0.5) * 2.0,depthToPosition(depth,camerarange));
     screencoord.x *= screencoord.z / camerazoom;
     screencoord.y *= -screencoord.z / camerazoom;
     screennormal = normalize(screencoord);
     if (!isbackbuffer) screencoord.y *= -1.0;
    

     

    Now I understand how the depthToPosition function calculates the z value. However, the x and y coordinates confuse me:

    1. First of all, it seems that Leadwerks inverts the y position in the backbuffer compared to the front buffer, hence those isbackbuffer -> * -1.0 lines (here and also in other shader parts)?
    2. Is that also the reason that the y coordinate uses 0.5 - gl_FragCoord.y rather than gl_FragCoord.y - 0.5?
    3. The first line basically seems to compute normalized device coordinates, but why is the x coordinate multiplied by buffersize.x/buffersize.y?
    4. This code seems to assume a symmetric FOV with equal FOV values for x and y, correct?
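
    For reference, here is my reading of that snippet as plain C++ (mirroring the shader math above; linearDepth stands in for the result of depthToPosition, and the backbuffer y-flip is left out):

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Map a fragment coordinate to NDC-like [-1,1] coordinates, then scale
    // by the linear view-space depth and the camera zoom, as the
    // directionallight.shader snippet appears to do.
    Vec3 reconstructViewPos(float fragX, float fragY,
                            float bufW, float bufH,
                            float linearDepth, float camerazoom)
    {
        float aspect = bufW / bufH;
        Vec3 p;
        p.x = ((fragX / bufW) - 0.5f) * 2.0f * aspect;
        p.y = ((-fragY / bufH) + 0.5f) * 2.0f;
        p.z = linearDepth;
        p.x *= p.z / camerazoom;
        p.y *= -p.z / camerazoom;
        return p;
    }
    ```

    For the center pixel this yields (0, 0, depth), i.e. a point straight ahead on the view axis, which at least matches what I would expect from a view-space reconstruction.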

  9. With the new vehicle physics, ocean water and upcoming vegetation painting - why not an outdoor racing game? It would give Josh some feedback on the new features, and maybe he'll come up with the road tool sooner. Maybe we could also give multiplayer a shot. And of course, I love outdoor stuff...

  10. Yes, he said it would be soldiers. The idea is to have some packs that can be used together, so someone fighting all those zombies is needed. Josh was confident that there will be a lot more content in the future, and also that he might approach some artists from other places like the Unity Asset Store and convince them there is money to be made with Leadwerks as well...

  11. I am trying to bind a custom C++ class to Lua - for the first time, so I'm probably doing something wrong...

     

    I have a simple test class

     

    #pragma once
    class ABC
    {
    public:
        ABC();
    };
    

     

    and its binding file (luacommands.pkg)

     

    $#include "ABC.h"
    class ABC
    {
    ABC();
    };
    

     

    I run the processor from here http://www.leadwerks.com/werkspace/files/file/216-tolua/ with the command

     

    tolua++.exe -o gluecode.cpp luacommands.pkg
    

     

    then include the gluecode.cpp in my build (which compiles). But if I now start the game, I get an access violation right away (even though the class isn't used anywhere).

     

    First-chance exception at 0x5150C332 (lua51.dll) in TestGame.debug.exe: 0xC0000005: Access violation reading location 0x0000001C.
    

     

    What am I doing wrong? Or is this not working in the beta build?

     

    EDIT: This only happens if I add the Bloom shader (which is also Lua-based) to the camera. So I guess its Lua binding somehow interferes with mine. However, I want to use my class binding in a postprocessing step, so I need a way around this issue.

     

    EDIT2: Nope, this happens if I add any Lua script to an entity in the scene.

  12. Hi Roland, thanks for answering! Yes, that should work for entities loaded with the map, but I would also like to hook into events for dynamically created entities. And since I am writing some sort of generic, reusable C++ library that doesn't know about entity creation by other parts of the code, I cannot simply hard-code a hook registration after e.g. a call to Model::Create(). So I was looking for a possibility to generically hook into events for any entity, not just map-based ones. I would like to write some information about every entity in the scene after it has been rendered (using DrawHook) into GPU buffers, for later usage in the post-processing.

     

    Is there such a registration call? I am thinking of something like World::EntityInstantiatedHook (which obviously doesn't exist).

  13. Not sure if this is a bug/inconvenience, or if something is wrong with my machine...

     

    I am trying to work with the Sponza scene as provided by Crytek (http://www.crytek.com/cryengine/cryengine3/downloads). The mesh isn't exactly small (some 280k tris, with 26 materials), but not monstrous either. That mesh takes really long to load in the model editor - about 5 minutes, during which one core is busy (AMD 8-core, 16GB RAM, GTX 750 Ti, Win 8.1). Is that normal? UU3D only needs about 5 seconds to open it.

  14. I would like to have a hook function getting called for every entity in the scene, without explicitly registering it for every entity. There is a hook in the Map::Load function, but my guess is that would not work for dynamically created entities (?)

     

    Is there a method for that?

  15. That's interesting, though I don't fully understand why that would be. By "spherical harmonics shading" you mean classical IBL like Marmoset Skyshop, or some different usage? To my understanding these two techniques would be complementary rather than substitutes, because IBL deals with shading/shadows by indirect lighting, whereas PBR tries to better mimic a material's real reaction to incoming light, no matter if direct or indirect.

     

    Certainly you can get equivalent results from traditional and PBR approaches for a specific lighting situation. What I like about the PBR stuff is that I as a non-artist can produce assets that work pretty well in any lighting situation without really knowing what I'm doing ;-)

     

    I didn't ask because I really need this right now (I'm actually doing more prototypical work in Leadwerks because it's so accessible, but currently develop my game with another engine). I just think that this is becoming mainstream and it will become more difficult to buy and/or create assets for use in a non-PBR engine. And sooner or later the day will come when some user buys PBR assets, imports them into Leadwerks and complains that its rendering isn't up to snuff because the assets look better somewhere else.

     

    As for the implementation in Leadwerks: First, this would need some technical additions, like conversion of textures to linear space and back to gamma (using the GPU's hardware samplers, this could be a checkbox in the texture editor) and the possibility to use higher-precision frame buffers for the shading calculations. I think both are very cheap to add and would also help in other situations (like rendering skies, or doing home-grown IBL). The PBR shaders could maybe be provided by the community, e.g. by porting some stuff over from Lux, or writing them from scratch. That probably wouldn't work out of the box, though, since the lighting calculations are done behind the scenes, and all we can do in the shaders is write to the G-buffers - but maybe even that change wouldn't take too much effort. All in all, I think there is a lot of bang for the buck in this.
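
    The linear/gamma conversion I mean can be sketched like this (a minimal approximation using a plain 2.2 exponent; the actual sRGB transfer function has an additional linear toe segment, and in practice the GPU's sRGB texture formats would do this in hardware):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Convert a gamma-encoded (sRGB-ish) color channel to linear space
    // for lighting math, and back for display.
    float srgbToLinear(float c) { return std::pow(c, 2.2f); }
    float linearToSrgb(float c) { return std::pow(c, 1.0f / 2.2f); }
    ```

    Doing the shading math in linear space (and ideally in a higher-precision frame buffer) is what makes the PBR lighting equations behave correctly.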

     

    Anyways, I agree that some other, more gameplay-related features are more pressing for most.
