Everything posted by wailingmonkey

  1. when I said 'just' I was referring more to the isolation of the mouth (and not taking the whole face into account). Also, the general opinion (and opinions vary, of course) from animators who do facial work seems to be that animating strictly from phonemes kills 'real', believable facial performance: we don't say every word with an emphasis on forming the precise phoneme shape...shapes get flattened and lots of things roll into the next, just like we don't enunciate every syllable equally. It depends on the word, emphasis, etc.

     Regarding the keyframes...that's why I think just importing the model that contains the vertex animation data is the best way to go (FBX/Collada/GMF), since Maya/Max/Softimage/etc. have whole toolsets for managing the animation. No need for an editor in LE to create it...but a way to manage it once inside LE would seem much more optimal and would save duplicating what's already available.
  2. hey Rick...ideally (and just thinking out loud about what would work for me, so not fully fleshed out), it would be an interface that imports additional blendshapes/morphs: a GUI window (Lua script tool?) that allows me to associate one .obj/.gmf as my base and a number of additional .obj's/.gmf's as blendshapes/morphs (all with the same vertex count). So if, for instance, I've got a rigged character with the head parented to the body, I select the default head, then import my additional heads with specific facial poses (done in any program that allows vertex manipulation/sculpting). I can name them accordingly, or their naming is acquired on import from the file itself.

     Even more powerful, but much bigger in scope, would be the ability to keyframe the linear interpolation between two shapes (there's a minimal sketch of that interpolation after the last post below). But 'easier' (take with salt grains) would be to fully recognize keyframes done in an outside package that are vertex-based on the mesh/geometry itself, so FBX/Collada would be the format. (I suppose the option to use ArBuZ's 3dsMax exporter for .gmf, and UU3D's .gmf exporter, for 'native' .gmf support would be alternatives too.) Having the ability to recognize vertex animation from outside packages via FBX/Collada/GMF means no additional keyframing tools are needed inside the editor. If that were the case, a tool to manage the vertex animations would probably need to follow...

     *edit* Just read your edit...you're talking about just lip-syncing with pre-defined phonemes (mouth shapes for the sounds made in forming words). That type of lip-sync works for quick-n-dirty, but doesn't believably 'convince' that characters are truly speaking...a good book for getting a better understanding of it is Jason Osipa's 'Stop Staring'. Bottom line is that there's more involved than just mouth shapes, and doing every phoneme isn't realistically going to be a magic bullet for what the whole face is doing when talking (it's a pretty deep topic, too much to summarize here). But since it's 'games' we're talking about, the fidelity of 'real' generally gets watered down anyhow in favor of FPS.
  3. thanks for the input, Mike. It sounds good that 2100 verts doesn't seem to be an issue, but again, I'm guessing it may vary depending on how they are managed (I'm no programmer). This guy's head is 1920 verts and the image shows 1740 selected in the main face areas. I'd imagine including some of the Adam's apple as well, but for 180 more verts you'd probably just track the whole head's verts. Additionally, no geometry is really cut in for wrinkle areas (like the nasolabial folds or forehead, etc.), so that would probably add in 200 or so more.

     Anyhow, I'd personally love to see somebody put together something that lets artists manage blendshapes/morphs, or control applying them to selected geometry. Something like that could have other in-game applications like mud/melting/lava/morphing...not just face shapes. Maybe when flowgraphs come into play it will be more feasible for us non-programmers. (crosses fingers)
  4. at one point Josh said he had vertex animation implemented, and he has touted it as a 'doable' option for someone to program in currently (there's a longish thread somewhere stressing its utility, as well as another thread where Josh was requesting face models to experiment with). The bonus of having this option is that it's fairly 'easy' to create specific blendshapes (morphs) in ZBrush/3DCoat/Mudbox if you are not a master face rigger. So vertex animation would be another option for facial animation besides a bone rig (also for muscle bulge/skin stretch/correcting poor deformation areas), should some motivated programmer choose to look into it... (I have heard that it is costly on FPS, depending on how it's set up, so a separate head/body would also seem efficient in this case.) Here's some UDK info on it: http://www.udk.com/features-animation.html (down near the bottom)
  5. "but I think without a visual display of everything you have access to, it's not much fun." this is the ultimate reason why programming and me don't mix...not enough immediate visual feedback as to what changes mean in getting at the end-result I'm seeking. So the bottom line for me personally is: Bring on the Flowgraphs! (and the power to tinker with their internal properties and observe how the objects they are attached to get affected in realtime)
  6. haha... (now humming the Benny Hill theme song in me head) Nice work on this, Pixel!
  7. "3dCoat uses p-tex which is not very usable if you want to paint your model in a 2d program." it is 1 auto-UV option, of 2...you don't have to leave it as ptex, you can use the programs other options to UV-map your object as you would in other software (placing seams, and choosing the unwrap style that best fits: ABF LSCM, Planar). The other auto-UV option is the developer's own, and was meant to be a quick way to import an object and get to painting fast---by no means is it as optimal as creating your own UV layout within the program.
  8. omid3098...pretty sure Josh has this code 'done' but just not currently implemented in the engine for us (if I read you correctly, you're talking about vertex animation support, based on morph targets from our 3d package of choice?).
  9. way to keep rockin' it, Aggror! (and thanks for the additions, macklebee)
  10. ArBuZ...there's an SDK for it too......
  11. (chuckle) ... I use my mouse's right mouse button for forward motion and left mouse button fer killin'
  12. Here's some definitions (see image). These are a mix of how 3DCoat handles blending layers (Modulate 2X, Add) and a description of how Photoshop handles layers...in this sense, not sure how LW has it implemented, but it's a starting place for you, I guess. (The standard formulas for those two blend modes are sketched after the last post below.)
  13. very impressive, Gandi!! Looks like even a monkey like me could make it work.
  14. that looks awesome, Pixel! I don't recall from watching your vids, but is there a 'buffer' property that one can assign, for instance, if you wanted to allow somewhat 'close' proximity to a tree? Keep up the good werk, dood!
  15. no worries, Icare...just thought I'd throw it out there.
  16. fantastic, Icare! (downloading now) this is perhaps a bit much, but have you thought about adding additional shaders such as hair and SSS? Here's an example of someone who's made realtime CGFX shaders for Maya...maybe it will stimulate some ideas for you?
  17. looking better and better with each addition! Great work, Icare!
  18. I never adopted WASD (and HATE having no ability to customize it if that's the default), and I usually ended up 1st-3rd place in the old Quake 2 CTF days... I use:
      z = move left (strafe)
      x = move right (strafe)
      c = crouch
      v = backwards
      space = jump
      shift (left) = use item
      If there's a bunch of additional functionality needed, I'll assign keys to 'b, f, g, r' (and I'll put 'a' and 's' as lean left/lean right if they're available).
  19. Saving your original out at 4K might be considered wise, Roland, but I'd be more apt to use main character textures closer to 1K for the body and *maybe* 1K for the head, or 2K if you have a lot of close-up capability. This will really depend on what your camera will be showing (and the dominant perspective you intend to run throughout the game). Otherwise, you're wasting texture space on things that, when seen in-game, won't support the higher resolution. (I'm not stalking you, Roland...you just happened to ask questions I thought I could help with today)
  20. haven't used LE in ages, but from 2.26 I used: shader="abstract::mesh_diffuse_bumpmap.vert","abstract::mesh_diffuse_bumpmap_specular.frag" (don't know if that's correct for animated meshes, tho, nor if things have changed since multiple updates). Regarding specular, I always added it via 'TextureTool.exe' into the alpha of the normal map, then saved as .dds.
  21. wow!...wow. huge respect, Pixel Perfect! watched all 3 vids you've posted so far and I can say I'd love to have this capability in editing and setting up AI. Can't wait to see how you implement NPC logic choices based upon waypoints and/or player status (will you throw randomization control in there too?... like 80% accurate to a certain choice/action, 20% accurate, etc.) excellent, excellent work, my friend!
  22. thanks for sharing, TylerH...someday I may get to the point where it's helpful!
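
A minimal sketch of the linear blendshape interpolation mentioned in post 2, in plain Lua with no engine API. The flat vertex tables, function names and weights are hypothetical, just to show the math: each blended vertex is base + (target - base) * weight, and several morph targets can be layered additively on top of the base.

    -- hypothetical data layout: each shape is a flat table of vertex positions
    -- {x1,y1,z1, x2,y2,z2, ...} with the same vertex count and order as the base mesh
    function BlendShape(base, target, weight)
        local out = {}
        for i = 1, #base do
            -- linear interpolation per component: base -> target as weight goes 0 -> 1
            out[i] = base[i] + (target[i] - base[i]) * weight
        end
        return out
    end

    -- layer several morph targets additively on top of the base
    function ApplyMorphs(base, targets, weights)
        local out = {}
        for i = 1, #base do
            local v = base[i]
            for j, target in ipairs(targets) do
                v = v + (target[i] - base[i]) * weights[j]
            end
            out[i] = v
        end
        return out
    end

    -- usage sketch: 60% 'smile' plus 30% 'brow_raise' on a neutral head
    -- local blended = ApplyMorphs(neutralVerts, {smileVerts, browVerts}, {0.6, 0.3})

Keyframing would then just be a matter of animating the weight values over time and re-applying the morphs each frame.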
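
And for post 12, these are the standard formulas behind the two blend modes named there (Add and Modulate 2X), again as a plain Lua sketch. Channel values are assumed normalized to 0..1, and how LW or Photoshop actually implements them may differ in details like clamping.

    -- base and blend are single channel values in 0..1 (apply per R/G/B channel)
    function BlendAdd(base, blend)
        return math.min(base + blend, 1.0)        -- Add: sum the channels, clamped at white
    end

    function BlendModulate2X(base, blend)
        return math.min(base * blend * 2.0, 1.0)  -- Modulate 2X: multiply, then double
    end

    -- e.g. BlendModulate2X(0.5, 0.5) == 0.5, so a mid-grey layer leaves the base unchanged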