Posts posted by wailingmonkey

  1. Particle deformers:

    - Non-linear: bend, taper/flare, sine, squash, twist, wave

    - free-form cage deformer (customizable matrix)

     

    - sticky particles (with 'sticky-ness' customizer, gravity editor, and ability to splat/radiate from contact point---hello gib-tastic particles!)

     

     

    *edit* forgot to add, trail particles that could potentially be based off of a spline tool similar to road tool

    (not just 'trailing' off of a geometry's animated motion) ;)
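A non-linear sine deformer like the one on this wishlist boils down to simple per-particle math. Here's a minimal sketch in plain Python; every name (`sine_deform`, `amplitude`, `frequency`, `phase`) is made up for illustration and is not an LE API:

```python
import math

def sine_deform(positions, amplitude=1.0, frequency=1.0, phase=0.0, axis=1):
    """Offset each particle along X by a sine of its coordinate on `axis`.

    positions: list of [x, y, z] particle positions (illustrative data,
    not tied to any engine). A taper/flare or twist deformer would follow
    the same pattern with a different per-particle function.
    """
    deformed = []
    for p in positions:
        offset = amplitude * math.sin(frequency * p[axis] + phase)
        deformed.append([p[0] + offset, p[1], p[2]])
    return deformed

# A particle at y = pi/2 (with frequency 1, phase 0) gets the full
# amplitude as its X offset.
```

The other deformers in the list (bend, squash, wave) differ only in which per-particle function replaces the sine.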

  2. Also consider it's not just bone hierarchy...how the model was envelope-weighted/skinned per vertex will have

    a bearing on how each model deforms when animated. Differences in mesh topology and just general silhouette/scale

    can have a great effect, unless you can figure out a way to create a 'cage' that is universal to all of your assets as well

    as a universal bone hierarchy scheme that serves as an intermediary.

     

    So yeah, not really a trivial task, and that's why even the big boy DCC products don't do it all that well (3dsMax, Maya, Softimage

    ...although Softimage has some advanced tools in there that can do pretty well like GATOR/MOTOR). You might have a look into

    MotionBuilder as well, since it's pretty industry-standard in dealing with mocap (or already animated bone hierarchies).

  3. I'd say you'll need to edit your smoothing angle in UU3D to whatever angle you've got in Max. Also,

    looks like you've set your texture to repeat a certain amount in max, but your UVs are not actually

    scaled to that degree on the 0-1 UV space when exported, so once they get into UU3D you'll need

    to set their repeats to match what you did in Max. (I don't use Max, so sorry to not be of more help...

    there may be a more elegant export option someone else can assist with)

     

    Or perhaps an easier solution might be to use Arbuz's exporter?

     

    ;)
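The UV-repeat mismatch described above is just a scale: a tiling value set in the Max material, but not baked into the exported 0-1 UVs, has to be re-applied in the destination app. A sketch of the relationship (plain Python; `apply_repeats` and its parameters are hypothetical, not a UU3D or Max API):

```python
def apply_repeats(uvs, repeat_u, repeat_v):
    """Scale 0-1 UVs so the texture tiles repeat_u x repeat_v times,
    reproducing a tiling value that was set in the material rather than
    baked into the exported UV coordinates."""
    return [(u * repeat_u, v * repeat_v) for (u, v) in uvs]

# A quad mapped 0-1 with a 4x2 material tiling ends up spanning
# 0-4 / 0-2 in UV space once the repeats are applied.
corners = apply_repeats([(0.0, 0.0), (1.0, 1.0)], 4, 2)
```

Setting the repeat count in UU3D's material instead of scaling the UVs achieves the same result, which is what the post suggests.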

  4. perhaps consider affecting the normal as well in coordination with the specular?

    (thinking puddles tend to wash out --sorry for pun-- the bump detail they're sitting on top of...although

    I do like the gradient effect of the second image around the shiny edges--the puffy outer edge

    seems off though)

     

    Seems like you'd definitely need multiple UV-channel support for anything affected by your rain shader.

     

    (I'd guess there is probably a UDK shader example out there somewhere too)
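The normal/specular coordination suggested above amounts to: where the puddle mask is strong, flatten the normal (water hides the bump detail it sits on) and push specular toward full gloss. A CPU-side sketch of that shader math in plain Python (not shader code; all names are illustrative):

```python
def wet_surface(normal, specular, puddle):
    """Blend a tangent-space normal toward flat (0, 0, 1) and raise the
    specular value as the puddle mask (0-1) increases -- standing water
    'washes out' the underlying bump detail while getting shinier."""
    flat = (0.0, 0.0, 1.0)
    n = tuple((1 - puddle) * a + puddle * b for a, b in zip(normal, flat))
    # renormalize the blended normal
    length = sum(c * c for c in n) ** 0.5
    n = tuple(c / length for c in n)
    spec = specular + puddle * (1.0 - specular)  # approach full gloss
    return n, spec
```

In a real shader the same lerp would run per pixel, with the puddle mask likely sampled from a second UV channel, which is why the multiple-UV-channel support mentioned above matters.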

  5. Also, keep in mind that collada (DAE) is Autodesks format. I think it's safe to assume that their importers and exporters would work properly. Blender's DAE is bugged to hell.

    Brent, perhaps you mean .FBX is Autodesk's format?

    Unless things changed and I missed it (not being sarcastic, quite possible),

    Collada was a direct counter-option to AD's .fbx.

     

    http://en.wikipedia.org/wiki/COLLADA

     

    (as a side-note, I've been using Silo since, uh...2005-06? for 95% of game models

    you'd be hard-pressed to 'need' anything more...but everyone's different, of course)

  6. if I read your last 2 sentences correctly then, the actual order of placement of arguments (marker and constant) along the node links

    makes no difference to 'Platform' in how it interprets the value? (I'm thinking of this as a variation to the way a render tree in XSI

    works...there, you have open slots to plug your nodes into, like diffuse/specular/transparency etc.---with what you're showing above,

    all available 'slots' that can be plugged into 'Button' lie on the link between it and its target node, 'Platform'. Same basic concept,

    just different means to send values/properties, yes?)
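If the reading above is right, the arguments on a link behave like keyword arguments: matched by slot name, so placement order is irrelevant. A toy sketch of that behavior (the `Node`/`plug` names are invented for illustration, not the actual flowgraph API):

```python
class Node:
    """Toy flowgraph node: values attached along a link are looked up
    by slot name, so the order they were placed in makes no difference
    to how the target node interprets them."""
    def __init__(self, name):
        self.name = name
        self.slots = {}

    def plug(self, slot, value):
        self.slots[slot] = value

    def get(self, slot, default=None):
        return self.slots.get(slot, default)

# Same result no matter which order the link arguments were attached in:
platform = Node("Platform")
platform.plug("marker", "WaypointA")
platform.plug("speed", 2.5)
```

This is the same idea as the XSI render-tree slots mentioned above, just with the slots living on the link instead of on the node body.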

    "I seen it before and looks nice but the animations are kind of expensive."

     

    not trying to pick a fight with you, tournamentdan (I think you've got some

    idea how much effort/energy/experience goes into making 'good' art)

    but those animations seem to be about $25 or less. If you built your pipeline around

    their naming conventions/rig you'd save a huge amount compared to paying an animator.

     

    It just baffles me how there's this perception that art should somehow be less costly

    than other things that take an equal amount of skill, knowledge/experience, and time

    to create.

     

    I'm currently going to Animation Mentor and I can tell you for a FACT that a reasonably

    polished run cycle will take waay more than 1 hour's work to make look decent...

    (gets off soapbox)

  8. was hoping for something like Softimage's ICE:

    (early implementation, disregard audio) :lol:

    Have a look around youtube for additional vids on Softimage ICE...there's fluid sim nodes, particle sim nodes, etc.

     

    Is there a reason LE 'nodes' couldn't be self-contained complex logic? I would think there could be some compelling reasons

    to allow the system to accept higher-level functionality...namely, a more desirable means for merchants (programmers) to create

    and profit from their effort. Maybe somebody has a Tessendorf wave 'node' they'd like to create and sell so folks like Red October

    (or anyone dealing with an ocean-based game) could have that option to purchase and 'plug-in' to their ocean geometry...

     

    Frankly, being able to have a standardized means of offering these node plug-ins would seem to be a huge boost in enhancing the

    ability for games to be made with LE. (Animation-blending/Managing node, Substance texture integration node, ocean nodes, sky nodes,

    ragdoll node, particle node, car-physics node, cut-scene/camera node .... etc, etc.)
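The "self-contained complex logic" idea above only needs the graph to agree on a small interface; whatever simulation lives inside the node is the seller's business. A sketch of what that contract could look like (hypothetical names, not an LE or ICE API; the sine node stands in for something far heavier like a Tessendorf ocean node):

```python
import math

class LogicNode:
    """Minimal interface a self-contained plug-in node could implement.
    The graph only ever calls evaluate(); the node's internals are opaque."""
    def evaluate(self, inputs):
        raise NotImplementedError

class SineWaveNode(LogicNode):
    """Stand-in for a complex sim node -- a real ocean/fluid node would
    hide far more logic behind the same evaluate() contract."""
    def __init__(self, amplitude, frequency):
        self.amplitude = amplitude
        self.frequency = frequency

    def evaluate(self, inputs):
        t = inputs["time"]
        return {"height": self.amplitude * math.sin(self.frequency * t)}

# The host graph can drive any third-party node the same way:
node = SineWaveNode(amplitude=1.5, frequency=2.0)
```

A standardized contract like this is what would let the hypothetical marketplace of ocean/sky/ragdoll nodes plug in uniformly.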

  9. when I said 'just' I was referring more to the isolation of the mouth (and not taking whole face into account). ;)

     

    Also, the general opinion (and opinions vary, of course) from animators that do facial work seems to be that

    animating based on strictly phonemes kills 'real' believable facial performance (since we don't say every word with

    an emphasis on forming the precise phoneme...shapes get flattened and lots of things roll into the next...just like

    we don't enunciate every syllable equally, it depends on the word, emphasis, etc.)

     

    Regarding the keyframes...that's why I think just importing in the model that contains the vert anim data is the best

    way to go (FBX/Collada/GMF) since Maya/Max/Softimage/etc. have whole toolsets for managing the animation. No need for

    some editor in LE to create it...but a way to manage it once inside LE would seem much more optimal and save

    duplicating what's already available.

  10. hey Rick...ideally (and just thinking out loud what would work for me, so not fully fleshed out), it would be an interface that imports

    additional blendshapes/morphs. A GUI window (LUA script tool?) that allows me to associate 1 .obj/.gmf as my base, and

    a number of additional .obj's/.gmf's as blendshapes/morphs (all same vertex count). So if, for instance, I've got a

    rigged character with head parented to body, I select the default head. I then import in my additional heads with

    specific facial poses (done in any program that allows vertex manipulation/sculpting). I can name them accordingly,

    or their naming is acquired on import from the file itself.

     

    Even more powerful, but much bigger scope, would be the ability to keyframe the linear interpolation between the two shapes.

     

    But 'easier' (take with salt grains) would be to fully recognize the keyframes done in an outside package that are vertex-based

    on the mesh/geometry itself, so FBX/Collada would be the format. (suppose the option to use Arbuz's 3dsMax exporter for .gmf,

    and UU3D's .gmf exporter for 'native' .gmf support would be alternatives too).

     

    Having the ability to recognize vertex anim from outside packages via FBX/Collada/GMF means no additional keyframing tools

    are needed inside the editor. So if that were the case, a tool to manage the vertex animations would probably need to follow...

     

     

    *edit* Just read your edit...you're talking about just lip-syncing with pre-defined phonemes (mouth

    shapes for the sounds made in forming words). That type of lip-sync works for quick-n-dirty, but

    doesn't believably 'convince' that characters are truly speaking...good book is the Jason Osipa

    'Stop Staring' one for getting a better understanding of it. Bottom line is that there's more involved

    than just mouth shapes, and doing every phoneme isn't realistically going to be a magic bullet for

    what the whole face is doing when talking. (it's a pretty deep topic, too much to just summarize here)

    But since it's 'games' we're talking about, the fidelity of 'real' generally gets watered down anyhow

    in favor of FPS. ;)
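The blendshape scheme described above (one base mesh plus morph targets with identical vertex counts, blended by weights) is the classic formula v = base + sum_i w_i * (morph_i - base). A minimal sketch (plain Python, illustrative names only):

```python
def blend(base, morphs, weights):
    """Classic blendshape/morph blending: each morph contributes its
    weighted delta from the base mesh. base and every morph must be
    equal-length lists of (x, y, z) vertices -- the same-vertex-count
    requirement mentioned above. weights maps morph name -> influence."""
    result = [list(v) for v in base]
    for name, w in weights.items():
        for i, (mv, bv) in enumerate(zip(morphs[name], base)):
            for axis in range(3):
                result[i][axis] += w * (mv[axis] - bv[axis])
    return [tuple(v) for v in result]

# Keyframing a weight from 0 to 1 linearly interpolates base -> morph,
# which is the "bigger scope" feature described above.
```

This is also why vertex count must match exactly: the deltas are computed per index, so vertex i of each morph must correspond to vertex i of the base.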

  11. thanks for the input, Mike.

     

    that sounds good that 2100 verts doesn't seem to be an issue, but

    again, I'm guessing it may vary depending on how they are managed. (I'm no programmer) :)

     

    This guy's head is 1920 verts and the image shows 1740 selected in the main face areas.

    I'd imagine including some of the Adam's apple as well, but for 180 more verts, probably just

    have the whole head verts tracked. Additionally, no geometry is really cut-in for wrinkle

    areas (like nasolabial fold or forehead, etc.) so that would add in probably 200 or so more.

     

    1740verts.jpg

     

    Anyhow, I'd personally love to see somebody put together something that artists could manage 'blendshapes/morphs'

    or the control of applying them to select geometry. Something like that could have other applications

    for in-game use like mud/melting/lava/morphing...not just face shapes.

     

    Maybe when flowgraphs come into play it will be more feasible for us non-programmers. (crosses-fingers)

  12. at one point Josh said he had vertex animation implemented and has touted that it

    is a 'doable' option for someone to program in currently. (there's a longish thread

    somewhere stressing its utility, as well as another thread where Josh was requesting

    face models to experiment with)

     

    The bonus of having this option is that it's fairly 'easy' to create specific blendshapes

    (morphs) in Zbrush/3DCoat/Mudbox if you are not a master face rigger.

     

    So vertex animation would be another option for facial animation besides a bone rig

    (also to get muscle-bulge/skin-stretch/correcting-poor-deformation-areas), should some

    motivated programmer choose to look into it... :)

     

    (I have heard that it is costly on FPS, depending on how it's set up...so separate head/body

    would also seem efficient in this case as well)

     

    Here's some UDK info on it: http://www.udk.com/features-animation.html (down near the bottom)
