
Facial animation in game?



The only way to do facial animation in LE right now is with bones. So really, everything depends on how you do it (the number of bones, vertices, animation length). It doesn't differ from regular animation, and in general I don't think it will have any impact on performance. The main problem is creating such an animation; it's a very difficult process. You will also need some way to manage the animation in your game.


I have a model with an animated mouth and face to simulate simple speech and facial expression. It runs beautifully in LE2 with no hit on performance at all.

 

 

What kind of format is the animation? Is there any way I can see a pic of the rig? I can't find too many good examples.


It's straightforward skeletal animation, Dan, but I'm completely perplexed as to how the animator has managed to get so much movement from the rig I'm looking at when I open it in FragMotion. It appears to be a pretty standard rig except for an additional bone behind the mouth region. To get that amount of movement and facial expression, the animator must have done something pretty amazing with the weightings. I think ArBuZ was accurate with his observations; this is clever stuff! Pic of wireframe and skeleton below:

 

post-51-0-94644300-1305840483_thumb.jpg



Thanks pixel, I imagined more bones in the face. So that will take some tinkering with.

Thanks again, everybody.


At one point Josh said he had vertex animation implemented, and has touted it as a 'doable' option for someone to program in currently. (There's a longish thread somewhere stressing its utility, as well as another thread where Josh was requesting face models to experiment with.)

 

The bonus of having this option is that it's fairly 'easy' to create specific blendshapes (morphs) in ZBrush/3DCoat/Mudbox if you are not a master face rigger.

 

So vertex animation would be another option for facial animation besides a bone rig (also useful for muscle bulge, skin stretch, and correcting poor deformation areas), should some motivated programmer choose to look into it... :)

(I have heard that it can be costly on FPS, depending on how it's set up... so a separate head/body would also seem efficient in this case as well.)
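The blendshape idea mentioned above boils down to simple vertex arithmetic: each morph target stores the same vertices as the base head, and the final pose is the base plus a weighted sum of each target's offsets. A minimal sketch in Python (NumPy for brevity; the mesh data and the "jaw open" target are made up for illustration, not from any engine API):

```python
import numpy as np

def apply_blendshapes(base, targets, weights):
    """Return the base mesh deformed by weighted morph targets.

    base:    (N, 3) array of rest-pose vertex positions
    targets: list of (N, 3) arrays with identical vertex count/order
    weights: one float per target, typically in 0..1
    """
    result = base.astype(float).copy()
    for target, w in zip(targets, weights):
        # Each target contributes only its *offset* from the base,
        # so multiple shapes (jaw, brows, smile) can mix additively.
        result += w * (np.asarray(target, dtype=float) - base)
    return result

# Toy example: a single vertex and a hypothetical "jaw open" target.
base = np.array([[0.0, 0.0, 0.0]])
jaw_open = np.array([[0.0, -2.0, 0.0]])
print(apply_blendshapes(base, [jaw_open], [0.5]))  # vertex moves halfway toward the target
```

Because the targets only store offsets, a weight of 0 leaves the base untouched and weights can be animated over time to fade an expression in and out.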

 

Here's some UDK info on it: http://www.udk.com/features-animation.html (down near the bottom)

Guest Red Ocktober

vertex animation doesn't seem to take too much of a hit... i'm doing it on the cpu in the real water waves thingee i've been making so much noise about, and it seems to work well... i'm shifting around over 2100 vertices without too much pain...

 

the only possible issue i see is actually knowing which vertices to target... after that's solved, vertex manipulation i would imagine should be pretty straightforward...

 

below is the real time wave movement code using vertex manipulation....


'--------------------------------------------------  ANIMATE THE WAVES IF WATERMODE
'
Function AnimateWaves(theWater:TMesh)
  Local vertex:TVec3[2181]
  Local theWaves:TSurface=GetSurface(theWater,1)

  ' The phase advances with time and is the same for every vertex,
  ' so compute it once per call instead of once per vertex
  Local waveFreq:Float=MilliSecs()/16

  For Local i:Int=0 To 2177
    ' Displace each vertex vertically with a sine keyed off its x/z position
    vertex[i]=GetVertexPosition(theWaves,i)
    vertex[i].y=Sin(waveFreq+vertex[i].x*50+vertex[i].z*30)*(GAME.seaState*4)
    SetVertexPosition(theWaves,i,vertex[i])
  Next
EndFunction

 

--Mike


thanks for the input, Mike.

That sounds good that 2100 verts doesn't seem to be an issue, but again, I'm guessing it may vary depending on how they are managed. (I'm no programmer.) :)

 

This guy's head is 1920 verts, and the image shows 1740 selected in the main face areas. I'd imagine including some of the Adam's apple as well, but for 180 more verts you'd probably just track the whole head's verts. Additionally, no geometry is really cut in for wrinkle areas (like the nasolabial fold, forehead, etc.), so that would probably add another 200 or so.

 

1740verts.jpg

 

Anyhow, I'd personally love to see somebody put together something that lets artists manage blendshapes/morphs, or control applying them to selected geometry. Something like that could have other applications for in-game use, like mud/melting/lava/morphing... not just face shapes.

 

Maybe when flowgraphs come into play it will be more feasible for us non-programmers. (crosses-fingers)


Are you thinking along the lines of letting artists do a kind of vert animation via an editor that could be saved to file and played back on the specific model via code?

 

Like you move the verts to where you want (from some editor), snap a keyframe, then rinse and repeat and have the playback code interpolate between keyframes?
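That snap-a-keyframe idea is essentially linear interpolation over stored vertex snapshots. A rough sketch of the playback side, assuming keyframes are plain lists of (x, y, z) tuples captured from some hypothetical editor:

```python
def lerp_frames(frame_a, frame_b, t):
    """Blend two vertex keyframes; t=0 gives frame_a, t=1 gives frame_b."""
    return [tuple(a + (b - a) * t for a, b in zip(va, vb))
            for va, vb in zip(frame_a, frame_b)]

def sample_animation(keyframes, times, now):
    """Find the pair of keyframes surrounding time 'now' and blend them."""
    for i in range(len(times) - 1):
        if times[i] <= now <= times[i + 1]:
            t = (now - times[i]) / (times[i + 1] - times[i])
            return lerp_frames(keyframes[i], keyframes[i + 1], t)
    return keyframes[-1]  # past the end: hold the last pose

# Two keyframes half a second apart; sample the midpoint.
closed = [(0.0, 0.0, 0.0)]
opened = [(0.0, -2.0, 0.0)]
print(sample_animation([closed, opened], [0.0, 0.5], 0.25))
```

At the midpoint the sampled vertex sits halfway between the two snapshots; a real implementation would push the blended positions back to the mesh each frame, much like the wave code earlier in the thread.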

 

 

I read somewhere about facial animation where they predefined facial positions for spoken letters. You would provide the text of the sound file, and the code would read the text and build the facial positions required to speak it, interpolating between the letter positions for the verts defined for each letter.

 

The artist would just have to provide snapshots of the verts for each facial expression one makes when saying letters (or groups of letters, since some letters share facial expressions).
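The driver for that text-driven approach could be as simple as a lookup table from letters to shared mouth shapes. The groupings below are purely illustrative (real phoneme sets are built from sounds, not spelling, as the reply further down points out):

```python
# Hypothetical letter-to-mouth-shape groups; letters in one group share a pose.
SHAPE_GROUPS = {
    "closed": "bmp",   # lips pressed together
    "open":   "aei",   # jaw dropped
    "round":  "ouw",   # lips rounded
    "teeth":  "fv",    # lower lip against upper teeth
}

# Invert the table so each letter maps straight to its shape name.
LETTER_TO_SHAPE = {letter: shape
                   for shape, letters in SHAPE_GROUPS.items()
                   for letter in letters}

def shapes_for_text(text, default="rest"):
    """Turn dialogue text into the sequence of mouth shapes to play."""
    return [LETTER_TO_SHAPE.get(ch, default) for ch in text.lower() if ch.isalpha()]

print(shapes_for_text("Bow"))  # ['closed', 'round', 'round']
```

The playback code would then interpolate between the vert snapshots named by this sequence, timed against the audio.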


hey Rick... ideally (and just thinking out loud what would work for me, so not fully fleshed out), it would be an interface that imports additional blendshapes/morphs: a GUI window (Lua script tool?) that allows me to associate one .obj/.gmf as my base, and a number of additional .obj's/.gmf's as blendshapes/morphs (all with the same vertex count). So if, for instance, I've got a rigged character with the head parented to the body, I select the default head. I then import my additional heads with specific facial poses (done in any program that allows vertex manipulation/sculpting). I can name them accordingly, or their naming is acquired on import from the file itself.

 

Even more powerful, but much bigger scope, would be the ability to keyframe the linear interpolation between the two shapes.

 

But 'easier' (take with salt grains) would be to fully recognize keyframes done in an outside package that are vertex-based on the mesh/geometry itself, so FBX/Collada would be the format. (I suppose the option to use ArBuZ's 3ds Max exporter for .gmf, and UU3D's .gmf exporter for 'native' .gmf support, would be alternatives too.)

 

Having the ability to recognize vertex anim from outside packages via FBX/Collada/GMF means no additional keyframing tools are needed inside the editor. So if that were the case, a tool to manage the vertex animations would probably need to follow...

 

 

*edit* Just read your edit... you're talking about just lip-syncing with pre-defined phonemes (mouth shapes for the sounds made in forming words). That type of lip-sync works for quick-n-dirty, but doesn't believably 'convince' that characters are truly speaking... a good book for getting a better understanding of it is Jason Osipa's 'Stop Staring'. Bottom line is that there's more involved than just mouth shapes, and doing every phoneme isn't realistically going to be a magic bullet for what the whole face is doing when talking. (It's a pretty deep topic, too much to summarize here.) But since it's 'games' we're talking about, the fidelity of 'real' generally gets watered down anyhow in favor of FPS. ;)


But since it's 'games' we're talking about, the fidelity of 'real' generally gets watered down anyhow in favor of FPS.

 

Right, I assume the programmers kind of have to pull the artists back some and make them realize this all has to run in real time, so shortcuts have to be made.

 

 

you're talking about just lip-syncing with pre-defined phonemes

 

Well, I wouldn't say "just", as it's still a pretty big deal in games and LE doesn't have this functionality yet.

 

 

So if, for instance, I've got a rigged character with head parented to body, I select the default head. I then import in my additional heads with specific facial poses

 

That seems strange to me. You are essentially making keyframes from entirely different models (even though it's basically the same model with verts moved around, it's still a different model in your example). Why not just allow vert manipulation and take a snapshot? You would save disk space by storing only the verts that change from keyframe to keyframe.
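The space saving suggested here falls out naturally if each keyframe stores only the vertices that actually moved since the previous one. A sketch of that delta idea (the vertex data is made up; real storage would be binary, not Python dicts):

```python
def diff_frame(prev, curr, eps=1e-6):
    """Keep only the indices whose position changed beyond a tolerance."""
    return {i: c for i, (p, c) in enumerate(zip(prev, curr))
            if max(abs(a - b) for a, b in zip(p, c)) > eps}

def apply_diff(prev, delta):
    """Rebuild the full keyframe from the previous one plus the stored delta."""
    out = list(prev)
    for i, pos in delta.items():
        out[i] = pos
    return out

rest  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
smile = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 0.0, 0.0)]  # only vert 1 moved
delta = diff_frame(rest, smile)
print(delta)  # just one entry instead of three full vertices
```

For facial work this pays off, since most of a keyframe (back of the head, ears) never moves between expressions.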


when I said "just" I was referring more to the isolation of the mouth (and not taking the whole face into account). ;)

 

Also, the general opinion (and opinions vary, of course) from animators who do facial work seems to be that animating strictly on phonemes kills 'real', believable facial performance (since we don't say every word with an emphasis on forming the precise phoneme... shapes get flattened and lots of things roll into the next, just as we don't enunciate every syllable equally; it depends on the word, emphasis, etc.)

 

Regarding the keyframes... that's why I think just importing the model that contains the vert anim data is the best way to go (FBX/Collada/GMF), since Maya/Max/Softimage/etc. have whole toolsets for managing the animation. There's no need for an editor in LE to create it... but a way to manage it once inside LE would seem much more optimal, and would save duplicating what's already available.


Using a flow graph to control verts is interesting, and I can see how it could get expensive when it comes to performance.

 

I have not messed with it yet, but I have done a little reading on shape keys in Blender, and essentially what you are doing is creating a library of different face shapes: happy face, sad face, mean face, etc. Each individual shape key is just another model. I bought a tutorial through Blender and it came with all of the model files. I was playing around with one of their characters and noticed that it was eating up a lot of memory. Did some digging into the files and found that the library of shape keys was most of the data, around 17 MB. So this would probably not be ideal for real time either.
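For a rough sense of scale: raw vertex positions alone are fairly cheap, so a 17 MB shape-key library suggests the file stores more than bare positions (full mesh copies, normals, UVs, and so on). A back-of-envelope calculation, using the 1920-vert head mentioned earlier in the thread and an assumed library of 50 shapes:

```python
verts = 1920            # head mesh mentioned earlier in the thread
floats_per_vert = 3     # x, y, z
bytes_per_float = 4     # 32-bit floats
shapes = 50             # assumed size of the shape-key library

total_bytes = verts * floats_per_vert * bytes_per_float * shapes
print(f"{total_bytes / (1024 * 1024):.1f} MiB")  # ~1.1 MiB of raw positions
```

Storing only per-shape offsets from the base (or deltas between keyframes, as suggested above) would shrink that further, since most shapes move only a fraction of the head.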

 

Sometimes (most times) I like to be lazy and find the easiest way to do things. Yesterday I found some nice motion capture software. It would take a lot of playing around, but you could probably get some nice results.
