What activates an animation - some thoughts


Roland

I have been thinking a bit about animations. Not about how to create or implement them; that problem is solved by a couple of animation classes and the output from iClone.

 

What I'm now asking myself is how to trigger which animation to run in different situations. I'm talking about specialized animations here, not the normal 'idle', 'walk', 'run' and so on.

 

Let's take an example. Say we have a number of objects in the game which can be picked up or kicked. That means the player must detect which type of object it has come into contact with and then, depending on the type, use some if/else or switch statement to load and execute the animation suitable for that object.

 

Now what if instead the objects themselves told the player which animation to run? The pickup object would tell the player to run a pickup animation, the kick object would tell the player to run a kick animation. In fact those objects could have the animations loaded already and just send them over to the player to execute, either automatically or when the user presses a certain key or clicks the mouse.

 

This would make all those if-statements redundant, and new animations specially designed for handling a certain object could be added, not by changing anything in the player, but simply by ensuring that the new object sends the suitable animation over to the player.

 

I have not written any code for this yet as it's still an idea. Do you have any comments or thoughts about this approach? I'm not sure about it myself; it sounds interesting, but maybe it has some hidden drawbacks.
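To make the idea a little more concrete, something like this rough C++ sketch is what I have in mind (all class and method names are just placeholders, nothing is implemented):

```cpp
#include <string>

class Player;

// Each interactable object knows which animation the player should run on it.
class InteractableObject {
public:
    virtual ~InteractableObject() = default;
    // Called when the player touches or uses the object.
    virtual void Interact(Player& player) = 0;
};

class Player {
public:
    // The player just plays whatever animation it is handed.
    void PlayAnimation(const std::string& animationName) {
        // ... forward to the animation classes / iClone clip here ...
    }
};

class PickupItem : public InteractableObject {
public:
    void Interact(Player& player) override {
        player.PlayAnimation("pickup");   // the object decides, the player obeys
    }
};

class KickableItem : public InteractableObject {
public:
    void Interact(Player& player) override {
        player.PlayAnimation("kick");
    }
};
```

The point being that the player never needs to know what kind of object it touched; it just plays whatever it is handed.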

 

Thanks

AV MX Linux


The approach that EKI One uses for this is a nice one, hence my explanation here:

 

It basically defines Actions which NPCs can subscribe to, and those Actions are tied to stage directions which are translated at the character level into actual animations. So a Walk Action would have a Walk stage direction assigned, a Run Action a Run stage direction, a Pick Up Object Action a Pick Up stage direction, and so on. The Walk stage direction would then be mapped on a per-model basis to the respective model animation.

 

The AI determines, under whatever circumstances, what Action should currently be performed for any given NPC, which may be the result of FSM activity or, say, the result of a trigger firing, like in your example ... proximity to an object for instance. I use a technique similar to the one you're describing for weapon acquisition, where the object notifies the player that it's a weapon and available for pickup.

 

The nice thing about assigning Actions to NPCs and their associated stage directions is that it keeps everything nice and generic. Different NPCs can share the same basic Actions and stage directions, but these result in different actual animations being triggered for individual character models. So a Drink Action could have a genteel character sipping his drink, whilst the same Action on a dwarf might trigger a heavy drinking animation, and the animation names do not need to follow a common naming convention.

 

The AI only ever works at this generic level, passing Action instructions to a stage directions manager that translates them into actual animation calls to the NPC instances concerned.
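To illustrate the layering (this is not EKI One's actual API, just a minimal C++ sketch of the Action -> stage direction -> per-model animation idea):

```cpp
#include <iostream>
#include <map>
#include <string>

enum class Action { Walk, Run, PickUp, Drink };

// Generic stage directions the AI talks in.
enum class StageDirection { Walk, Run, PickUp, Drink };

// Per-model mapping from stage direction to that model's real animation name.
class CharacterModel {
public:
    CharacterModel(std::string name, std::map<StageDirection, std::string> anims)
        : name_(std::move(name)), anims_(std::move(anims)) {}

    void Play(StageDirection dir) {
        std::cout << name_ << " plays animation '" << anims_[dir] << "'\n";
    }

private:
    std::string name_;
    std::map<StageDirection, std::string> anims_;
};

// The stage directions manager: translates generic Actions into animation calls.
class StageDirectionManager {
public:
    void Perform(Action action, CharacterModel& model) {
        static const std::map<Action, StageDirection> toDirection = {
            {Action::Walk,   StageDirection::Walk},
            {Action::Run,    StageDirection::Run},
            {Action::PickUp, StageDirection::PickUp},
            {Action::Drink,  StageDirection::Drink},
        };
        model.Play(toDirection.at(action));
    }
};

int main() {
    CharacterModel gent("gentleman", {{StageDirection::Drink, "sip_drink"}});
    CharacterModel dwarf("dwarf",    {{StageDirection::Drink, "chug_ale"}});

    StageDirectionManager stage;
    stage.Perform(Action::Drink, gent);   // gentleman plays "sip_drink"
    stage.Perform(Action::Drink, dwarf);  // dwarf plays "chug_ale"
}
```

Here the AI only ever mentions Action::Drink; which clip actually plays is decided per model, so no common naming convention is needed.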

Intel Core i5 2.66 GHz, Asus P7P55D, 8Gb DDR3 RAM, GTX460 1Gb DDR5, Windows 7 (x64), LE Editor, GMax, 3DWS, UU3D Pro, Texture Maker Pro, Shader Map Pro. Development language: C/C++


Convention over configuration.

 

If you store the animations in a hash, it's no longer an if/then. Store a key on the item, but it is best to store the animation on the player in the event that there needs to be some hacking.
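A minimal sketch of what that could look like (hypothetical names, C++):

```cpp
#include <string>
#include <unordered_map>

class Player {
public:
    Player() {
        // Animations live on the player, keyed by a generic action key.
        animations_["pickup"] = "player_pickup_clip";
        animations_["kick"]   = "player_kick_clip";
    }

    // The item only supplies the key; the player looks up its own clip,
    // so per-player overrides ("hacking") stay possible.
    void PlayFor(const std::string& actionKey) {
        auto it = animations_.find(actionKey);
        if (it != animations_.end()) {
            Play(it->second);
        }
    }

private:
    void Play(const std::string& clipName) { /* hand off to the animation system */ }
    std::unordered_map<std::string, std::string> animations_;
};

struct Item {
    std::string actionKey;   // stored on the item, e.g. "pickup" or "kick"
};
```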

 

Just opinions.

Yes indeed. I get what you're saying... more efficient to just send over the key. Nice idea :)

AV MX Linux



So in essence each character in the game would have a list of Actions, where the Action implementation (which animation to run) is triggered by a message from ... what? The picked-up objects, or something else?

AV MX Linux


Essentially whatever you need to trigger it.

 

I have an example of a change of guard. In this case the trigger is simply the passage of time: after 6 hours have elapsed, the AI assigns a WalkTo Action to a resting guard in the guard room, which uses the path finding to move to the supplied position, a common point where the exchange of guards is to take place. The same Action is assigned to the currently patrolling guard, and when both reach the designated point (actually a waypoint), the AI takes over again and issues the series of Actions required to initiate the actual handover.

 

In the case of my FPS game and the weapon acquisition scenario, the simple discovery of a weapon object (via a frequently executed raycast based on line of sight) triggers the necessary Action to cause the player to acquire the weapon if so desired.
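For illustration, the trigger side of that might look roughly like this (all names are hypothetical; the raycast is just a stand-in for whatever line-of-sight test the engine provides):

```cpp
enum class Action { AcquireWeapon };

struct WeaponObject { /* position, model, etc. */ };

class AI {
public:
    void Assign(Action /*action*/, WeaponObject& /*target*/) {
        // Queue the Action; the stage direction / animation layer handles the rest.
    }
};

// Stand-in for the engine raycast; returns nullptr when no weapon is in sight.
WeaponObject* RaycastForWeapon() { return nullptr; }

// Called frequently (every frame or on a timer). The trigger only reports what
// it found; the AI decides what Action to issue in response.
void WeaponDiscoveryTrigger(AI& ai, bool playerWantsPickup) {
    WeaponObject* weapon = RaycastForWeapon();
    if (weapon != nullptr && playerWantsPickup) {
        ai.Assign(Action::AcquireWeapon, *weapon);
    }
}
```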

 

It's entirely up to you what mechanisms trigger what and how the AI responds to that.

Intel Core i5 2.66 GHz, Asus P7P55D, 8Gb DDR3 RAM, GTX460 1Gb DDR5, Windows 7 (x64), LE Editor, GMax, 3DWS, UU3D Pro, Texture Maker Pro, Shader Map Pro. Development language: C/C++


Modern game AI is generally built around Finite State Machines or Decision Trees. Both are specific testing and switching mechanisms designed to facilitate AI. Traditional programming conditional switching mechanisms (case, if/then, etc.) become too cumbersome on their own to use for anything other than really simple AI.
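For anyone unfamiliar with the idea, a bare-bones state machine skeleton might look something like this (just a sketch, not a full AI framework):

```cpp
#include <map>
#include <utility>

enum class State { Idle, Patrol, Chase };
enum class Event { PlayerSpotted, PlayerLost, ShiftOver };

class StateMachine {
public:
    StateMachine() {
        // A transition table replaces nested if/else chains.
        transitions_[{State::Idle,   Event::ShiftOver}]     = State::Patrol;
        transitions_[{State::Patrol, Event::PlayerSpotted}] = State::Chase;
        transitions_[{State::Chase,  Event::PlayerLost}]    = State::Patrol;
    }

    void Handle(Event e) {
        auto it = transitions_.find({current_, e});
        if (it != transitions_.end()) {
            current_ = it->second;   // enter/exit hooks could issue Actions here
        }
    }

    State Current() const { return current_; }

private:
    State current_ = State::Idle;
    std::map<std::pair<State, Event>, State> transitions_;
};
```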


Intel Core i5 2.66 GHz, Asus P7P55D, 8Gb DDR3 RAM, GTX460 1Gb DDR5, Windows 7 (x64), LE Editor, GMax, 3DWS, UU3D Pro, Texture Maker Pro, Shader Map Pro. Development language: C/C++



Okay, I got it. Thanks a lot Pixel. I got a great deal of inspiration and ideas from this.

AV MX Linux


You're welcome Roland. AI really is the place where all the fun starts, where everything we work towards finally starts to happen. I'm sure you're really going to enjoy it :)


Intel Core i5 2.66 GHz, Asus P7P55D, 8Gb DDR3 RAM, GTX460 1Gb DDR5, Windows 7 (x64), LE Editor, GMax, 3DWS, UU3D Pro, Texture Maker Pro, Shader Map Pro. Development language: C/C++

