
Pixel Perfect

Members
  • Posts

    2,110
  • Joined

  • Last visited

Posts posted by Pixel Perfect

  1. Although I have no working knowledge of networking and multi-player games, I'd have thought that spawning all the players on the server was essential, as the server needs to simulate the game in its entirety, and I can't see how it would do so without actually running the game. True, the graphics output would not normally be required, because there is no need to view the game on the server; it simply runs the simulation on behalf of the clients and acts as the master sync, so to speak.

  2. This is not actually necessary unless you have graduated levels of transparency. You can have full transparency in the main world; you just need to ensure you use the alphatest versions of the shaders in the material files. Look at the various tutorials available on the Leadwerks Wiki; there is one on materials and texturing. Or, conversely, look at Aggror's Leadwerks Guide.

  3. So if your character collides with an ammo pickup, locally on the client you can check that collision and instantly give the player the ammo locally, but the server is doing the same checks and will also give the player the ammo on the server and send that it did so or not to the client, and the client will always listen to the server.

    So if I understand this correctly, the client in this scenario has given the ammo to the player ahead of the server making the decision, in order to mask the lag. So what happens on the client if the server decides it didn't collide and the player has not received the ammo? Does the client then take the ammo back again and place it back in the pickup?

     

    I've never done any networked multi-player design so this is quite intriguing!

  4. I pretty much agree with that. In the early days of Leadwerks I needed to know if I could create the kind of space scenes I needed for my STRANDED game, so I developed some motion code and tested it in a simple scene. It was all pretty basic but inherently worked, and I left it at that point with the intention of picking it up again and expanding on the ideas once I'd completed the rest of my game engine.

     

    This was the initial result:

     

    Space Scene Video (wmv)

    • Upvote 1
  5. I'd echo gamecreator's comments. C++ has no inherent networking capability; you either need to write your own or use a third party library. RakNet has the best reputation in my experience.

  6. I like the idea of the speed of construction and the automated tools, but with all due respect the output of that process is nowhere near the quality of the tree in the first tutorial. That's a seriously beautiful tree. I might have a go at producing something like that myself at some point.

    • Upvote 1
  7. on 2) Some of the road node data is currently stored in some binary format in the sbx file by the editor (I cursed Josh when I saw this as it completely screwed up my sbx file parsing routine at the time lol). Presumably, if Josh is prepared to divulge the format, then it would be quite possible to procedurally output this data.

     

    Or, as Chris suggests above, feed the shader directly.

  8. Modern game AI is generally geared around Finite State Machines or Decision Trees. Both are specific testing and switching mechanisms designed to facilitate AI. Traditional programming conditional switching mechanisms (case, if/then, etc.) become too cumbersome on their own to use for anything other than really simple AI.

    • Upvote 1
  9. Essentially whatever you need to trigger it.

     

    I have an example of a change of guard. In this case the trigger is simply the passage of time: after 6 hours have elapsed, the AI will assign a WalkTo Action to a resting guard in the guard room, which will use the path finding to move to the position supplied, that of a common point where the exchange of guards is to take place. The same Action is assigned to the currently patrolling guard, and when both reach the designated point (actually a waypoint) the AI takes over again and issues the series of Actions required to initiate the actual handover.

     

    In the case of my FPS game and the weapons acquisition scenario, the simple discovery of a weapons object (via a frequently executed raycast based on line of sight) will trigger the necessary Action to cause the player to acquire the weapon if so desired.

     

    It's entirely up to you what mechanisms trigger what, and how the AI responds to that.

  10. The approach that EKI One uses for this is a nice one, hence my explanation here:

     

    It's basically to define Actions which NPCs can subscribe to, and those Actions are tied to stage directions which are translated at the character level to actual animations. So a Walk Action would have a Walk stage direction assigned, a Run Action a Run stage direction, a Pick Up Object Action a Pick Up stage direction, and so on. The Walk stage direction would then be mapped on a per-model basis to the respective model animation.

     

    The AI determines, under whatever circumstances, what Action should currently be performed for any given NPC, which may be the result of FSM activity or, say, the result of a trigger firing, like in your example ... proximity to an object for instance. I use a similar technique to the one you're describing for weapon acquisition, where the object notifies the player it's a weapon and available for pickup.

     

    The nice thing about assigning Actions to NPCs and their associated stage directions is the fact that it keeps everything nice and generic. Differing NPCs can share the same basic Actions and stage directions, but these result in different actual animations being triggered for individual character models. So a Drink Action could have a genteel character sipping his drink whilst the same Action on a dwarf might trigger a heavy drinking animation, and the animation names do not need to follow a common naming convention.

     

    The AI only ever works at this generic level passing Action instructions to a stage directions manager that translates these into actual animation calls to the NPC instances concerned.

  11. It's a nice attempt, Michael, but the apples just don't work as simple images attached to planes; they look distorted and completely unnatural unless viewed from exactly the right angle. They'd be better as real 3D models with smoothing, but I'm not sure how many polys that would add to the overall model.
