
Blog Comments posted by Niosop

  1. Haven't worked w/ LE in a while, so there might be some reason I'm forgetting why this wouldn't work, but the way I always handle deforming clothing is to weight every possible clothing item to the same armature and export it in the same file as the character, but as separate objects. Then in the engine I just disable whatever clothing the character isn't wearing. I've used this method before and am using it right now for a game I'm working on, and it works great.
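
    In case it helps, the runtime side is just a per-object visibility toggle. A minimal Lua sketch, where FindClothingEntity and the Show/Hide calls are assumptions (the exact functions depend on the engine version):

        -- All clothing meshes share the character's armature and live in the same file;
        -- here they're looked up by name via a hypothetical helper.
        local outfit = {
            shirt  = FindClothingEntity(character, "shirt"),   -- hypothetical lookup
            pants  = FindClothingEntity(character, "pants"),
            helmet = FindClothingEntity(character, "helmet"),
        }

        -- Show only the items the character is actually wearing; hide the rest.
        local function ApplyOutfit(worn)   -- worn is a set, e.g. { shirt = true, pants = true }
            for name, entity in pairs(outfit) do
                if worn[name] then
                    entity:Show()   -- assumed API; adjust to the engine's actual call
                else
                    entity:Hide()
                end
            end
        end

        ApplyOutfit({ shirt = true, pants = true })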

  2. The main advantage TCP gives you is guaranteed, in-order delivery: packets sent via TCP are processed in the order they were sent (TCP handles reordering for you) and are retransmitted if they get lost. Also, having a TCP stream associated with a client lets you authenticate when the stream is established and not worry about it after that. It also makes spoofing the source IP much, much harder, thanks to TCP's three-way handshake.

     

    Everything TCP does can be implemented on top of UDP on the application side, but eventually you end up just remaking TCP, with all of that extra work handled in your application instead of by the networking stack.

     

    Many game libraries use UDP and selectively re-implement those features of TCP that they need, such as discarding late packets or requesting retransmittal of lost packets.

     

    It comes down to using the right tool for the job. Often a mixture of the two is the correct approach.
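
    As a concrete example of selectively re-implementing TCP features on top of UDP, here's a minimal Lua sketch of sequence-numbered datagrams: late packets get discarded and gaps get remembered so they can be re-requested. The udp object stands in for whatever socket wrapper is in use; all names here are assumptions.

        local nextSeqToSend = 0      -- sender side
        local latestSeqSeen = -1     -- receiver side
        local missing = {}           -- sequence numbers we still want retransmitted

        local function Send(udp, payload)
            udp:send(nextSeqToSend .. "|" .. payload)   -- prepend a sequence number
            nextSeqToSend = nextSeqToSend + 1
        end

        local function Receive(udp)
            local packet = udp:receive()
            if not packet then return nil end
            local seq, payload = packet:match("^(%d+)|(.*)$")
            seq = tonumber(seq)
            if seq > latestSeqSeen then
                -- remember any gap so those packets can be re-requested from the sender
                for s = latestSeqSeen + 1, seq - 1 do missing[s] = true end
                latestSeqSeen = seq
                return payload
            elseif missing[seq] then
                missing[seq] = nil   -- a retransmission we asked for; accept it
                return payload
            else
                return nil           -- older than what we've already handled: discard it
            end
        end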

  3. There's no way you can account for every possible use case in the general release editor, and even trying would result in an unusable mess with a bunch of features that most people have no use for.

     

    Say I want to select a bunch of empty game objects, either exported from my modeling program or manually placed. I want to select an arbitrary number of models and have it randomly place one of those models at each of the empty positions, as a child of that empty, based on a seed value I provide, deleting any existing children. It should then randomly rotate each of these models about an axis I specify, scale them between a min and max value I provide, and then drop them onto the ground using either physics or a raycast. These should be placed at edit time, not runtime, so that I can use that as a base and then manually tweak them.

     

    The LUA script to do this isn't difficult at all (there's a rough sketch at the end of this comment). What makes it powerful and painless is having some way to run the script on demand (a menu item or button I can click) and an API for building a small interface: pick the empties, pick the models, set the random seed, the axis of rotation and the min/max scale factors, and hit "Run".

     

    This might or might not be useful enough to be included in the general editor, but I don't want to wait for you to implement it (no matter how fast you are) when I could spend 5 minutes and do it myself. And what if I didn't want to select the empties myself, but just have it select any object with a name that starts with "PH1"? Being able to customize the editor to fit the needs of a particular project is a MASSIVE workflow enhancer. On any given project I usually have a half dozen or so editor scripts that are unique to that project and would be totally useless in any other project, but save me a ton of time on that particular project. Anything from automatically assigning materials to characters based on name to automatically assigning scripts to game objects based on name or some other criteria.

     

    I love the LE engine. But I don't really use it due to workflow issues. LE3 is addressing the biggest of these (asset importing), but editor scripting is right up there with it.
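
    For what it's worth, the placement script from the example above would look roughly like this in Lua. Every editor-facing call here (selection access, the menu hook, DropToGround, etc.) is a made-up placeholder, which is exactly the API being asked for; only the structure and the math are meant literally.

        -- Hypothetical edit-time scatter tool: one random model per selected empty.
        local function ScatterModels(empties, models, seed, axis, minScale, maxScale)
            math.randomseed(seed)                          -- same seed, same layout
            for _, empty in ipairs(empties) do
                DeleteChildren(empty)                      -- remove any previous result
                local model = models[math.random(#models)]
                local instance = Instantiate(model, empty)       -- parented to the empty
                RotateAroundAxis(instance, axis, math.random() * 360)
                local s = minScale + math.random() * (maxScale - minScale)
                SetScale(instance, s, s, s)
                DropToGround(instance)                     -- physics settle or downward raycast
            end
        end

        -- The hypothetical editor hook: a menu item that runs it on the current selection.
        -- Editor.AddMenuItem("Tools/Scatter Models", function()
        --     ScatterModels(Editor.GetSelectedEmpties(), Editor.GetSelectedModels(),
        --                   1234, "Y", 0.8, 1.2)
        -- end)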

  4. "It would be great if the actual editor was extensible via LUA so community members could extend/enhance the tools."

     "It's kind of frightening to me that people are already unsatisfied with a tool that isn't even done. What needs to be fixed?"

     

    As you know, I'm a big fan of both LE and Unity. As I see it, there are a few reasons why Unity has gained popularity.

     

    1) Multi-platform deployment. LE3 takes care of this, and if it's all included in the base license then it's even better than Unity, which charges separately for Android and iPhone/iPad deployment licenses. The only thing missing in LE is web deployment, but that's not a huge deal to me.

     

    2) Component-based programming. LUA scripting in LE does this, and it's great for making reusable, shareable components.

     

    3) Excellent art pipeline. LE3 is addressing this and looks really promising. This is a huge one for me.

     

    4) Being able to script and extend the editor. This has been instrumental in building the huge ecosystem of 3rd party plugins available for Unity. The issue isn't what we think is missing right now, but what we'll find to be missing later. You'll always be behind feature-wise if you try to do everything yourself. While I agree with your earlier post about small dev teams being more agile than big ones, neither can really compete with crowdsourcing when everyone is scratching their own particular itch in their area of expertise. A quick example:

     

    Unity terrain sucks, so I wrote a mesh terrain painting system and shader set. I have a giant multi-million-poly mesh terrain with cave entrances and overhangs; I wrote a Blender script to break it into smaller chunks to stay under Unity's 64k vert limit and let frustum and occlusion culling actually be useful, then import them all. I then need to assign the same material to hundreds or thousands of chunks. I can either spend a ton of time dragging materials onto each chunk, or write a quick editor script that lets me shift-select all the chunks, pick my "Multi Object Material Assigner" menu item, pick the material, and have the script assign it to all the selected objects at edit time. I also have an asset postprocessing script that looks at the name of each object being imported and, for anything with "chunk" in the asset name, creates colliders, skips animation import, skips material creation, and scales the import. It's very specific to my needs, so it's not really something I would expect in the general editor, but it saves me many hours of tedious work and makes me happy. The "Multi Object Material Assigner" script is general enough to be usable elsewhere, so I often drop it into other projects where it's needed and instantly have a menu item for it.

     

    For the same system, I need to average the vertex normals of the edge verts so there isn't a visible seam between chunks. The number of verts that need modifying isn't that large, but searching through everything to find them takes a while (many minutes on a medium-sized terrain), so doing it at runtime isn't an option. Instead, I have an editor script that does the search and records the object, vert index, and calculated normal and tangent in a file I specify; at runtime I just load that file and apply it in less than a second.

     

    TL;DR - Being able to customize the editor to make it have features specific to your project is a really good thing.
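
    The "Multi Object Material Assigner" itself is only a few lines once an editor exposes selection and material picking. A sketch in Lua, with every editor call a stand-in for an API that doesn't exist yet:

        -- Hypothetical sketch: assign one material to everything selected, at edit time.
        local function AssignMaterialToSelection()
            local material = Editor.PickMaterial()        -- let the user choose a material
            if material == nil then return end
            for _, entity in ipairs(Editor.GetSelection()) do
                entity:SetMaterial(material)              -- applied to every selected chunk
            end
        end

        -- Editor.AddMenuItem("Tools/Multi Object Material Assigner", AssignMaterialToSelection)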

  5. I'd say get in touch with this guy: http://www.arongranberg.com/unity/a-pathfinding/ and see if he'd want to do it.

     

    His Unity implementation is pretty nice, offers good performance, and has a bunch of features including grid-based, point-based, imported navmeshes, and Recast-based navmesh generation.

     

    I like having the option of using different types: navmeshes are very performant if you don't need to change them at runtime, but a grid is a lot faster to update if you need to dynamically add obstacles at runtime.

     

    Multithreading is also a huge plus, since I usually don't care if a path takes a couple frames to solve, as long as it doesn't impact framerate while it's solving.

     

    Anyways, just my two cents, which isn't worth much in this economy.

     

    P.S. I *think* recast works by voxelizing the poly mesh then generating a navmesh that fits that. But that's just the impression I got from reading something somewhere and may be totally incorrect.
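
    Going back to the multithreading point: it's really just a request/callback pattern, where you ask for a path, keep rendering, and start following it whenever it arrives. A rough Lua sketch, with the pathfinder API made up:

        local currentPath = nil

        local function OnPathReady(path)
            currentPath = path               -- may arrive several frames after the request
        end

        local function OrderMove(unit, target)
            -- hypothetical async call: solved on a worker thread, returns immediately
            Pathfinder.RequestPath(unit.position, target, OnPathReady)
        end

        local function Update(unit, dt)
            if currentPath then
                FollowPath(unit, currentPath, dt)   -- hypothetical steering along the path
            end
            -- framerate is unaffected while the path is still being solved
        end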

  6. When you do go for some first-round funding, I'd consider spending it on middleware licensing and integration. Even if you got a couple of additional programmers, it would still take forever to implement stuff that's already out there.

     

    Something like CloakWorks could probably be had for very cheap as it's just starting up.

     

    Scaleform or another GUI middleware would also be very nice to have. An integrated lightmapper could do wonders as well. You already have one of the best dynamic lighting systems out there, but it's never going to compare to precalculated lighting. Combining the two could give awesome performance and quality gains.

     

    The main problem I have w/ Leadwerks at the moment is just workflow. Having to close the editor to reimport an asset, run separate tools to do file conversions, etc. is a turn-off. But I think you were already planning on addressing this in 3.0.

     

    Anyways, keep on keeping on. I look forward to what Leadwerks can become and will probably pick up a 3.0 beta license once it's available depending on the state of the editor.

  7. Looks kind of cool, but the effect seems a little odd, like a bubble of darkness behind objects as you look at them. The darkness always stays on the far side of an object no matter what direction you look at it from, regardless of what's around it. If you were carrying a flashlight it might look correct, but w/ lighting that isn't moving you don't expect the lighting to change just because you move.

     

    But it does look pretty cool.

  8. Programming really takes a certain mindset. You have to be able to analyse problems, break them down into smaller problems, and come up w/ a step-by-step procedure (an algorithm) for solving them. The real question is "Do you enjoy programming?" If the answer is no, then it doesn't matter whether you're any good at it; it's a waste of time and life to do things you don't enjoy. If you prefer modeling, then focus on that and join forces w/ someone like me who enjoys programming but sucks at the artistic stuff. I enjoy all aspects of game development, but I'll probably never get anything done because I spread my time among too many of them to really become "good" at any one.

     

    Anyways, in short, do what you enjoy. As long as you are enjoying it, it's not wasted time, even if you never master whatever it is.

  9. Yay, a breakthrough. After finding and discarding several libraries, and then starting to write my own implementation, I've found the best thing since sliced bread (as far as Ogg/Theora is concerned): libtheoraplayer! Look it up on SourceForge; it makes everything SOOOO simple. I'll release a video of it in action later, once I have something worth releasing, although w/ that library I'm sure some of you could release something much sooner.
