Posts posted by Mumbles

  1. That doesn't cover issues like if I allocate a char buffer or something in a function.

     

    Wrap calls to malloc, that's easy

     

    As part of what I'm doing:

     

     

// this is the callback for allocating Newton memory
void * NewtonMalloc (int sizeInBytes)
{
    TotalNewtonAllocated += sizeInBytes;
#ifdef DEBUG_NEWTON_MALLOC
    std::cout << "Newton malloc'd " << sizeInBytes << " bytes. (Total: " << TotalNewtonAllocated << ")\n";
#endif
    return malloc (sizeInBytes);
}

// this is the callback for freeing Newton memory
void NewtonFree (void * ptr, int sizeInBytes)
{
    TotalNewtonAllocated -= sizeInBytes;
#ifdef DEBUG_NEWTON_MALLOC
    std::cout << "Newton freed " << sizeInBytes << " bytes. (Remaining: " << TotalNewtonAllocated << ")\n";
#endif
    free (ptr);
}

     

realloc is more of the same, as is delete. new is a little more complicated, though: you'd probably find yourself writing one function per object type, and on top of that, one per overloaded constructor. Unless I'm missing something.

That doesn't explain why I can send 14 bytes or 114 bytes with send() and see not a millisecond of difference in transfer speed. It would mean that I have to manage when I call send(), waiting until I have a big enough string (or block of binary data) to send.

     

Never do that for games. No client wants to be kept hanging around waiting for physics updates simply because "there's less overhead if we wait 100 milliseconds for a few more updates". Send them instantly, or your client-side synchronisation will be horrible.

     

     

If you're employing the same idea I am, "full" physics updates go via TCP, and they must arrive on time. They're sent only once a second, so any going missing would be a disaster - I might increase it to 2 per second. But delta updates for dead reckoning go via UDP instead, and it's not a serious showstopper if one arrives late (or never at all).

     

     

    For everything but games, yes, wait for data to reduce the overheads, but for games, ignore it and send instantly.

     

     

Obviously, it's not very effective to send small packets, because each send costs frame time, and it would be much more efficient to send larger packets, since the FPS drop is the same.

     

    It would be a good idea to place the networking in its own thread - possibly more than one thread even. For a server, I'd say a minimum of one thread per client

     

     

As for sending data, the RTT is all that's really important. You don't need to know how quickly 100 MB transfers from the sender to its receiver. But if it's taking 200 milliseconds for any packet to be received, then in a game that ticks 25 times a second, the data is 5 ticks out of date by the time the client receives it. So if the server overrides any predicted physics, there could be 5 ticks' worth to recalculate, which a player has a good chance of visibly noticing, and that hurts playability.

     

    On the other hand, someone who has a ping of 30 could completely catch up and be synchronised with the server, resulting in any server enforced correction being barely noticeable at all...

  3. Now I think I have all tests done, and can make a super fast network library. Apparently it needs a send queue, where it collects data for a packet before it sends it, to maximize the speed.

     

    That's the default for winsock TCP sockets...

     

    If you want to disable that, and send instantly:

     

     

BOOL bOptVal = TRUE;
int bOptLen = sizeof(BOOL);
int iResult = setsockopt(/* your socket handle */, IPPROTO_TCP, TCP_NODELAY, (char *)&bOptVal, bOptLen);

     

    pinched straight out of the MSDN...

     

I presume those silly capitalised BOOL and TRUE come in via windows.h (windef.h, strictly), which winsock2.h pulls in. However, IntelliSense seems to be having one of those days where it doesn't want to tell me.

SDL_net does rely on SDL. Under Windows, that does include a dependency on DirectX being installed.

     

    Perhaps I was reading the documentation for the Linux version (although if that's the case, why haven't I found the doc for the Windows build? I mean, I use SDL for joystick handling!). I mean, SDL did start as a Linux library didn't it?

     

SDL_net relying on SDL... I didn't know that, but it sounds plausible. The only part I had a hard time believing was the SDL and DirectX bit. Although thinking about it, I remember you telling me in the past how it uses DirectInput for its joystick handling on Windows, hence the silly triggers behaviour with 360 controllers in Windows that doesn't happen in Linux. So much so that (off topic) I eventually programmed my controller class to use both SDL and XInput for joystick support (though only one at a time per controller), just on the off chance that if a 360 controller was used in Windows, both triggers would work correctly.

  5. knowing Mumbles, she's a hardcore Debian fan (just like me)

     

not enough to use the testing build, though. As you know, Debian users on the stable build are few and far between. I might have to get Razor-qt manually because that's a while off going stable - but upgrading from KDE 3 to KDE 4 was the biggest mistake ever.

  6. I don't know how the terrain system works. But on the off chance it uses Newton:

     

    What is the world size?

    Is the point on the terrain inside the world boundary?

    Does the raycast start inside the world boundary?

    Does the raycast end inside the world boundary?

     

    The first two of those are the most crucial, as only bodies inside the world can trigger the (hidden) "prefilter" and "process hit" callbacks.

  7. This is probably an issue many of you have overlooked, but both the wiki, and (certainly for C) the headers only list values for keys you would expect to find on a US keyboard. Those of us with British keyboards are probably (in fact almost certainly) in a minority, but it's irritating when you're hard coding a specific button for the console. Yes, I know hard coding is normally considered bad practice, but it's normally considered OK for the console button not to be redefinable.

     

    The irritation comes from the fact that on a standard US keyboard, the key next to the number 1 is ` and with shift held down, it's ~ and so you can check for it by using KeyHit(KEY_TILDE).

     

    Unfortunately, that's not the case for British keyboards, our ~ is next to the enter key, instead, our key next to the number 1 generates:

     

` on its own

    ¬ when pressed with shift

¦ when pressed with Alt Gr (that's another key that US keyboards don't have... it's in place of the right Alt, but doesn't behave the same way). Some keyboards show it as a solid bar, with the split bar on the backslash. It may be down to the OS, but Windows XP, when set to the English UK keyboard, produces the split bar.

     

    (As a side note, UK users might have noticed the Euro currency sign on the 4 key, shared with the dollar sign, again, that's produced with the Alt Gr key. European vowels with accents are produced the same way too (áéíóú although it doesn't work with the Spanish letter ñ), but since Leadwerks can't test for secondary or tertiary key functions, that's just a bit of a "did you know")

     

Back on track: the key I wanted the number for isn't listed in the header files, so I had to write a quick for loop to see which code it was, and I've done the same for the other problematic key: the backslash, which is in a totally different place on our keyboards compared to US ones (ours is next to the left shift).

     

enum
{
    KEY_UK_SPLITBAR = 223,   // key code found empirically on a UK layout
    KEY_UK_BACKSLASH = 226   // the \ key next to left shift on UK keyboards
};
    

     

    There is also an irregularity:

     

    The hash (#) key, which also produces a tilde (~) when shift is pressed is tested with the number 222, which is KEY_QUOTES

     

    The quote (') key, which also produces an at sign (@) when shift is pressed is tested with the value 223, which is KEY_TILDE

     

As you see, these two are in the wrong order on British keyboards. Not exactly a showstopper, but it could cause some confusion if you're playing someone's game with hard-coded keys and they don't seem to respond, or if a game has a text-input capability and the input doesn't seem to match.

  8. The other alternative is to remove the name from the enum, since it is already encapsulated by the class

     

    class aLabel
    {
       public:
       enum
       {
           TP_Center = 0,
           TP_TopLeft,
           TP_TopRight,
           TP_BottomLeft,
           TP_BottomRight
       };
       int textPosition;
    };
    

     

switch (textPosition)
{
case TP_Center:
    // Blah blah - do stuff
    break;
case TP_TopLeft:
    // More blah blah-ing
    break;
}
    

     

     

If the switch occurs outside an aLabel member function, then just change "case TP_Center:" to "case aLabel::TP_Center:", and obviously change the switch expression as well, to either (object instance).textPosition or, if textPosition isn't public, (object instance).getTextPosition().

  9. KeyHit and KeyDown are both available in the C DLL and they behave exactly as they would in Blitz Max (KeyPressed isn't available and MouseButtonDown is now MouseDown(int button)). It would make sense if Josh's TKeyHit(int key) command was just a single line wrapper function

     

     

    return KeyHit(key)

     

     

so, if Lua has access to the same commands, then it should work. Then again, disclaimer - I don't use Lua, which is why I'm sticking with your method of NOT-ing a variable... (which I'd never actually thought of before).

     

     

     

    if KeyHit(KEY_1)==1 then
    questionScreen=questionScreen*-1
    end

     

     

I myself don't use Blitz (or Lua), but I learned the distinction between hit and down in my earlier years using Blitz3D, and indeed DarkBASIC had it too - except back in my DB days, I couldn't fathom the difference between "object hit" and "object collision", which worked in much the same way as KeyHit and KeyDown respectively, even though it was there in plain English in the help page.

  10. you've gotta make the toggle happen once and prevent the toggle from happening further during the key being pressed in order for the toggle to work properly...

     

    --Mike

     

    The solutions provided already take this into account. They use KeyHit which (in Blitz) returns the number of times the specified key has been pressed since it was last checked. I think in LE 2 it's been reworked to simply be 1 if the key has been released and pressed again since the last check, and 0 if it's either not pressed, or still held down from the last check...

     

    However your point would be true if KeyDown was used since that simply returns 1 if the key is currently pressed, regardless of whether it is still being held down from last time, or a totally unique keypress.

     

The flickering is more likely due to declaration and initialisation inside the main game loop (as Pixel Perfect identified), when it should be outside the loop - but that wasn't explicitly stated when the solution was given.

  11. Personally, I prefer raycasting for all projectiles (bullets and rockets).

     

    If realism isn't important and you don't care about bullet drop, wind or air resistance then it's a lot faster, and for bullets, you don't even need to draw it - you can just draw a spark and a bullet hole where it hit.
