
timeGetTime: identifier not found?


Josh

I am getting this error when I use timeGetTime():

error C3861: 'timeGetTime': identifier not found

 

I am including <windows.h> and using it to create a Win API window. I have also added winmm.lib to the project's Linker > Input > Additional Dependencies setting.

 

I searched my Visual Studio folder for winmm.lib, but nothing was found. I might not have the Platform SDK installed, so now I am looking for that... but the link to download it is dead, and the search on their site doesn't return any useful results:

http://www.microsoft.com/msdownload/platformsdk/sdkupdate/

My job is to make tools you love, with the features you want, and performance you can't live without.


VS 2008 Express.

 

I am having better results with the C clock() function. On my system, the resolution is 1000 ticks per second.

 

The QueryPerformanceCounter() function I used earlier might be causing some timing inaccuracies on some people's systems, according to what I have been reading.

My job is to make tools you love, with the features you want, and performance you can't live without.


Why do you use Windows API functions when there is a cross-platform alternative available?

You will only make the engine more bloated and buggy that way :blink:

Use (double)clock()/(double)CLOCKS_PER_SEC for millisecond measurements; it's completely cross-platform, and it's always much more accurate than the Windows API functions.
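For example, a minimal sketch of timing a piece of code this way (plain C, nothing Windows-specific; the busy loop is just a stand-in for whatever you want to measure):

#include <stdio.h>
#include <time.h>

int main()
{
	clock_t start = clock();

	// stand-in for the work you want to measure
	volatile double x = 0;
	for (int i = 0; i < 10000000; i++) x += i;

	clock_t end = clock();
	double seconds = (double)(end - start) / (double)CLOCKS_PER_SEC;
	printf("elapsed: %.3f s (%.1f ms)\n", seconds, seconds * 1000.0);
	return 0;
}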

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


On my system, winmm.lib is in the Platform SDK folder.

 

The latest download (ISO image) that I could find is here: Microsoft Windows SDK for Windows 7 and .NET Framework 4 (ISO)

 

I have been using the QueryPerformanceCounter() function in my code for some years and have not experienced any issues, but as you suggest, there are certainly indications that there can be problems with this call on multi-processor systems.
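For reference, the pattern I use is roughly this (just a sketch; the frequency is queried once and the counter difference is divided by it):

#include <stdio.h>
#include <windows.h>

int main()
{
	LARGE_INTEGER freq, t1, t2;
	QueryPerformanceFrequency(&freq);	// counts per second, fixed at boot

	QueryPerformanceCounter(&t1);
	Sleep(5);				// something to measure
	QueryPerformanceCounter(&t2);

	double ms = (double)(t2.QuadPart - t1.QuadPart) * 1000.0 / (double)freq.QuadPart;
	printf("elapsed: %.3f ms\n", ms);
	return 0;
}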

Intel Core i5 2.66 GHz, Asus P7P55D, 8Gb DDR3 RAM, GTX460 1Gb DDR5, Windows 7 (x64), LE Editor, GMax, 3DWS, UU3D Pro, Texture Maker Pro, Shader Map Pro. Development language: C/C++


I am having better results with the C clock() function. On my system, the resolution is 1000 ticks per second.

 

Well Josh, don't feel too bad, you already knew more than me. I never even knew clock() existed! All this time I've been using GetTickCount(), you know, the one that only updates about once every 16 ms on Win32... :blink:

LE Version: 2.50 (Eventually)


Josh,

You probably have this defined before the include of windows.h, I guess in stdafx.h:

 

#define WIN32_LEAN_AND_MEAN             // Exclude rarely-used stuff from Windows headers
// Windows Header Files:
#include <windows.h>

 

Remove (or comment out) the WIN32_LEAN_AND_MEAN line and you will be fine. That define stops windows.h from pulling in mmsystem.h, which is where timeGetTime() is declared:

 

 

//#define WIN32_LEAN_AND_MEAN             // Exclude rarely-used stuff from Windows headers
// Windows Header Files:
#include <windows.h>
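Alternatively, as far as I know you can keep the define and pull in the multimedia header yourself, something like this (the #pragma is MSVC-specific):

#define WIN32_LEAN_AND_MEAN             // Exclude rarely-used stuff from Windows headers
#include <windows.h>
#include <mmsystem.h>                   // declares timeGetTime() and friends
#pragma comment(lib, "winmm.lib")       // link winmm.lib without touching the project settings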

AV MX Linux


Roland is right. I read about the WIN32_LEAN_AND_MEAN define, but thought it was something in your program source and didn't know where else to look for it. Good to know.

If clock() has millisecond accuracy on all machines, I'll stick with that one.

My job is to make tools you love, with the features you want, and performance you can't live without.


Roland is right. I read about the WIN32_LEAN_AND_MEAN define, but thought it was something in your program source and didn't know where else to look for it. Good to know.

If clock() has millisecond accuracy on all machines, I'll stick with that one.

You have to be aware that using any of the mentioned functions only gives you a best precision of about 15 milliseconds.

All of those (clock, timeGetTime, GetSystemTime, etc.) are updated on each NT clock interrupt, which by default is about 15.6 milliseconds.

 

You can read a more detailed description of how to get somewhat better results in this article:

Implement a Continuously Updating, High-Resolution Time Provider for Windows
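Very roughly, the idea in that article (as I understand it) is to take one reference reading of the system time together with QueryPerformanceCounter, and then derive later timestamps from the counter. A stripped-down sketch of that idea, not the article's actual code, and without the periodic re-synchronization a real implementation needs:

#include <stdio.h>
#include <windows.h>

// One-time reference: wall-clock time and performance counter taken together.
static FILETIME      ref_ft;
static LARGE_INTEGER ref_qpc, qpc_freq;

void init_hires_clock()
{
	QueryPerformanceFrequency(&qpc_freq);
	GetSystemTimeAsFileTime(&ref_ft);
	QueryPerformanceCounter(&ref_qpc);
}

// Current time in 100-ns FILETIME units, with sub-millisecond resolution.
unsigned long long hires_now()
{
	LARGE_INTEGER now;
	QueryPerformanceCounter(&now);
	unsigned long long base  = ((unsigned long long)ref_ft.dwHighDateTime << 32) | ref_ft.dwLowDateTime;
	unsigned long long ticks = (unsigned long long)(now.QuadPart - ref_qpc.QuadPart);
	return base + ticks * 10000000ULL / (unsigned long long)qpc_freq.QuadPart;
}

int main()
{
	init_hires_clock();
	unsigned long long a = hires_now();
	unsigned long long b = hires_now();
	printf("two readings, %llu x 100 ns apart\n", b - a);
	return 0;
}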

AV MX Linux


That article is quite useless since it doesn't even mention the clock() function.

On all machines I've tried, clock() gives the smallest millisecond accuracy available on the machine.

I made a test program which shows that the Windows API calls use the same resolution as clock().

However, clock() is more accurate since it starts at 0, so there are no double/float precision errors when measuring the result:

#include "stdio.h"
#include "time.h"
#include "windows.h"
int main()
{
double t1,t2,tt;
for(int z=0;z<10;z++)
{
	t1=clock();
	t2=clock();
	// wait for smallest change (15ms on windows)
	while(t2<=t1)t2=clock();
	tt=(t2-t1);
	printf("%17.8f %17.8f %17.8f\n",
		t1/(double)CLOCKS_PER_SEC,
		t2/(double)CLOCKS_PER_SEC,
		tt/(double)CLOCKS_PER_SEC);
}
printf("-------------------------------------------------------\n");
for(int z=0;z<10;z++)
{
	t1=GetTickCount();
	t2=GetTickCount();
	// wait for smallest change (15ms on windows)
	while(t2<=t1)t2=GetTickCount();
	tt=(t2-t1);
	printf("%17.8f %17.8f %17.8f\n",
		t1/1000.0,
		t2/1000.0,
		tt/1000.0);
}
return 0;
}

Output:

       0.00000000        0.01500000        0.01500000
      0.01500000        0.03100000        0.01600000
      0.03100000        0.04600000        0.01500000
      0.04600000        0.06200000        0.01600000
      0.06200000        0.07800000        0.01600000
      0.07800000        0.09300000        0.01500000
      0.09300000        0.10900000        0.01600000
      0.10900000        0.12500000        0.01600000
      0.12500000        0.14000000        0.01500000
      0.14000000        0.15600000        0.01600000
-------------------------------------------------------
  22676.37500000    22676.39000000        0.01500000
  22676.39000000    22676.40600000        0.01600000
  22676.40600000    22676.42100000        0.01500000
  22676.42100000    22676.43700000        0.01600000
  22676.43700000    22676.45300000        0.01600000
  22676.45300000    22676.46800000        0.01500000
  22676.46800000    22676.48400000        0.01600000
  22676.48400000    22676.50000000        0.01600000
  22676.50000000    22676.51500000        0.01500000
  22676.51500000    22676.53100000        0.01600000

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


That article is quite useless since it doesn't even mention the clock() function.

On all machines I've tried, clock() gives the smallest millisecond accuracy available on the machine.

I made a test program which shows that the Windows API calls use the same resolution as clock(). However, clock() is more accurate since it starts at 0, so there are no double/float precision errors when measuring the result (test program and output quoted above).

#include "stdio.h"
#include "time.h"
#include "windows.h"
int main()
{
double t1,t2,tt;
for(int z=0;z<10;z++)
{
	t1=clock();
	t2=clock();
	// wait for smallest change (15ms on windows)
	while(t2<=t1)t2=clock();
	tt=(t2-t1);
	printf("%17.8f %17.8f %17.8f\n",
		t1/(double)CLOCKS_PER_SEC,
		t2/(double)CLOCKS_PER_SEC,
		tt/(double)CLOCKS_PER_SEC);
}
printf("-------------------------------------------------------\n");
for(int z=0;z<10;z++)
{
	t1=GetTickCount();
	t2=GetTickCount();
	// wait for smallest change (15ms on windows)
	while(t2<=t1)t2=GetTickCount();
	tt=(t2-t1);
	printf("%17.8f %17.8f %17.8f\n",
		t1/1000.0,
		t2/1000.0,
		tt/1000.0);
}
return 0;
}

Output:

       0.00000000        0.01500000        0.01500000
      0.01500000        0.03100000        0.01600000
      0.03100000        0.04600000        0.01500000
      0.04600000        0.06200000        0.01600000
      0.06200000        0.07800000        0.01600000
      0.07800000        0.09300000        0.01500000
      0.09300000        0.10900000        0.01600000
      0.10900000        0.12500000        0.01600000
      0.12500000        0.14000000        0.01500000
      0.14000000        0.15600000        0.01600000
-------------------------------------------------------
  22676.37500000    22676.39000000        0.01500000
  22676.39000000    22676.40600000        0.01600000
  22676.40600000    22676.42100000        0.01500000
  22676.42100000    22676.43700000        0.01600000
  22676.43700000    22676.45300000        0.01600000
  22676.45300000    22676.46800000        0.01500000
  22676.46800000    22676.48400000        0.01600000
  22676.48400000    22676.50000000        0.01600000
  22676.50000000    22676.51500000        0.01500000
  22676.51500000    22676.53100000        0.01600000

I don't see what new this brings to the subject. 15 msec is the resolution, as your example also shows.

 

About that article I have no further comments; it's written by Johan Nilsson at the Swedish Space Center.

AV MX Linux


The new thing is that clock() starts at 0 when the program starts, while GetTickCount() counts from when the computer was started.

So when you handle seconds as float or double, especially with float, you will get accuracy problems very soon.

Also new is that on Linux clock() has 10 ms accuracy, while the Windows functions don't work at all on Linux; they aren't part of ANSI standard C++:

       0.00000000        0.01000000        0.01000000
      0.01000000        0.02000000        0.01000000
      0.02000000        0.03000000        0.01000000
      0.03000000        0.04000000        0.01000000
      0.04000000        0.05000000        0.01000000
      0.05000000        0.06000000        0.01000000
      0.06000000        0.07000000        0.01000000
      0.07000000        0.08000000        0.01000000
      0.08000000        0.09000000        0.01000000
      0.09000000        0.10000000        0.01000000
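To put a number on the precision point: a float has only about 7 significant decimal digits, so once the values reach tens of thousands of seconds (like the GetTickCount() numbers above) the representable step is already coarser than one millisecond. A quick check with made-up values of that size:

#include <stdio.h>

int main()
{
	// ~22676 seconds of uptime, 1 ms apart (made-up values of the same size as above)
	float  f1 = 22676.375f, f2 = 22676.376f;
	double d1 = 22676.375,  d2 = 22676.376;

	printf("float  difference: %.6f\n", f2 - f1);	// not a clean 0.001
	printf("double difference: %.6f\n", d2 - d1);	// ~0.001000
	return 0;
}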

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


The new thing is that clock() starts at 0 when the program starts, while GetTickCount() counts from when the computer was started.

So when you handle seconds as float or double, especially with float, you will get accuracy problems very soon.

Also new is that on Linux clock() has 10 ms accuracy, while the Windows functions don't work at all on Linux; they aren't part of ANSI standard C++ (output quoted above).


OK, I see. But that's still a small thing compared to the fact that you get the time in 15 msec steps from functions that may mislead you into believing you have a real 1 msec timer. That's my point. About clock() on Linux I have no knowledge, but if it is as you say, that means clock() has different resolutions on Linux and Windows. That would make it a bad candidate then. Good research, Lumooja.

AV MX Linux


There must be a way, surely. If I put a Sleep(1) in my code, surely Windows is not using a hi-res timer for that?

Sleep(1) will give a sleep time between 0 and 15.6 milliseconds. If you test with a loop of Sleep(1) you will get results varying between those values.

Precision is 15 milliseconds on Windows, no matter what method I used. There are ways to trim this to about 10 msec, but 1 msec... no. You can always start tinkering with multimedia timers that use the hardware clock in the sound card to make things better, but that does not always work either. So in the end, 15 msec precision is what you have.

There may be some method to solve this that I don't know about. In that case I would be very pleased, as this is a recurring problem in routines sending audio over the internet, which is something I am working with right now.
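A quick way to see the Sleep(1) behaviour on your own machine (just a sketch; the exact numbers vary, and they change if something else in the process has already raised the timer resolution):

#include <stdio.h>
#include <windows.h>

int main()
{
	// 100 calls to Sleep(1) would take about 100 ms if Sleep really slept 1 ms,
	// but at the default ~15.6 ms timer resolution it typically takes far longer.
	DWORD t1 = GetTickCount();
	for (int i = 0; i < 100; i++) Sleep(1);
	DWORD t2 = GetTickCount();
	printf("100 x Sleep(1) took %lu ms\n", t2 - t1);
	return 0;
}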

AV MX Linux


That sounds bizarre because a 15 msec measurement is enormous in rendering time. But it wouldn't surprise me.

 

How am I able to measure elapsed times of 1-2 msec? Wouldn't all my measurements be rounded off to the nearest 15 msec?

My job is to make tools you love, with the features you want, and performance you can't live without.


BlitzMax has 1ms accuracy, so there must be a better way still:

SuperStrict
Local t1:Double, t2:Double, tt:Double;
For Local z:Int=0 To 9
	t1=MilliSecs();
	t2=MilliSecs();
	While(t2<=t1)
		t2=MilliSecs();
	Wend
	tt=t2-t1;
	Print t1/1000:Double+" "+t2/1000:Double+" "+tt/1000:Double;
Next
End

Output:

5836.3750000000000 5836.3760000000002 0.0010000000000000000
5836.3760000000002 5836.3770000000004 0.0010000000000000000
5836.3770000000004 5836.3779999999997 0.0010000000000000000
5836.3779999999997 5836.3789999999999 0.0010000000000000000
5836.3789999999999 5836.3800000000001 0.0010000000000000000
5836.3800000000001 5836.3810000000003 0.0010000000000000000
5836.3810000000003 5836.3819999999996 0.0010000000000000000
5836.3819999999996 5836.3829999999998 0.0010000000000000000
5836.3829999999998 5836.3840000000000 0.0010000000000000000
5836.3840000000000 5836.3850000000002 0.0010000000000000000

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


I found some code on the Microsoft site:

http://msdn.microsoft.com/en-us/library/dd743626(v=VS.85).aspx

http://msdn.microsoft.com/en-us/library/dd757629(VS.85).aspx

 

Now you have 1 ms accuracy in C++ too!

I still need to test it on Linux.

#include "stdio.h"
#include "time.h"
#include "windows.h"
#pragma comment(lib,"winmm.lib")

int main()
{
#define TARGET_RESOLUTION 1         // 1-millisecond target resolution

TIMECAPS tc;
UINT     wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) 
{
    // Error; application can't continue.
}

wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
timeBeginPeriod(wTimerRes);

// Actually the above is not needed, since the following command alone works also,
// but maybe it's still better to use the above, who knows how Microsoft has
// designed Windows:
//
// timeBeginPeriod(1);

double t1,t2,tt;
for(int z=0;z<10;z++)
{
	t1=clock();
	t2=clock();
	while(t2<=t1)t2=clock();		// wait for smallest change (15ms on windows)
	tt=(t2-t1);
	printf("%17.8f %17.8f %17.8f\n",
		t1/(double)CLOCKS_PER_SEC,
		t2/(double)CLOCKS_PER_SEC,
		tt/(double)CLOCKS_PER_SEC);
}
printf("-------------------------------------------------------\n");
for(int z=0;z<10;z++)
{
	t1=GetTickCount();
	t2=GetTickCount();
	while(t2<=t1)t2=GetTickCount();	// wait for smallest change (15ms on windows)
	tt=(t2-t1);
	printf("%17.8f %17.8f %17.8f\n",
		t1/1000.0,
		t2/1000.0,
		tt/1000.0);
}
printf("-------------------------------------------------------\n");
for(int z=0;z<10;z++)
{
	t1=timeGetTime();
	t2=timeGetTime();
	while(t2<=t1)t2=timeGetTime();	// wait for smallest change (1ms on windows)
	tt=t2-t1;
	printf("%17.8f %17.8f %17.8f\n",
		t1/1000.0,
		t2/1000.0,
		tt/1000.0);
}
return 0;
}

 

Output:

       0.00000000        0.01500000        0.01500000
      0.01500000        0.03100000        0.01600000
      0.03100000        0.04600000        0.01500000
      0.04600000        0.06200000        0.01600000
      0.06200000        0.07800000        0.01600000
      0.07800000        0.09300000        0.01500000
      0.09300000        0.10900000        0.01600000
      0.10900000        0.12500000        0.01600000
      0.12500000        0.14000000        0.01500000
      0.14000000        0.15600000        0.01600000
-------------------------------------------------------
   7210.39000000     7210.40600000        0.01600000
   7210.40600000     7210.42100000        0.01500000
   7210.42100000     7210.43700000        0.01600000
   7210.43700000     7210.45300000        0.01600000
   7210.45300000     7210.46800000        0.01500000
   7210.46800000     7210.48400000        0.01600000
   7210.48400000     7210.50000000        0.01600000
   7210.50000000     7210.51500000        0.01500000
   7210.51500000     7210.53100000        0.01600000
   7210.53100000     7210.54600000        0.01500000
-------------------------------------------------------
   7210.53400000     7210.53500000        0.00100000
   7210.53500000     7210.53600000        0.00100000
   7210.53600000     7210.53700000        0.00100000
   7210.53700000     7210.53800000        0.00100000
   7210.53800000     7210.53900000        0.00100000
   7210.53900000     7210.54000000        0.00100000
   7210.54000000     7210.54100000        0.00100000
   7210.54100000     7210.54200000        0.00100000
   7210.54200000     7210.54300000        0.00100000
   7210.54300000     7210.54400000        0.00100000

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


Yeah, but by default timeGetTime() runs at 15 ms accuracy, so you need to change the timer accuracy to 1 ms with timeBeginPeriod(1) (or better, the timeGetDevCaps() check above, to make sure 1 ms is actually supported).

There is also another funny thing: when you change the accuracy to 1 ms with timeBeginPeriod(), it doesn't take effect until after a 2 ms delay, but that should not be a real problem, since it can be done when the engine starts.
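Since every timeBeginPeriod() is supposed to be matched by a timeEndPeriod() with the same value, one way to handle the "do it once when the engine starts" part is a small wrapper like this (just a sketch of the pattern):

#include <windows.h>
#pragma comment(lib, "winmm.lib")

// Raises the timer resolution for the lifetime of the object, then restores it.
struct ScopedTimerResolution
{
	UINT period;
	ScopedTimerResolution(UINT ms = 1) : period(ms) { timeBeginPeriod(period); }
	~ScopedTimerResolution()                        { timeEndPeriod(period); }
};

// e.g. create one at the top of main() / engine startup:
// ScopedTimerResolution hires(1);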

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


I found an article on how to do this on Linux too:

http://stackoverflow.com/questions/588307/c-obtaining-milliseconds-time-on-linux-clock-doesnt-seem-to-work-properly

 

Now you have 1ms accuracy on Windows and Linux!

#include <sys/time.h>
#include <stdio.h>

// absolute time in ms at program start, recorded on the first call (64-bit to avoid overflow)
static long long system_starttime = 0;

long timeGetTime()
{
	struct timeval now;
	gettimeofday(&now, NULL);
	long long msec = now.tv_sec*1000LL + now.tv_usec/1000;
	if (system_starttime == 0) system_starttime = msec;	// first call defines time zero
	return (long)(msec - system_starttime);
}

int main()
{
	double t1, t2, tt;
	for (int z = 0; z < 10; z++)
	{
		t1 = timeGetTime();
		t2 = timeGetTime();
		while (t2 <= t1) t2 = timeGetTime();	// wait for smallest change (1 ms)
		tt = t2 - t1;
		printf("%17.8f %17.8f %17.8f\n",
			t1/1000.0,
			t2/1000.0,
			tt/1000.0);
	}
	return 0;
}

Output:

       0.00000000        0.00100000        0.00100000
      0.00100000        0.00200000        0.00100000
      0.00200000        0.00300000        0.00100000
      0.00300000        0.00400000        0.00100000
      0.00400000        0.00500000        0.00100000
      0.00500000        0.00600000        0.00100000
      0.00600000        0.00700000        0.00100000
      0.00700000        0.00800000        0.00100000
      0.00800000        0.00900000        0.00100000
      0.00900000        0.01000000        0.00100000
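For what it's worth, on newer Linux systems the same thing can also be done with clock_gettime(CLOCK_MONOTONIC), which is not affected if the wall clock is adjusted while the program runs (older glibc versions may need -lrt when linking). A sketch; the millisecs() helper here is just for the example:

#include <stdio.h>
#include <time.h>

// Milliseconds from an arbitrary, monotonically increasing starting point.
long millisecs()
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long)(ts.tv_sec*1000LL + ts.tv_nsec/1000000);
}

int main()
{
	long t1 = millisecs();
	long t2 = millisecs();
	while (t2 <= t1) t2 = millisecs();	// wait for the smallest visible change
	printf("smallest step: %ld ms\n", t2 - t1);
	return 0;
}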

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


Oh cool, the same Linux article says at the end that it works the same way on Mac too.

So now you have 1 ms accuracy on Windows, Linux, and Mac!

Basically, since most phones are Linux-based (only the iPhone is Mac-based), you can use the same code for phones too.

Ryzen 9 RX 6800M ■ 16GB XF8 Windows 11 ■
Ultra ■ LE 2.53DWS 5.6  Reaper ■ C/C++ C# ■ Fortran 2008 ■ Story ■
■ Homepage: https://canardia.com ■


I tried your C++ code with a Sleep(1) just after the printing. The result is increments of two milliseconds, still not the 16 that everything else gets.

 

So, when I thought:

 

There must be a way, surely. If I put a Sleep(1) in my code, surely Windows is not using a hi-res timer for that?

 

Sure enough, it really was only sleeping for 1 ms. Just a shame you have to jump through so many hoops to count 1 ms yourself.

 

 

Still, I guess we should all be thanking Josh. If he hadn't asked this question, would this discovery have been posted? And of course Lumooja as well, for actually finding all this out...

LE Version: 2.50 (Eventually)
