c - Computing time on Linux: granularity and precision

**********************Original edit**********************


I am using different kinds of clocks to get the time on Linux systems:

rdtsc, gettimeofday, clock_gettime

and have already read various related questions, but I am still a little confused:


What is the difference between granularity, resolution, precision, and accuracy?


Granularity (or resolution or precision) and accuracy are not the same thing (if I am right ...)

For example, when using clock_gettime, the precision is 10 ms, as I get with:

#include <time.h>

struct timespec res;
clock_getres(CLOCK_REALTIME, &res); /* fills res with the clock's resolution */

and the granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:

#include <unistd.h>
long ticks_per_sec = sysconf(_SC_CLK_TCK); /* kernel tick rate, e.g. 100 */
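
For reference, here is a minimal complete program that prints both values (it assumes nothing beyond a POSIX system):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec res;
    clock_getres(CLOCK_REALTIME, &res); /* advertised resolution of the clock */
    printf("clock_getres: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);
    printf("_SC_CLK_TCK:  %ld ticks/s\n", sysconf(_SC_CLK_TCK));
    return 0;
}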

Accuracy is in nanoseconds, as the code below suggests:

struct timespec gettime_now;
long start_time;      /* tv_nsec value captured at the start */
long time_difference;

clock_gettime(CLOCK_REALTIME, &gettime_now);
/* only valid if start and end fall within the same second */
time_difference = gettime_now.tv_nsec - start_time;

In the link below, I saw that this is the Linux global definition of granularity, and that it is better not to change it:

http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw

So my question is whether the remarks above are right, and also:

a) Can we see what the granularity of rdtsc and gettimeofday is (with a command)?

b) Can we change them (in any way)?
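
For reference, one way to read the TSC is through the compiler intrinsic; this is only a sketch, and it assumes GCC or Clang on x86:

#include <stdio.h>
#include <x86intrin.h>

int main(void)
{
    /* two back-to-back reads show the smallest increment you can observe */
    unsigned long long t1 = __rdtsc();
    unsigned long long t2 = __rdtsc();
    printf("back-to-back rdtsc delta: %llu cycles\n", t2 - t1);
    return 0;
}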


**********************Edit number 2**********************

I have tested some new clocks and would like to share what I found:

a) On the page below, David Terei wrote a fine program that compares various clocks and their performance:

https://github.com/dterei/Scraps/tree/master/c/time

b) I also tested omp_get_wtime, as Raxman suggested, and found nanosecond precision, but not really better than clock_gettime (consistent with what is shown on the page below):

http://msdn.microsoft.com/en-us/library/t3282fe5.aspx

I think it's a Windows-oriented time function.
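
In any case the call itself is simple; a minimal sketch (compile with -fopenmp under GCC):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    double start = omp_get_wtime();
    /* ... code being timed ... */
    double elapsed = omp_get_wtime() - start;
    printf("elapsed: %f s (tick: %f s)\n", elapsed, omp_get_wtick());
    return 0;
}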

Better results are obtained with clock_gettime using CLOCK_MONOTONIC than with CLOCK_REALTIME. That makes sense, because CLOCK_MONOTONIC measures elapsed time from a fixed starting point and is never stepped, while CLOCK_REALTIME tracks the wall clock, which can be adjusted (by NTP, for instance) while you are measuring.
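
A minimal sketch of interval measurement with CLOCK_MONOTONIC, handling both tv_sec and tv_nsec so the difference stays valid across second boundaries (older glibc may need linking with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... code being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                 + (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld ns\n", ns);
    return 0;
}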

c) I also found the Intel function ippGetCpuClocks, but I have not tested it because it is mandatory to register first:

http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/

... or you may use a trial version.


1 Answer

  • Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)

  • Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)

  • Granularity or resolution are about the smallest time interval that the timer can measure. For example, if you have 1 ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.

On Linux, the available timers with increasing granularity are:

  • clock() from <time.h> (20 ms or 10 ms resolution?)

  • gettimeofday() from Posix <sys/time.h> (microseconds)

  • clock_gettime() on Posix (nanoseconds?)
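
You can ask the system for the advertised resolution of most of these; gettimeofday() has no such query, but its interface caps it at 1 microsecond. A minimal sketch, assuming a POSIX system:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;
    /* clock() reports in units of 1/CLOCKS_PER_SEC; the real granularity
       is usually coarser */
    printf("clock() unit:    %f s\n", 1.0 / CLOCKS_PER_SEC);
    /* clock_gettime() clocks advertise their resolution via clock_getres() */
    clock_getres(CLOCK_REALTIME, &res);
    printf("CLOCK_REALTIME:  %ld ns\n", res.tv_nsec); /* assumes < 1 s */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC: %ld ns\n", res.tv_nsec);
    return 0;
}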

In C++, the <chrono> header offers a certain amount of abstraction around this, and std::high_resolution_clock attempts to give you the best possible clock.

