As a previous answer said, the rate at which jiffies increments is fixed.
The standard way of specifying time for a function that accepts jiffies is using the constant HZ.
That's the abbreviation for Hertz, or the number of ticks per second. On a system with a timer tick set to 1ms, HZ=1000. Some distributions or architectures may use another number (100 used to be common).
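If you want to check what HZ your kernel was built with, a minimal module sketch along these lines will print it (the hz_demo names are made up purely for illustration):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>

static int __init hz_demo_init(void)
{
        /* HZ is a compile-time constant; 1000 / HZ is (roughly) one tick in ms */
        pr_info("HZ = %d, so one jiffy is about %d ms\n", HZ, 1000 / HZ);
        pr_info("jiffies is currently %lu\n", jiffies);
        return 0;
}

static void __exit hz_demo_exit(void)
{
}

module_init(hz_demo_init);
module_exit(hz_demo_exit);
MODULE_LICENSE("GPL");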
So a jiffies count for such a function is normally written in terms of HZ, like this:
schedule_timeout(HZ / 10); /* Timeout after 1/10 second */
In most simple cases, this works fine.
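One caveat with schedule_timeout() itself: it only waits for the requested time if you take the task out of TASK_RUNNING first; otherwise it returns almost immediately. A rough sketch (wait_a_bit is just an illustrative name):

#include <linux/sched.h>
#include <linux/jiffies.h>

static void wait_a_bit(void)
{
        /* schedule_timeout() needs the task state changed first */
        set_current_state(TASK_INTERRUPTIBLE);
        schedule_timeout(HZ / 10);                 /* roughly 1/10 second */

        /* or use the helper that sets the state for you */
        schedule_timeout_interruptible(HZ / 10);
}

The same HZ arithmetic covers the other common conversions: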
2*HZ /* 2 seconds in jiffies */
HZ /* 1 second in jiffies */
foo * HZ /* foo seconds in jiffies */
HZ/10 /* 100 milliseconds in jiffies */
HZ/100 /* 10 milliseconds in jiffies */
bar*HZ/1000 /* bar milliseconds in jiffies */
Those last two have a bit of a problem, however: on a system with a 10 ms timer tick, HZ/100 is 1, and the precision starts to suffer. You may get a delay anywhere between 0.0001 and 1.999 timer ticks (essentially 0-2 ticks, i.e. 0-20 ms here). And if you tried to use HZ/200 on a 10 ms tick system, the integer division would give you 0 jiffies!
So the rule of thumb is: be very careful using HZ for tiny values, i.e. anything approaching a single jiffy.
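Rather than open-coding HZ arithmetic for small values, you can use the conversion helpers from <linux/jiffies.h>, which round up so a nonzero delay never collapses to zero jiffies. A sketch (wait_five_ms is just an illustrative name):

#include <linux/jiffies.h>
#include <linux/sched.h>

static void wait_five_ms(void)
{
        /* msecs_to_jiffies(5) is at least 1 jiffy even when HZ = 100,
           whereas HZ / 200 is 0 */
        schedule_timeout_interruptible(msecs_to_jiffies(5));
}

There is also usecs_to_jiffies(), and for short delays msleep() and usleep_range() from <linux/delay.h> avoid jiffies arithmetic altogether.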
To convert the other way, you would use:
jiffies / HZ /* jiffies to seconds */
jiffies * 1000 / HZ /* jiffies to milliseconds */
Since one jiffy is at least a millisecond, you shouldn't expect anything better than millisecond precision from these conversions.
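For example, to measure how long something took, record jiffies before and after and let jiffies_to_msecs() do the conversion; the unsigned subtraction handles the counter wrapping around. (timed_work and the msleep(25) stand-in are only for illustration.)

#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/jiffies.h>

static void timed_work(void)
{
        unsigned long start = jiffies;

        msleep(25);        /* stand-in for whatever is being timed */

        pr_info("took about %u ms\n", jiffies_to_msecs(jiffies - start));
}

If you need to compare raw jiffies values instead, the time_after()/time_before() macros handle wraparound for you.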