The answer marked correct on this page is actually not correct. That is not a valid way to write a benchmark, because of JVM dead-code elimination (DCE), on-stack replacement (OSR), loop unrolling, and similar optimizations. Only a framework like Oracle's JMH micro-benchmarking framework can measure something like that properly. Read this post if you have any doubts about the validity of such micro-benchmarks.
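For illustration, this is roughly the kind of naive timing loop that falls into the DCE trap (a hypothetical sketch; the class name and loop count are invented):

    public class NaiveBench {
        public static void main(String[] args) {
            final int ITERATIONS = 1_000_000;
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < ITERATIONS; i++) {
                sum += System.currentTimeMillis();   // result is never consumed
            }
            long elapsed = System.nanoTime() - start;
            System.out.println("avg ns/op: " + (double) elapsed / ITERATIONS);
            // 'sum' is deliberately unused here -- exactly the situation in
            // which the JIT is free to treat the loop body as dead code, so
            // the measured time may reflect nothing at all. JMH avoids this
            // by implicitly consuming values returned from @Benchmark methods.
        }
    }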
Here is a JMH benchmark for System.currentTimeMillis() vs System.nanoTime():
    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Benchmark)
    public class NanoBench {

       @Benchmark
       public long currentTimeMillis() {
          return System.currentTimeMillis();
       }

       @Benchmark
       public long nanoTime() {
          return System.nanoTime();
       }
    }
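If you are not running this through the usual Maven archetype, a minimal main method can launch it. This is a sketch using the standard JMH runner API; the BenchRunner class name is invented, and NanoBench refers to the class above:

    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class BenchRunner {
        public static void main(String[] args) throws RunnerException {
            // Select the NanoBench benchmarks by simple class name and run
            // them with JMH's defaults (forked JVM, warmup iterations, etc.).
            Options opt = new OptionsBuilder()
                    .include(NanoBench.class.getSimpleName())
                    .build();
            new Runner(opt).run();
        }
    }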
And here are the results (on an Intel Core i5):
    Benchmark                             Mode  Samples     Mean  Mean err  Units
    c.z.h.b.NanoBench.currentTimeMillis   avgt       16  122.976     1.748  ns/op
    c.z.h.b.NanoBench.nanoTime            avgt       16  117.948     3.075  ns/op
This shows that System.nanoTime() is slightly faster, at ~118ns per invocation compared to ~123ns for System.currentTimeMillis(). However, once the mean error is taken into account, there is very little difference between the two. The results are also likely to vary by operating system, but the general takeaway is that the two calls are essentially equivalent in overhead.
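Since the overhead is a wash, the choice should come down to semantics: System.nanoTime() is the right call for measuring elapsed intervals (it is monotonic, but has no relation to wall-clock time), while System.currentTimeMillis() gives wall-clock timestamps. A minimal sketch of the interval-measurement idiom, where Thread.sleep is just a stand-in workload:

    public class ElapsedExample {
        public static void main(String[] args) throws InterruptedException {
            long start = System.nanoTime();   // monotonic, for intervals only
            Thread.sleep(50);                 // stand-in for real work
            long elapsed = System.nanoTime() - start;
            System.out.printf("took %.3f ms%n", elapsed / 1_000_000.0);
        }
    }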
UPDATE 2015/08/25: While this answer is closer to correct than most, because it uses JMH to measure, it is still not correct. Measuring something like System.nanoTime() itself is a special kind of twisted benchmarking. The answer and definitive article is here.