I'm programming in Python on Windows and would like to accurately measure the time it takes for a function to run. I have written a function "time_it" that takes another function, runs it, and returns the time it took to run.
import time

def time_it(f, *args):
    # time.clock() has high resolution on Windows
    start = time.clock()
    f(*args)
    # elapsed time, converted to milliseconds
    return (time.clock() - start) * 1000
I call this 1000 times and average the result. (The * 1000 at the end is there to give the answer in milliseconds.)
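In case it helps, the averaging is just a loop like this (a rough sketch; some_function is a stand-in for whatever I'm actually timing, and time_it is the function above):

def some_function():
    # stand-in for the function under test
    total = 0
    for i in range(10000):
        total += i
    return total

# time 1000 runs and average the per-call results (in ms)
runs = [time_it(some_function) for _ in range(1000)]
average_ms = sum(runs) / len(runs)
print(average_ms)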
This function seems to work, but I have this nagging feeling that I'm doing something wrong, and that by doing it this way I'm using more time than the function actually uses when it's running.
Is there a more standard or accepted way to do this?
When I changed my test function to call a print so that it takes longer, my time_it function returns an average of 2.5 ms while cProfile.run('f()') returns an average of 7.0 ms. I figured my function would overestimate the time if anything, so what is going on here?
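For reference, the profiler side of the comparison is just this (again a sketch; f here stands in for my print-based test function):

import cProfile

def f():
    # stand-in test function that takes longer because of the print
    print("timing test")

# a single profiled call, compared against the averaged time_it(f) numbers
cProfile.run('f()')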
One additional note: it is the relative time of functions compared to each other that I care about, not the absolute time, as this will obviously vary depending on hardware and other factors.