time in milliseconds by calling time.time()

I am trying to measure some system response time by using the time.time
() or time.clock() in my script.  However, the numbers I get are in
10s of milliseconds.
For example,
1248481670.34   # from time.time()
0.08            # from time.clock()

That won't work for me, since the response time may be only a few
milliseconds.
My environment is Solaris 10 with Python 2.4.4 (#7, Feb  9 2007,
22:10:21).

SunOS 5.10 Generic_137112-07 i86pc i386 i86pc


The tricky thing is, if I run the python interpreter and import the
time module, I can get a time floating number in better precision by
calling time.time().  Do you guys have any suggestion on debugging
this problem?  Or, is there any other module I can try?  Thanks.

$ python
Python 2.4.4 (#7, Feb  9 2007, 22:10:21)
[GCC 3.4.6] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.time()
1248481930.8023829  <--I like this!
>>> time.clock()
0.0
>>>
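The measurement the post describes, bracketing an operation with two time.time() calls and scaling the difference to milliseconds, can be sketched as follows (modern syntax; `measure_ms` and the summing workload are illustrative stand-ins, not from the thread):

```python
import time

def measure_ms(func):
    """Return func's result and its elapsed wall-clock time in milliseconds."""
    start = time.time()
    result = func()
    elapsed_ms = (time.time() - start) * 1000.0
    return result, elapsed_ms

# Hypothetical stand-in for the system call whose response time is measured.
result, ms = measure_ms(lambda: sum(range(100000)))
```

The float returned by time.time() carries sub-millisecond digits even when the default display rounds them away; whether those digits are meaningful depends on the OS clock, as the replies below discuss.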
7/25/2009 12:39:04 AM
comp.lang.python

In article 
<9c600f0c-f4a0-4e8c-bbb9-27f128aecc50@m7g2000prd.googlegroups.com>,
 "scriptlearner@gmail.com" <scriptlearner@gmail.com> wrote:

> I am trying to measure some system response time by using the time.time
> () or time.clock() in my script.  However, the numbers I get are in
> 10s of milliseconds.
> [...]
> The tricky thing is, if I run the python interpreter and import the
> time module, I can get a time floating number in better precision by
> calling time.time().
> [...]
> >>> time.time()
> 1248481930.8023829  <--I like this!

time.time() is returning a float in either case.  The difference you are 
seeing is purely related to how you are printing it; executing a "print" 
statement as opposed to the interactive interpreter printing a value.

Notice:

Roy-Smiths-MacBook-Pro:play$ python
Python 2.5.1 (r251:54863, Feb  6 2009, 19:02:12) 
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> print time.time()
1248484949.75
>>> time.time()
1248484957.151274

and further notice:

>>> x = time.time()
>>> str(x)
'1248485028.58'
>>> repr(x)
'1248485028.5814769'

Keep in mind that while a float may have a large apparent precision, 
there's no promise that the actual value returned by the OS has that much 
precision.  You should be fine if all you're looking for is ms, but I 
wouldn't count on much more than that.
roy (2295)
7/25/2009 1:27:10 AM
On Sat, Jul 25, 2009 at 3:27 AM, Roy Smith<roy@panix.com> wrote:

> Keep in mind that while a float may have a large apparent precision,
> there's no promise that the actual value returned by the OS has that much
> precision.  You should be fine if all you're looking for is ms, but I
> wouldn't count on much more than that.

Even stronger: I wouldn't count on _anything_ more than that. On my
machine, time.time() changes value once per millisecond. I tested this
by looking at a loop that recorded time.time() 100000 times. The total
time in the loop was 61 ms; out of the 100000 numbers, 61 were higher
than the previous one, with the highest difference being 1.00017 ms,
the lowest 0.999928 ms.
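The experiment described above can be reconstructed roughly like this (modern syntax; the 100000-sample count matches the post, the rest is an illustrative sketch, not the original code):

```python
import time

# Record many consecutive timestamps, then count how often the clock
# actually advanced between neighbouring samples.
samples = [time.time() for _ in range(100000)]
ticks = sum(1 for a, b in zip(samples, samples[1:]) if b > a)
total_ms = (samples[-1] - samples[0]) * 1000.0
# On a clock with 1 ms resolution, ticks comes out roughly equal to
# total_ms, which is the effect reported in the post.
```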

-- 
André Engels, andreengels@gmail.com
7/25/2009 5:47:14 AM
On Sat, 25 Jul 2009 07:47:14 +0200, Andre Engels wrote:

> On Sat, Jul 25, 2009 at 3:27 AM, Roy Smith<roy@panix.com> wrote:
> 
>> Keep in mind that while a float may have a large apparent precision,
>> there's no promise that the actual value returned by the OS has that
>> much precision.  You should be fine if all you're looking for is ms,
>> but I wouldn't count on much more than that.
> 
> Even stronger: I wouldn't count on _anything_ more than that. On my
> machine, time.time() changes value once per millisecond. I tested this
> by looking at a loop that recorded time.time() 100000 times. The total
> time in the loop was 61 ms; out of the 100000 numbers, 61 were higher
> than the previous one, with the highest difference being 1.00017 ms, the
> lowest 0.999928 ms.



I'm guessing you're running Windows, because for Windows time.time() has 
a low resolution and time.clock() is the higher resolution timer.

I'm running Linux, which is the opposite:


>>> from time import time, clock
>>> def diffs(alist):
...     deltas = []
...     for i in xrange(1, len(alist)):
...             deltas.append( alist[i] - alist[i-1] )
...     return deltas
... 
>>> d = [time() for i in xrange(10000)]  # grab raw times
>>> dt = diffs(d)  # calculate the difference between each call
>>> max(dt), min(dt)
(0.00060892105102539062, 1.9073486328125e-06)
>>>
>>> d = [clock() for i in xrange(10000)]  # and again using clock()
>>> dc = diffs(d)
>>> max(dc), min(dc)
(0.010000000000000009, 0.0)

More important than the maximum and minimum time deltas is the resolution 
of ticks in each timer. Under Linux, clock() hardly ever gets updated:

>>> len(dc)  # how many time deltas?
9999
>>> len(filter(None, dc))  # how many non-zero deltas?
2

while time() gets updated frequently:

>>> len(dt)
9999
>>> len(filter(None, dt))
9999



See also the comments in the timeit module.
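For instance, timeit's Timer class picks a sensible default timer for the platform and amortises the cost over many repetitions, which sidesteps single-call clock-resolution problems (a minimal sketch in modern syntax; the statement being timed is just a placeholder):

```python
import timeit

# Time 10000 executions of a small statement; timeit chooses the
# platform-appropriate high-resolution timer automatically.
t = timeit.Timer('sum(range(100))')
elapsed = t.timeit(number=10000)        # total seconds for all runs
per_call_ms = elapsed / 10000 * 1000.0  # average cost of one run, in ms
```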



-- 
Steven
steve9679 (1985)
7/25/2009 6:18:23 AM