Here's an example which issues the same warning:
import numpy as np
np.seterr(all='warn')    # ask NumPy to emit warnings for overflow and other error conditions
A = np.array([10])       # default integer dtype; int32 on the platform discussed below
a = A[-1]                # a NumPy integer scalar
a**a                     # 10**10 overflows that scalar type
yields
RuntimeWarning: overflow encountered in long_scalars
In the example above it happens because a is of dtype int32, and the maximum value storable in an int32 is 2**31-1. Since 10**10 > 2**31-1, the exponentiation results in a number bigger than what can be stored in an int32.
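As a sketch of how to avoid the warning in this particular case (assuming the default int32 dtype shown above), either pick a wider integer dtype or fall back on Python's arbitrary-precision int:

import numpy as np
np.seterr(all='warn')

A = np.array([10], dtype=np.int64)  # int64 can hold 10**10 (its maximum is 2**63-1)
a = A[-1]
print(a**a)                         # 10000000000, no warning

print(int(a) ** int(a))             # plain Python ints never overflow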
Note that you cannot rely on np.seterr(all='warn') to catch all overflow errors in NumPy. For example, on 32-bit NumPy:
>>> np.multiply.reduce(np.arange(21)+1)
-1195114496
while on 64-bit NumPy:
>>> np.multiply.reduce(np.arange(21)+1)
-4249290049419214848
Both fail without any warning, even though the failure is again due to integer overflow. The correct answer is that 21! equals
In [47]: import math
In [48]: math.factorial(21)
Out[48]: 51090942171709440000L
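One way to get the exact value from NumPy itself (a sketch, not a performance recommendation) is an object-dtype array, which makes np.multiply.reduce operate on Python's arbitrary-precision integers:

>>> import numpy as np
>>> np.multiply.reduce(np.arange(1, 22, dtype=object))
51090942171709440000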
According to NumPy developer Robert Kern:

Unlike true floating point errors (where the hardware FPU sets a flag whenever it does an atomic operation that overflows), we need to implement the integer overflow detection ourselves. We do it on the scalars, but not arrays because it would be too slow to implement for every atomic operation on arrays.
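A short script illustrating that distinction (a sketch; the exact warning text and wrapped values depend on your NumPy version and platform):

import numpy as np
np.seterr(all='warn')

np.int32(2**30) * np.int32(4)            # scalar operation: emits a RuntimeWarning about overflow
np.array([2**30], dtype=np.int32) * 4    # array operation: wraps around silently with no warning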
So the burden is on you to choose appropriate dtypes so that no operation overflows.
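When picking a dtype, np.iinfo (and np.finfo for floating-point types) reports the representable range, which makes the check explicit; a minimal sketch for the example above:

>>> import numpy as np
>>> np.iinfo(np.int32).max
2147483647
>>> np.iinfo(np.int64).max
9223372036854775807
>>> 10**10 <= np.iinfo(np.int64).max
True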