Closed Sosnowsky closed 1 year ago
I can take a look at it after lunch
Changing the method to this indeed solved the issue, but it would be best to know why it happens:
```python
def _drain(self, t: NDArray) -> NDArray:
    if isinstance(self.t_drain, (int, float)):
        times = -(t - self.t_init) / self.t_drain
        return np.exp(times.astype(float))
    return np.exp(-(t - self.t_init) / self.t_drain[np.newaxis, :, np.newaxis])
```
The code doesn't pass all the tests; the following, however, does:

```python
if isinstance(self.t_drain, (int, float)):
    return np.exp(-(t - self.t_init) / float(self.t_drain))
```
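As a sketch of why the explicit `float()` cast helps (the time values below are made up): dividing a float64 array by a Python float keeps the result in float64, and with `t_drain = 1e100` the exponents are tiny, so `np.exp` stays right at 1.0.

```python
import numpy as np

t = np.arange(1.0, 5.0)    # hypothetical float64 time array
t_init = 0.0
t_drain = float(10**100)   # explicit cast: 10**100 fits easily in float64 (max ~1.8e308)

result = np.exp(-(t - t_init) / t_drain)
print(result.dtype)        # float64: no object-dtype fallback
print(result)              # exponents are ~-1e-100, so every entry rounds to 1.0
```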
Interestingly, when I print out:

```python
print(self.t_drain)
print(float(self.t_drain))
```

one receives:

```
10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
1e+100
```
It seems like `numpy.exp` struggles with very large Python integers.
To summarize the issue:

```
❯ python
Python 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.exp(-1e+100)
0.0
>>> np.exp(-10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000)
AttributeError: 'int' object has no attribute 'exp'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: loop of ufunc does not support argument 0 of type int which has no callable exp method
>>>
```
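A sketch of where the chained TypeError comes from: a Python int of this size does not fit in int64, so NumPy ends up with an object-dtype array (forced explicitly below, since the exact fallback rules vary between NumPy versions), and the object loop of `np.exp` looks for an `.exp()` method that `int` does not have.

```python
import numpy as np

# An integer this large cannot be stored in int64, so we force the
# object dtype that NumPy may otherwise fall back to silently.
arr = np.array([-(10**100)], dtype=object)

try:
    np.exp(arr)  # object loop calls .exp() on each element and fails
except TypeError as err:
    print("TypeError:", err)

# Casting to float64 first lets the ufunc use its fast float loop;
# exp(-1e100) then simply underflows to 0.0.
print(np.exp(arr.astype(float)))
```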
Hmm, but the exponent is very close to zero (as the exponent is divided by `self.t_drain`). Setting `t_drain=1e100` is just a hack to simulate without parallel damping. Not sure if it makes any difference that the exponent is very small, and not very large as you mentioned.
```
sosno@sosno-HP-Elite-Dragonfly:~$ python3
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.exp(1e-10)
1.0000000001
>>> np.exp(1e-100)
1.0
>>> np.exp(1e-1000)
1.0
```
I suspect that numpy doesn't always cast `int` to `float`, which leads to this issue for large `int` values:

```
>>> type(1e100)
<class 'float'>
>>> type(1000000000000000000000000000000000000000000000000000000000)
<class 'int'>
```
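The same distinction shows up in the dtypes NumPy infers (a sketch; `10**18` is just an example of an integer that still fits in a machine word):

```python
import numpy as np

print(np.asarray(1e100).dtype)   # float64: Python floats always map to float64
print(np.asarray(10**18).dtype)  # int64: 10**18 still fits in a 64-bit integer
# 10**100 does not fit in any machine integer, so NumPy cannot use a fast
# integer dtype for it, and ufuncs like np.exp fail on the resulting array.
```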
Fixed with PR #62
I think the following code should run:
```python
blob_sp = Blob(
    blob_id=0,
    blob_shape=BlobShapeImpl("gauss"),
    amplitude=1,
    width_prop=1,
    width_perp=1,
    velocity_x=1,
    velocity_y=1,
    pos_x=0,
    pos_y=6,
    t_init=0,
    t_drain=10**100,
)

x = 0
y = 0
times = np.arange(1, 5, 0.01)

mesh_x, mesh_y, mesh_t = np.meshgrid(x, y, times)
blob_values = blob_sp.discretize_blob(
    x=mesh_x, y=mesh_y, t=mesh_t, periodic_y=True, Ly=10
)
```
However, it throws a TypeError. It can be fixed by adding `.astype(float)` in the drain function of `Blob`, but I am not sure why it throws here and not in the regular use case.