Open: lucaslanek opened this issue 7 months ago
In GitLab by @aweinstein on May 29, 2024, 05:25
@julioRodino What happens with this? For the record, the formula is
exp(-X^2 / sigma^2)
so the formula should be np.exp(-np.power(X, 2) / float(self.sigma ** 2)). (Why the float? Is np.power(X, 2) better than X ** 2?)
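For context, a minimal standalone sketch of the expression above, assuming X holds the distances and sigma is the kernel width (the function name and the example values are purely illustrative):
import numpy as np

def gaussian_kernel(X, sigma):
    # Expression discussed above: exp(-X^2 / sigma^2), with sigma cast to float
    return np.exp(-np.power(X, 2) / float(sigma ** 2))

# Example: kernel values for a few distances with sigma = 1.0
print(gaussian_kernel(np.array([0.0, 0.5, 1.0]), 1.0))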
In GitLab by @julioRodino on Jun 10, 2024, 13:08
Since we are uploading this repo, should I change the kernel to what is in the formula you put above?
As to why float: it is not necessary, I just prefer to specify the type. It can be deleted.
np.power is supposedly more flexible than **, though ** is faster than np.power:
# %%
import numpy as np

# Random data to square with both approaches
X = np.random.normal(0, 1, (100, 3))
# %%
# Time the ** operator
%timeit -n 1000 X ** 2.
res1 = X ** 2.
# %%
# Time np.power
%timeit -n 1000 np.power(X, 2.)
res2 = np.power(X, 2.)
# %%
# Check that both give the same result
np.testing.assert_array_equal(res1, res2)
Results from **:
6.72 µs ± 338 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Results from np.power:
527 ns ± 68.4 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
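For anyone reproducing this outside IPython, a rough equivalent of the cells above using the standard-library timeit module (a sketch; the array shape and number of runs simply mirror the cells):
import timeit
import numpy as np

X = np.random.normal(0, 1, (100, 3))

# Average time per call over 1000 runs for each spelling
t_operator = timeit.timeit(lambda: X ** 2., number=1000) / 1000
t_np_power = timeit.timeit(lambda: np.power(X, 2.), number=1000) / 1000
print(f"** operator: {t_operator * 1e6:.2f} us per loop")
print(f"np.power:    {t_np_power * 1e6:.2f} us per loop")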
Here someone says that numpy is more precise, though the results in the code above show that the two resulting arrays are the same.
In GitLab by @julioRodino on Jun 10, 2024, 13:16
Interesting... if the numbers require higher precision, the arrays are different. I've fixed the random seed to make the results reproducible, and the standard deviation is 0.01 instead of 1 as in the previous example:
# %%
import numpy as np

# Fix the seed for reproducibility and use much smaller values (std 0.01)
np.random.seed(2)
X = np.random.normal(0, 0.01, (100, 3))
# %%
%timeit -n 1000 X ** 2.
res1 = X ** 2.
# %%
%timeit -n 1000 np.power(X, 2.)
res2 = np.power(X, 2.)
# %%
# With these values the two arrays are expected to differ
np.testing.assert_array_equal(res1, res2)
This should raise an error, since the arrays are not equal. np.power should then be preferred over ** if we want precision over speed.
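A small follow-up sketch that quantifies how far apart the two results actually are, rather than relying only on the assertion (variable names mirror the cells above):
import numpy as np

np.random.seed(2)
X = np.random.normal(0, 0.01, (100, 3))

res1 = X ** 2.
res2 = np.power(X, 2.)

# Largest element-wise discrepancy between the two spellings
print("max abs difference:", np.abs(res1 - res2).max())
# A tolerance-based comparison instead of exact bit-for-bit equality
np.testing.assert_allclose(res1, res2, rtol=1e-12)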
In GitLab by @aweinstein on Jun 11, 2024, 05:49
In summary:
- Use the formula from Shuman, D. I., Narang, S. K., Frossard, P., Ortega, A., & Vandergheynst, P. (2013). The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3), 83-98.
- Use np.power.
- Drop the float. I think it is confusing and makes the expression more verbose. Perhaps you can use self.sigma * 2. (note the decimal point in the number two) to cast self.sigma into a float in case it is an integer.
As a side note, if I'm reading your results correctly, np.power is faster: 527 ns vs 6.72 µs.
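A tiny illustrative snippet of the casting trick mentioned above, with sigma standing in for self.sigma:
# Multiplying an integer by the float literal 2. promotes the result to a float,
# so an explicit float(...) call is not needed.
sigma = 3                # pretend self.sigma is an integer
print(sigma * 2.)        # 6.0
print(type(sigma * 2.))  # <class 'float'>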
In GitLab by @aweinstein on Jun 11, 2024, 05:58
@julioRodino I just found that there exists the function np.square. Probably better than np.power(X, 2), just because it is a little clearer.
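For illustration, a quick check of np.square against the other two spellings (the array is arbitrary random data):
import numpy as np

X = np.random.normal(0, 1, (100, 3))

res_square = np.square(X)    # element-wise square, reads clearly
res_power = np.power(X, 2)   # equivalent computation
res_operator = X ** 2        # operator spelling
print(np.abs(res_square - res_power).max())
print(np.abs(res_square - res_operator).max())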
In GitLab by @aweinstein on Jun 11, 2024, 06:12
Just for future reference, these are some links that discuss the differences between the implementation details of the various power functions:
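To make the comparison concrete, here is a small scalar example of the different power spellings being discussed (the value of x is arbitrary):
import math
import numpy as np

x = 1.2345
print(x ** 2)          # Python's ** operator
print(math.pow(x, 2))  # math.pow from the standard library
print(np.power(x, 2))  # NumPy's power ufunc
print(np.square(x))    # NumPy's dedicated square function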
In GitLab by @julioRodino on Jun 14, 2024, 14:04
created branch 9-gaussian-kernel-transformation-in-nngraph to address this issue
In GitLab by @julioRodino on Mar 21, 2024, 14:35
The formula should be applied as follows:
np.exp(-np.power(X, 2) / float(self.sigma * 2))
where X is any number or array. Currently, however, sigma is not multiplied by 2.
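A minimal sketch of the proposed expression applied to a scalar and to an array, with a free-standing sigma used in place of self.sigma (purely illustrative):
import numpy as np

sigma = 2  # stand-in for self.sigma

def proposed_kernel(X):
    # Expression proposed in this issue: sigma multiplied by 2 in the denominator
    return np.exp(-np.power(X, 2) / float(sigma * 2))

print(proposed_kernel(0.5))                        # works for a single number
print(proposed_kernel(np.array([0.0, 0.5, 1.0])))  # and for an array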