We're currently producing all these warnings, which it would be nice to get rid of. I'm not sure whether there are actually any performance implications. Maybe those of you who've worked more with numba know if that's the case (@mathurinm, @Klopfe)? (In which case we really need to do something about it.)
```
/home/gerd-jln/research/slopecd/code/slope/solvers/hybrid.py:71: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 1d, A), array(float64, 1d, A))
  L_archive[k] = (X_reduced[:, k].T @ X_reduced[:, k]) / n_samples
/home/gerd-jln/.pyenv/versions/3.10.2/lib/python3.10/site-packages/numba/core/typing/npydecl.py:913: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 1d, A), array(float64, 1d, A))
  warnings.warn(NumbaPerformanceWarning(msg))
/home/gerd-jln/research/slopecd/code/slope/solvers/hybrid.py:125: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 1d, A), array(float64, 1d, C))
  x = c_old + sum_X @ R / (L_j * n_samples)
/home/gerd-jln/research/slopecd/code/slope/solvers/hybrid.py:142: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 1d, A), array(float64, 1d, A))
  n_c = update_cluster(
/home/gerd-jln/.pyenv/versions/3.10.2/lib/python3.10/site-packages/numba/core/typing/npydecl.py:913: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 1d, A), array(float64, 1d, C))
  warnings.warn(NumbaPerformanceWarning(msg))
```
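If I understand correctly, the warnings fire because a column slice like `X_reduced[:, k]` of a C-ordered 2-D array is a strided view, which numba types as `array(float64, 1d, A)`, so `@` can't take the fast contiguous path. Here is a minimal sketch reproducing and silencing one instance (names like `col_sq_norm` are hypothetical; only the pattern mirrors the line 71 call):

```python
import numpy as np
from numba import njit


@njit
def col_sq_norm(X, k, n_samples):
    x = X[:, k]                 # strided view of a C-ordered array -> 1d, A
    return (x @ x) / n_samples  # triggers NumbaPerformanceWarning


@njit
def col_sq_norm_contig(X, k, n_samples):
    x = np.ascontiguousarray(X[:, k])  # explicit contiguous copy -> 1d, C
    return (x @ x) / n_samples         # no warning, at the cost of a copy


X = np.random.rand(100, 5)            # NumPy arrays are C-ordered by default
print(col_sq_norm(X, 0, 100))         # emits the warning
print(col_sq_norm_contig(X, 0, 100))  # silent
```

So either copying the slices (or storing `X_reduced` in Fortran order, so that column slices come out contiguous) would make the warnings go away, though whether the copy is worth it presumably depends on the sizes involved.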
One solution is of course to write our own dot-product for-loop, but that seems somewhat crude (and maybe we'd shoot ourselves in the foot if there are additional optimization tricks for dot products that we fail to replicate).
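For reference, the hand-rolled version would be as simple as the sketch below (`dot` is a hypothetical helper, not something in the repo). Numba/LLVM usually vectorizes such loops reasonably well, but we'd lose whatever BLAS does beyond that:

```python
from numba import njit


@njit(fastmath=True)  # fastmath lets LLVM reorder the sum for SIMD
def dot(x, y):
    """Plain dot-product loop; works on non-contiguous 1-d views."""
    out = 0.0
    for i in range(x.shape[0]):
        out += x[i] * y[i]
    return out
```

With that, e.g. line 71 would become `L_archive[k] = dot(X_reduced[:, k], X_reduced[:, k]) / n_samples` and the warning disappears since no `@` is involved.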