jdtuck / fdasrsf_python

elastic fda python code
http://research.tetonedge.net
BSD 3-Clause "New" or "Revised" License

ValueError: numpy.ndarray size changed #20

Closed vnmabus closed 1 year ago

vnmabus commented 2 years ago

I have the following error when importing optimum_reparamN2 in my automatic tests (https://github.com/GAA-UAM/scikit-fda/runs/4740232727):

ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

I suspect that the error is related to the recent release of NumPy 1.22, but I don't know how to solve it. The full trace is shown below:

 skfda/__init__.py:37: in <module>
    from . import representation, datasets, preprocessing, exploratory, misc, ml, \
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/preprocessing/__init__.py:1: in <module>
    from . import registration
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/preprocessing/registration/__init__.py:9: in <module>
    from ._fisher_rao import ElasticRegistration, FisherRaoElasticRegistration
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/preprocessing/registration/_fisher_rao.py:16: in <module>
    from ...exploratory.stats import fisher_rao_karcher_mean
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/exploratory/__init__.py:2: in <module>
    from . import outliers
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/exploratory/outliers/__init__.py:6: in <module>
    from ._outliergram import OutliergramOutlierDetector
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/exploratory/outliers/_outliergram.py:8: in <module>
    from ..stats import modified_epigraph_index
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/exploratory/stats/__init__.py:1: in <module>
    from ._fisher_rao import _fisher_rao_warping_mean, fisher_rao_karcher_mean
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
skfda/exploratory/stats/_fisher_rao.py:7: in <module>
    from fdasrsf.utility_functions import optimum_reparam
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/fdasrsf/__init__.py:24: in <module>
    from .time_warping import fdawarp, align_fPCA, align_fPLS, pairwise_align_bayes, pairwise_align_functions
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/fdasrsf/time_warping.py:9: in <module>
    import fdasrsf.utility_functions as uf
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/fdasrsf/utility_functions.py:21: in <module>
    import optimum_reparamN2 as orN2
src/optimum_reparamN2.pyx:1: in init optimum_reparamN2
    ???
E   ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
spuyravaud commented 2 years ago

I have the same issue using scikit-fda.

vnmabus commented 2 years ago

Sorry for not updating this issue with my findings. Recent versions of numba place an upper bound on the numpy version. I don't know exactly why, but the installation process first installs the most recent version of numpy and compiles fdasrsf against it. Then, it finds that numba is not compatible with that version and downgrades it. Thus, fdasrsf is compiled against one version of numpy but executed with another, and that causes the error.

I still don't know why that problem arises in the dependency resolution, but as a workaround you can force the installation of an older version of numba before installing scikit-fda using:

pip install numba==0.53
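
For instance, in a clean environment the workaround order would look roughly like this (a sketch; the key point is only that numba is pinned before scikit-fda pulls in fdasrsf):

```
pip install numba==0.53
pip install scikit-fda
```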
spuyravaud commented 2 years ago

@vnmabus What do you suggest to prevent this kind of bug from happening in the future? The code was working and then stopped working.

vnmabus commented 2 years ago

If you are deploying an application, you should pin the versions of your dependencies. Keep a requirements.txt file with the explicit versions that your application needs and that you tested against. If you are developing a library, you need to be more flexible and deal with a range of versions, which may include coding around deprecations or older bugs and waiting some time before you can use new features.
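
As a hedged illustration (package names and versions here are examples only, not recommendations from this thread), such a pinned requirements.txt might look like:

```
# requirements.txt -- exact versions the application was tested against
numpy==1.21.5
scipy==1.7.3
numba==0.53.1
scikit-fda==0.7.1
```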

Nevertheless, the problem stated here shouldn't happen. pip should be able to resolve all versions before compiling the code. Thus a bug in the installation process exists somewhere, either in pip itself (unlikely) or in the way the dependencies are being defined in one of the involved libraries. If anyone finds the reason why, it would be useful to know it.

fbarfi commented 2 years ago

I tried pip install numba==0.53 but it did not solve the issue. I am getting the same error when importing skfda.

vnmabus commented 2 years ago

Did you do that BEFORE installing scikit-fda?

fbarfi commented 2 years ago

I completely uninstalled scikit-fda, used pip install numba==0.53, and then reinstalled scikit-fda. To no avail.

I should mention that I have been using it for many months with no problems before this latest hiccup.

Thanks.


vnmabus commented 2 years ago

I could not reproduce the bug if I install numba==0.53 in a clean conda environment with Python 3.8 before installing fdasrsf (both with pip). If I install fdasrsf directly using pip, the bug appears (thus, it seems that scikit-fda is not involved in the bug). Another option is to install fdasrsf from conda (using the conda-forge channel). That seems to work even with the latest version of numba.
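
For reference, a minimal sketch of that conda-forge route (assuming a conda environment is already active):

```
conda install -c conda-forge fdasrsf
```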

fbarfi commented 2 years ago

I am using Python 3.9.10. Is that the problem? I did not have a problem with it earlier.

I tried using conda-forge, but it did not work. I am using an Apple M1 machine, and the Arm64 version is not ready yet on conda-forge. I was able to use scikit-fda on this machine for months after I installed it from source. I did reinstall it again from source and it installed successfully. However, I cannot import it.


fbarfi commented 2 years ago

Great news!

Found a solution:

Reinstalled numpy 1.22.2 through conda:

Name      Version  Build           Channel
numpy     1.22.2   py39h61a45d2_0  conda-forge
numpydoc  1.2      pypi_0          pypi


Uninstalled scipy 1.7.3 completely and installed scipy 1.8.0:

Name   Version  Build           Channel
scipy  1.8.0    py39h5060c3b_1  conda-forge

skfda is working smoothly!
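
For anyone hitting the same thing, the steps above correspond roughly to commands like these (a sketch based on the package list shown, not the exact commands used):

```
conda install -c conda-forge numpy=1.22.2
conda remove scipy
conda install -c conda-forge scipy=1.8.0
```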

Thanks for your time.


jdtuck commented 2 years ago

Sorry, just getting caught up, as somehow notifications got turned off. But yes, this is the issue, and it has to do with install order and numba restricting numpy. I might just have to restrict the numpy version further. Thoughts, @vnmabus?

vnmabus commented 2 years ago

I think this issue sheds some light on the problem, and also points to an easy solution: https://github.com/pypa/pip/issues/9542. Apparently you can specify oldest-supported-numpy as a build dependency instead of numpy, as explained here: https://numpy.org/devdocs/user/depending_on_numpy.html?highlight=mean#build-time-dependency. As the oldest NumPy ABI is compatible with newer versions, this should fix the problem.
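
A minimal sketch of what that could look like in pyproject.toml (the other build requirements listed here are assumptions for illustration, not taken from fdasrsf's actual configuration):

```
[build-system]
# Build against the oldest ABI-compatible NumPy so the compiled extension
# keeps working when a newer NumPy is installed at runtime.
requires = [
    "setuptools",
    "wheel",
    "Cython",
    "oldest-supported-numpy",
]
build-backend = "setuptools.build_meta"
```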

vnmabus commented 2 years ago

It still does not work, @jdtuck. It first installs oldest-supported-numpy successfully, but then installs a modern numpy version before compiling. I am not sure, but maybe it is because you still have the setup_requires parameter in setup.py (and it has not been changed to oldest-supported-numpy). I would try to use only pyproject.toml to define the dependencies.

jdtuck commented 2 years ago

@vnmabus Okay, I moved everything to pyproject.toml; give that a try. Just released to pip.

vnmabus commented 2 years ago

This still does not work, BUT the error now appears when importing GPy (I don't think this was the case in the first iterations). Looking at the GPy project, they have numpy>=1.7 as a setup_requires instead of oldest-supported-numpy, so if NumPy is not installed they download the newest version and compile against it. Then an older version of NumPy is installed after the compilation (both because of the numba dependency and because you mistakenly changed the runtime dependency to oldest-supported-numpy in the last PR), and as GPy was compiled against the newer one, the ABI is different. Maybe we should open a new issue in GPy.

vnmabus commented 2 years ago

I have opened https://github.com/SheffieldML/GPy/issues/974.

jdtuck commented 2 years ago

So far I have not had this error on Linux or macOS, only on Windows, where it's having a problem with GPy. I think it has to do with it pulling down a wheel. We do need to open an issue, and thank you for doing that.

jdtuck commented 2 years ago

I have opened SheffieldML/GPy#974.

I'm not sure how fast they are going to respond, as I provided a pull request. I use GPs in the Bayesian code and really don't want to depend on all of scikit-learn...

vnmabus commented 2 years ago

The last commit is from 4 months ago, so I wouldn't expect them to merge it fast. I truly hope that the library is not unmaintained, as its predictions were better than scikit-learn's.

mwilson221 commented 1 year ago

I installed fdasrsf today, and got the same error as above.

I'm only interested in elastic registration. Is GPy necessary for this?


ValueError                                Traceback (most recent call last)
Input In [7], in <cell line: 1>()
----> 1 import fdasrsf as fs

File ~\anaconda3\lib\site-packages\fdasrsf\__init__.py:24
     20 # Here we can also check for specific Python 3 versions, if needed
     22 del sys
---> 24 from .time_warping import fdawarp, align_fPCA, align_fPLS, pairwise_align_bayes, pairwise_align_functions
     25 from .time_warping import pairwise_align_bayes_infHMC
     26 from .plot_style import f_plot, rstyle, plot_curve, plot_reg_open_curve, plot_geod_open_curve, plot_geod_close_curve

File ~\anaconda3\lib\site-packages\fdasrsf\time_warping.py:9
      7 import numpy as np
      8 import matplotlib.pyplot as plt
----> 9 import fdasrsf.utility_functions as uf
     10 import fdasrsf.bayesian_functions as bf
     11 import fdasrsf.fPCA as fpca

File ~\anaconda3\lib\site-packages\fdasrsf\utility_functions.py:21
     19 from joblib import Parallel, delayed
     20 import numpy.random as rn
---> 21 import optimum_reparamN2 as orN2
     22 import optimum_reparam_N as orN
     23 import cbayesian as bay

File src\optimum_reparamN2.pyx:1, in init optimum_reparamN2()

ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

jdtuck commented 1 year ago

How did you install it? And yes, it is required for some of the registration (the Bayesian methods).

mwilson221 commented 1 year ago

pip install fdasrsf

I don't think I want any Bayesian alignment. I just want to calculate a distance matrix using the elastic metric.

jdtuck commented 1 year ago

I would install off the GitHub repo using python setup.py install. Also, the Bayesian method is one of the methods to compute the elastic distance, so it's not really easy to break it apart. The problem you are seeing is not directly related to GPy as above, but you have an old numpy and are installing off of pip, which is compiled for your system against a newer numpy. I would recommend using anaconda or installing straight from GitHub to get around this problem.

mwilson221 commented 1 year ago

When I try to run setup.py, I get this error;

git clone https://github.com/jdtuck/fdasrsf_python.git
cd fdasrsf_python
python setup.py install

Traceback (most recent call last):
  File "C:\Users\micha\Documents\GitHub\Other\Python\fdasrsf\fdasrsf_python\setup.py", line 15, in <module>
    import dp_build
ModuleNotFoundError: No module named 'dp_build'

Which appears to be an R package? https://github.com/amashadihossein/dpbuild

I tried pip install dp_build, conda install dp_build, and downloading it directly from GitHub. The GitHub repository I linked to above doesn't have a setup.py.

jdtuck commented 1 year ago

No, it's not an R package. It looks as though you don't have all the requirements installed. What version of Python? Also, do you have all the requirements installed from requirements.txt?

mwilson221 commented 1 year ago

Python 3.9.12

Yes, I just checked that I have all of the requirements.

I was able to pip install, just not import.

jdtuck commented 1 year ago

I can't reproduce this on my end. What OS? Also, try pip install . locally from the checked-out git repo.

mwilson221 commented 1 year ago

Windows 10

This seems to have worked:

git clone https://github.com/jdtuck/fdasrsf_python.git
cd /fdasrsf_python
pip install /fdasrsf_python/

Thanks

avivajpeyi commented 1 year ago

This still doesn't work with the pip-installed version of fdasrsf + py3.9.17 :(

LOGS

```
~/Documents/venvs via 🐍 v3.8.10
❯ python3.9 -m venv pspline_psd

~/Documents/venvs via 🐍 v3.9.17 (pspline_psd)
❯ source psline_psd/bin/activate

~/Documents/venvs via 🐍 v3.9.17 (psline_psd)
❯ pip install fdasrsf
Looking in indexes: https://pypi.org/simple, https://packagecloud.io/eugeny/tabby/pypi/simple
Collecting fdasrsf
  Using cached fdasrsf-2.4.2.tar.gz (4.0 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting numpy
  Using cached numpy-1.25.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)
Collecting patsy
  Using cached patsy-0.5.3-py2.py3-none-any.whl (233 kB)
Collecting six
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting matplotlib
  Using cached matplotlib-3.7.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.6 MB)
Collecting Cython
  Using cached Cython-3.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
Collecting tqdm
  Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting pyparsing
  Downloading pyparsing-3.1.1-py3-none-any.whl (103 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 103.1/103.1 kB 6.2 MB/s eta 0:00:00
Collecting joblib
  Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting cffi>=1.0.0
  Using cached cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (441 kB)
Collecting numba
  Using cached numba-0.57.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.6 MB)
Collecting scipy
  Using cached scipy-1.11.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (36.5 MB)
Collecting pycparser
  Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Collecting fonttools>=4.22.0
  Using cached fonttools-4.42.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5 MB)
Collecting python-dateutil>=2.7
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting kiwisolver>=1.0.1
  Using cached kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.6 MB)
Collecting cycler>=0.10
  Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting packaging>=20.0
  Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting pillow>=6.2.0
  Using cached Pillow-10.0.0-cp39-cp39-manylinux_2_28_x86_64.whl (3.4 MB)
Collecting contourpy>=1.0.1
  Using cached contourpy-1.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (300 kB)
Collecting importlib-resources>=3.2.0
  Using cached importlib_resources-6.0.1-py3-none-any.whl (34 kB)
Collecting pyparsing
  Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting llvmlite<0.41,>=0.40.0dev0
  Using cached llvmlite-0.40.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (42.1 MB)
Collecting numpy
  Using cached numpy-1.24.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Collecting zipp>=3.1.0
  Using cached zipp-3.16.2-py3-none-any.whl (7.2 kB)
Building wheels for collected packages: fdasrsf
  Building wheel for fdasrsf (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for fdasrsf (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [14 lines of output]
      not modified: 'build/_DP.c'
      generating build/_DP.c (already up-to-date)
      Compiling src/optimum_reparamN2.pyx because it changed.
      [1/1] Cythonizing src/optimum_reparamN2.pyx
      src/dp_grid.c: In function ‘dp_costs’:
      src/dp_grid.c:31:19: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
         31 | for ( i=0; i
```

Interestingly, this worked with py3.8

LOGS

```
~/Documents/venvs via 🐍 v3.8.10 (pspline_psd)
❯ pip install fdasrsf
Looking in indexes: https://pypi.org/simple, https://packagecloud.io/eugeny/tabby/pypi/simple
Processing /home/avaj040/.cache/pip/wheels/99/ce/2b/eade834b2834de5ff5a389e231d8a884aa9932cbb2c160dbda/fdasrsf-2.4.2-cp38-cp38-linux_x86_64.whl
Collecting tqdm
  Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting Cython
  Using cached Cython-3.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
Collecting pyparsing
  Using cached pyparsing-3.1.1-py3-none-any.whl (103 kB)
Collecting patsy
  Using cached patsy-0.5.3-py2.py3-none-any.whl (233 kB)
Collecting joblib
  Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting cffi>=1.0.0
  Using cached cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)
Collecting scipy
  Using cached scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB)
Collecting six
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting numpy
  Using cached numpy-1.24.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Collecting matplotlib
  Using cached matplotlib-3.7.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (9.2 MB)
Collecting numba
  Using cached numba-0.57.1-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.6 MB)
Collecting pycparser
  Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Collecting packaging>=20.0
  Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting cycler>=0.10
  Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting fonttools>=4.22.0
  Downloading fonttools-4.42.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.6 MB)
     |████████████████████████████████| 4.6 MB 18.1 MB/s
Collecting pillow>=6.2.0
  Using cached Pillow-10.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.3 MB)
Collecting kiwisolver>=1.0.1
  Using cached kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.2 MB)
Collecting python-dateutil>=2.7
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting importlib-resources>=3.2.0; python_version < "3.10"
  Using cached importlib_resources-6.0.1-py3-none-any.whl (34 kB)
Collecting contourpy>=1.0.1
  Using cached contourpy-1.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (300 kB)
Collecting llvmlite<0.41,>=0.40.0dev0
  Using cached llvmlite-0.40.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (42.1 MB)
Collecting importlib-metadata; python_version < "3.9"
  Using cached importlib_metadata-6.8.0-py3-none-any.whl (22 kB)
Collecting zipp>=3.1.0; python_version < "3.10"
  Using cached zipp-3.16.2-py3-none-any.whl (7.2 kB)
ERROR: matplotlib 3.7.2 has requirement pyparsing<3.1,>=2.3.1, but you'll have pyparsing 3.1.1 which is incompatible.
Installing collected packages: tqdm, Cython, pyparsing, numpy, six, patsy, joblib, pycparser, cffi, scipy, packaging, cycler, fonttools, pillow, kiwisolver, python-dateutil, zipp, importlib-resources, contourpy, matplotlib, llvmlite, importlib-metadata, numba, fdasrsf
Successfully installed Cython-3.0.0 cffi-1.15.1 contourpy-1.1.0 cycler-0.11.0 fdasrsf-2.4.2 fonttools-4.42.0 importlib-metadata-6.8.0 importlib-resources-6.0.1 joblib-1.3.2 kiwisolver-1.4.4 llvmlite-0.40.1 matplotlib-3.7.2 numba-0.57.1 numpy-1.24.4 packaging-23.1 patsy-0.5.3 pillow-10.0.0 pycparser-2.21 pyparsing-3.1.1 python-dateutil-2.8.2 scipy-1.10.1 six-1.16.0 tqdm-4.66.1 zipp-3.16.2
```

jdtuck commented 1 year ago

Yes, it does work on multiple systems and compilers. What version of GCC are you using? Also, this is a build problem isolated to your system; I need more info.