ngushchin / EntropicOTBenchmark

Entropic Optimal Transport Benchmark (NeurIPS 2023).
https://arxiv.org/abs/2306.10161
MIT License

requirements.txt is missing #3

Open stepelu opened 10 months ago

stepelu commented 10 months ago

The README suggests running

pip install -r requirements.txt

but the file does not appear to be included in this repository.

ngushchin commented 10 months ago

The requirements.txt file has been added to the repository.

stepelu commented 10 months ago

Thank you. However, I am now getting the following error:

ERROR: Cannot install -r requirements.txt (line 13), -r requirements.txt (line 4) and torch==2.0.0 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested torch==2.0.0
    lightning 2.0.1.post0 depends on torch<4.0 and >=1.11.0
    torchvision 0.15.2 depends on torch==2.0.1
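If it helps, one likely fix (an assumption on my side, based only on the resolver output above, since I don't know the other pins in the file) is to align the torch pin with what torchvision 0.15.2 requires:

```
torch==2.0.1
torchvision==0.15.2
lightning==2.0.1.post0  # its torch>=1.11.0,<4.0 constraint is satisfied by 2.0.1
```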

On a side note (this might be an issue on my side), I am experiencing problems with some packages (pillow, scikit_learn, ...) that need to be built from source in the required versions on Ubuntu 20.04.6 LTS, as wheels do not seem to be available.

How was the requirements.txt file generated?

Maybe it would be worth updating to the latest versions, if it can be verified that the benchmark results stay consistent.

ngushchin commented 10 months ago

The requirements.txt was generated using pipreqs (https://pypi.org/project/pipreqs/).
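For reference, the generation step was just the standard pipreqs invocation (the path here is illustrative; run it from the repository root):

```shell
pip install pipreqs
# Scan the project's imports and regenerate requirements.txt;
# --force overwrites an existing file.
pipreqs . --force
```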

I fixed the dependency errors in the new requirements.txt and verified that all of these requirements can be downloaded and installed using "pip install -r requirements.txt". As I mentioned in the requirements.txt, "lightning" was only used for the lightning version of the ENOT baseline, which we ended up not using for the evaluation (we used the pure PyTorch version). For clarity, and to resolve the dependency issues, I have removed this version of ENOT.

In general, the benchmark code uses only basic math operations from torch and numpy, and all benchmark parameters are downloaded from Google Drive. It is therefore not critical to have exactly the same versions, as long as numpy and torch perform these math operations in the same way across versions.

There is plotting and metric code for some benchmark pairs in the mixtures_benchmark_visualization_eot.ipynb notebook. You can compare its output with what you get in your Python environment. The results should be identical if your torch/numpy versions use the same random generators, and may differ slightly if not.
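As a quick sanity check of the random-generator point, you can compare a few seeded draws against a snapshot produced in a known-good environment (a minimal sketch using numpy only; the notebook itself also uses torch, and the function name here is just illustrative):

```python
import numpy as np

def reference_draws(seed: int = 42, n: int = 5) -> np.ndarray:
    # numpy's default Generator (PCG64) produces the same stream for the
    # same seed within a given numpy version, so identical seeds should
    # give identical draws when the RNG implementation matches.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n)

# Save reference_draws() in one environment and compare it in another;
# a mismatch suggests a different RNG implementation or numpy version.
draws = reference_draws()
```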