Closed MilesCranmer closed 2 years ago
Hi @MilesCranmer
I am not sure why the install script from the competition branch didn't work. Last time I tested, it worked fine inside conda. It would seem that there is some interference on your system (maybe `LD_LIBRARY_PATH` or something else). But the competition version had some bugs, so I recommend trying a newer version anyway (unless you want that exact same version). Eve requires at least gcc-11 to build.
You could try running `ldd` on the pyoperon dynamic library (that's the `.so` file that is part of the python module) and then `LD_PRELOAD` the `libstdc++.so` version it expects to find, or follow the link chain pyoperon -> operon -> libstdc++ in the /nix/store. Note that you would need to add `LD_PRELOAD` to your python invocation.
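As a concrete sketch of the ldd/LD_PRELOAD suggestion (the nix store path below is a placeholder, and the stdlib's `_ctypes` module stands in for pyoperon's extension module):

```shell
# Illustration only; adjust paths to your own system.
# Locate a compiled python extension (.so). Here the stdlib's _ctypes
# module plays the role of pyoperon's extension:
SO=$(python3 -c 'import _ctypes; print(_ctypes.__file__)')

# Inspect which libstdc++/libc the extension links against:
ldd "$SO" | grep -E 'libstdc\+\+|libc\.so' || true

# If the loader complains about a missing GLIBCXX version, preload a
# newer libstdc++ (placeholder path) for just that invocation:
# LD_PRELOAD=/nix/store/<hash>-gcc-lib/lib/libstdc++.so.6 python3 script.py
```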
With nix, the `cpp20` branch is known to work well as of yesterday. I've been working to simplify the nix experience.

The following `flake.nix` file should just work. Just put it in a folder of your choice and run `nix develop`. Sometimes already-defined environment variables interfere with the nix dev env; if you run into issues, run `nix develop -i` instead. Note that `-i` might wipe some env vars that your shell expects, so you might lose colors etc., but it should be otherwise perfectly functional. Because of this, try first without `-i`.
I have added PySR to the python environment inside the nix shell, this will install the latest version from PyPI (remove from requirements if you don't want that). Adding more packages to that requirements list will make them available in the nix shell.
```nix
{
  description = "jupyter";

  inputs = {
    flake-utils.url = "github:numtide/flake-utils";
    nixpkgs.url = "github:nixos/nixpkgs/master";
    pypi-deps-db.url = "github:DavHau/pypi-deps-db";
    pyoperon.url = "github:heal-research/pyoperon/cpp20";
    mach-nix = {
      url = "github:DavHau/mach-nix";
      inputs = {
        nixpkgs.follows = "nixpkgs";
        flake-utils.follows = "flake-utils";
        pypi-deps-db.follows = "pypi-deps-db";
      };
    };
  };

  outputs = { self, flake-utils, mach-nix, nixpkgs, pypi-deps-db, pyoperon }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        python = "python39";
        pkgs = import nixpkgs { inherit system; };
        mach = import mach-nix { inherit pkgs python; };
        pyop = pyoperon.packages.${system}.default;
        pyEnv = mach.mkPython {
          requirements = ''
            jupyterlab
            matplotlib
            numpy
            pandas
            pmlb
            scikit-learn
            seaborn
            pysr
          '';
          ignoreDataOutdated = true;
        };
      in {
        devShell = mach.nixpkgs.mkShell {
          buildInputs = [ pyEnv pyop ];
          shellHook = ''
            export PYTHONPATH=$PYTHONPATH:${pyop}
          '';
        };
      });
}
```
Hope this works. Best, Bogdan
Hi Bogdan,
Thanks for the quick reply. I set up this `flake.nix` file and ran `nix develop -i`, but am now seeing the following issue:
```
warning: creating lock file '/dev/shm/build_operon/flake.lock'
warning: dumping very large path (> 256 MiB); this may run out of memory
error: builder for '/nix/store/sdidxfg1mvd55bqhckniwg63nnqv559j-python3-3.9.13-env.drv' failed with exit code 25;
last 1 log lines:
> error: collision between `/nix/store/aiafqijq19da1y4ir5hjh6gjghi6m1sc-python3.9-notebook-6.4.12/bin/jupyter-bundlerextension' and `/nix/store/l5bwkfm5283s71k5c2f5i03mgsh8f927-python3.9-nbclassic-0.4.3/bin/jupyter-bundlerextension'
For full logs, run 'nix log /nix/store/sdidxfg1mvd55bqhckniwg63nnqv559j-python3-3.9.13-env.drv'.
error: 1 dependencies of derivation '/nix/store/ixd0dd6fc0jv9mw6ca4b19pac220rbbd-python3-3.9.13-env.drv' failed to build
error: 1 dependencies of derivation '/nix/store/g332760vdhhy6lyn4a51yam3ap4323kh-nix-shell-env.drv' failed to build
```
I checked the log file and it just displays that issue about the collision.
Any idea what this is from?
Thanks! Miles
Ah, wait, I removed the `jupyterlab`, `seaborn`, and `pysr` requirements, and now it works! (Not sure which one in particular was breaking things - although I wouldn't expect pysr to install successfully since it requires Julia too.)
Thanks for the help! Best, Miles
By the way, how do I add common CLI tools like grep, sed, etc., to the nix file?
Thanks! Miles
> Ah, wait, I removed the `jupyterlab`, `seaborn`, and `pysr` requirements, and now it works! (Not sure which one in particular was breaking things - although I wouldn't expect pysr to install successfully since it requires Julia too.) Thanks for the help! Best, Miles
It's an upstream issue. Replace `jupyterlab` with `notebook` and you're set.
> By the way, how do I add common CLI tools like grep, sed, etc., to the nix file?

In my shell sed, grep etc. already exist, but you can try adding `coreutils` to the buildInputs:

```nix
buildInputs = with pkgs; [ coreutils pyEnv pyop ];
```
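GNU grep and sed themselves live in separate nixpkgs packages; assuming the standard attribute names `gnugrep` and `gnused`, the line would become:

```nix
buildInputs = with pkgs; [ coreutils gnugrep gnused pyEnv pyop ];
```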
Thanks!
By the way, how would I tweak `regressor.py` in the srbench competition: https://github.com/cavalab/srbench/blob/2faf1fe54f73027225cd3333012393a25dae6917/official_competitors/operon/regressor.py#L39 to return a list of expressions, rather than a single expression? I am modifying srbench to take into account the full pareto front, rather than just a single expression (which I think makes the metric too sensitive to the selection function). Would I just call `get_pareto_front`?
Thanks! Miles
I tried this:

```python
import operon.pyoperon as op

def models(est, X=None):
    names = X.columns.tolist() if isinstance(X, pd.DataFrame) else None
    front = []
    precision = 4
    for (model, model_vars) in est.best_estimator_.pareto_front_:
        names_map = { v.Hash : names[v.Index] for v in model_vars }
        front.append(op.InfixFormatter.Format(model, names_map, precision))
    return front
```
but it gives me the error:

```
ValueError: too many values to unpack (expected 2)
```
Hi,
Sorry, that's a bug in the wrapper code, I'll fix it. For now you can do this:
```python
from operon import InfixFormatter, FitLeastSquares

decimal_precision = 3
for model, model_vars, model_vals, model_bic in reg.pareto_front_:
    y_pred_train = reg.evaluate_model(model, X_train)
    y_pred_test = reg.evaluate_model(model, X_test)
    scale, offset = FitLeastSquares(y_pred_train, y_train)
    y_pred_train = scale * y_pred_train + offset
    y_pred_test = scale * y_pred_test + offset
    variables = { v.Hash : v.Name for v in model_vars }
    print(model_bic, model.Length, r2_score(y_train, y_pred_train), r2_score(y_test, y_pred_test), InfixFormatter.Format(model, variables, decimal_precision))
```
- `model` is the actual operon tree
- `model_vars` is the list of input variables used by the model
- `model_vals` is the list of fitness values
- `model_bic` is the value of the bayesian information criterion (naively computed after wikipedia)

Works, thanks!
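For reference, `FitLeastSquares` returns the `(scale, offset)` of a simple linear fit of the predictions to the targets. A self-contained sketch of that computation (illustration only, not Operon's actual implementation):

```python
# Fit y_true ≈ scale * y_pred + offset by ordinary least squares,
# using the closed-form solution scale = cov(x, y) / var(x).
def fit_least_squares(y_pred, y_true):
    n = len(y_pred)
    mx = sum(y_pred) / n
    my = sum(y_true) / n
    var = sum((x - mx) ** 2 for x in y_pred)
    cov = sum((x - mx) * (y - my) for x, y in zip(y_pred, y_true))
    scale = cov / var
    offset = my - scale * mx
    return scale, offset

y_pred = [0.0, 1.0, 2.0, 3.0]
y_true = [1.0, 3.0, 5.0, 7.0]   # exactly 2 * y_pred + 1
scale, offset = fit_least_squares(y_pred, y_true)
print(scale, offset)  # → 2.0 1.0
```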
A little more clarification: `reg.pareto_front_` gives the best (first) pareto front of the population. In the single-objective case it will contain a single individual (the best one). Duplicate individuals (w.r.t. fitness values) are removed from the population prior to the non-dominated sorting step. The `epsilon` parameter of the `SymbolicRegressor` controls the tolerance of the equality comparison, so the front will be most dense when `epsilon=0` and less dense for larger values of epsilon. This controls how big the returned list of expressions will be. At the moment there is no "hall of fame" or other kind of reporting mechanism.
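A toy illustration of how an epsilon tolerance merges near-duplicate fitness vectors (hypothetical code, not Operon's actual duplicate-removal logic):

```python
# Keep a point only if it differs from every already-kept point by more
# than eps in at least one coordinate. With eps=0 every distinct fitness
# vector survives; larger eps merges near-duplicates.
def dedup(points, eps):
    kept = []
    for p in points:
        if all(any(abs(a - b) > eps for a, b in zip(p, q)) for q in kept):
            kept.append(p)
    return kept

points = [(1.0, 5.0), (1.001, 5.0), (2.0, 3.0)]
print(len(dedup(points, 0.0)))   # → 3 (all distinct vectors kept)
print(len(dedup(points, 0.01)))  # → 2 (near-duplicates merged)
```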
Thanks!
The way I am using this in the modified benchmark is that I want to see how often the output of `models` contains the "true" equation as one of the elements (i.e., I will look at them by eye). This is to improve robustness vs. only returning a single expression, which relies too strongly on the selection function.
There is another setting called `symbolic_mode` that, when set to True, turns off certain features (real-valued coefficient mutations and coefficient tuning) in an attempt to produce "nicer", more physical equations. It was part of the hyperparameter grid for the competition, but I don't know how helpful it really was.
By the way, do you have any tips for speeding up builds with `nix develop`? It seems to take 12 hours for me, which is a bit expensive when debugging crashes.
Thanks, Miles
I've made a commit which should alleviate this issue (in the `cpp20` branch - just do another `nix flake update` before `nix develop`). Alternatively, you can try the freshly released PyPI wheels https://pypi.org/project/pyoperon/ (the python module has been renamed to pyoperon - `from pyoperon.sklearn import SymbolicRegressor`).
Thanks for setting up a PyPI package. However, for some reason I can't `pip install pyoperon`. I also tried downloading the `.whl` file explicitly from pypi.org, but when I run `pip install pyoperon-0.3.1...whl`, I see the following:

```
ERROR: pyoperon-0.3.1-cp310-cp310-manylinux_2_34_x86_64.whl is not a supported wheel on this platform.
```
Not sure it matters, but here is my setup:

```
> uname -a
Linux worker5166 5.4.203.1.fi #1 SMP Wed Jul 6 14:58:40 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
```
It's an AMD CPU if that matters.
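For context, the `manylinux_2_34` part of the wheel filename encodes the minimum glibc version the wheel requires (per PEP 600), and pip rejects the wheel when the running glibc is older. A rough sketch of that comparison (simplified; pip's real compatibility check is more involved):

```python
# Compare a glibc version string against the minimum encoded in a
# manylinux tag, e.g. manylinux_2_34 -> required glibc >= 2.34.
import platform

def glibc_at_least(ver: str, required=(2, 34)) -> bool:
    return tuple(int(x) for x in ver.split(".")) >= required

print(glibc_at_least("2.28"))  # → False: a manylinux_2_34 wheel is rejected
print(glibc_at_least("2.34"))  # → True

# On a real linux system, compare against the running libc:
libc, ver = platform.libc_ver()
print(libc, ver)
```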
It might be an ABI incompatibility; what does `python -c "import platform; print(platform.libc_ver())"` give you?
```
> python -c 'import platform; print(platform.libc_ver())'
('glibc', '2.28')
```
Is it possible to generate wheels for general glibc?
By the way, I'm not sure what this error is, but I see this when I try to run `nix develop` on the `cpp20` branch:

```
error: flake 'git+file:///mnt/home/mcranmer/pyoperon' does not provide attribute 'devShells.x86_64-linux.devShell.x86_64-linux', 'packages.x86_64-linux.devShell.x86_64-linux', 'legacyPackages.x86_64-linux.devShell.x86_64-linux', 'devShell.x86_64-linux' or 'defaultPackage.x86_64-linux'
```
The master branch builds, but then I see that GLIBCXX issue.
> Is it possible to generate wheels for general glibc?
I think in general, newer versions of glibc should support programs linked against older versions. According to pep-600 and manylinux, it seems that `2.28` is a safe common denominator. I'll release some more wheels once I rebuild the dev environment with a lower glibc.
Great, thanks!
> By the way, I'm not sure what this error is, but I see this when I try to run `nix develop` on the `cpp20` branch:
>
> ```
> error: flake 'git+file:///mnt/home/mcranmer/pyoperon' does not provide attribute 'devShells.x86_64-linux.devShell.x86_64-linux', 'packages.x86_64-linux.devShell.x86_64-linux', 'legacyPackages.x86_64-linux.devShell.x86_64-linux', 'devShell.x86_64-linux' or 'defaultPackage.x86_64-linux'
> ```
>
> The master branch builds, but then I see that GLIBCXX issue.
It works for me with nix version `2.10.3`. But you probably don't want a pyoperon dev shell unless you plan to hack on the code. Otherwise, the jupyter shell from above should be sufficient.

As a workaround you can try to replace `devShells.default = pkgs.mkShell {` at line 76 in flake.nix with `defaultShell = pkgs.mkShell {`
```
$ nix flake show
git+file:///home/bogdb/projects/pyoperon
├───devShells
│   ├───aarch64-darwin
│   │   └───default: development environment 'nix-shell'
│   ├───aarch64-linux
│   │   └───default: development environment 'nix-shell'
│   ├───i686-linux
│   │   └───default: development environment 'nix-shell'
│   ├───x86_64-darwin
│   │   └───default: development environment 'nix-shell'
│   └───x86_64-linux
│       └───default: development environment 'nix-shell'
└───packages
    ├───aarch64-darwin
    │   ├───default: package 'pyoperon'
    │   ├───pyoperon-debug: package 'pyoperon'
    │   └───pyoperon-generic: package 'pyoperon'
    ├───aarch64-linux
    │   ├───default: package 'pyoperon'
    │   ├───pyoperon-debug: package 'pyoperon'
    │   └───pyoperon-generic: package 'pyoperon'
    ├───i686-linux
    │   ├───default: package 'pyoperon'
    │   ├───pyoperon-debug: package 'pyoperon'
    │   └───pyoperon-generic: package 'pyoperon'
    ├───x86_64-darwin
    │   ├───default: package 'pyoperon'
    │   ├───pyoperon-debug: package 'pyoperon'
    │   └───pyoperon-generic: package 'pyoperon'
    └───x86_64-linux
        ├───default: package 'pyoperon'
        ├───pyoperon-debug: package 'pyoperon'
        └───pyoperon-generic: package 'pyoperon'
```
Sorry, this is my first time using `nix` - how would I tweak the `flake.nix` file with the jupyter-only shell?

Changing `devShells.default` to `defaultShell` gave the same issue as before. My nix version is 2.5 - the highest one available in `nix-portable`.
Rev e8af857 should fix this issue (I hope!). The `flake.nix` with the jupyter-only shell takes `pyoperon` as an input, so you would just need to run `nix flake update` again before `nix develop` (in the jupyter shell). If issues still remain, I will be able to take a more thorough look at them tomorrow.
It builds, but for some reason I'm seeing this now. Maybe I'm doing something dumb?
```
> nix-portable nix develop -i --no-write-lock-file -c /bin/bash -c 'python -c "from operon.sklearn import SymbolicRegressor"'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'operon'
```
Maybe the `flake.nix` on cpp20 doesn't set up `PYTHONPATH` or add the required packages?
Sorry, it's my fault for not being clear enough. For consistency reasons the import name is now pyoperon instead of operon, so it should be `from pyoperon.sklearn import SymbolicRegressor`. This is due to convention in how wheels are packaged and named.
The jupyter flake should set the path correctly still.
No worries, thanks. Although unfortunately I'm now getting this error:
```
> nix-portable nix develop -i --no-write-lock-file -c /bin/bash -c 'python -c "from pyoperon.sklearn import SymbolicRegressor"'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/mnt/home/mcranmer/pyoperon/pyoperon/__init__.py", line 4, in <module>
    from .pyoperon import *
ModuleNotFoundError: No module named 'pyoperon.pyoperon'
```
It seems like it was just the fact that I'm inside the repo dir, so I `cd`'d away, but now I see it's still not picking up the PYTHONPATH for some reason:
```
> nix-portable nix develop -i --no-write-lock-file -c /bin/bash -c 'cd / && python -c "from pyoperon.sklearn import SymbolicRegressor"'
warning: Git tree '/mnt/home/mcranmer/pyoperon' is dirty
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pyoperon'
```
> It seems like just the fact that I'm inside the repo dir

That's one issue I failed to consider.

> now I see it's still not picking up the PYTHONPATH for some reason:
The devShell defined in the `flake.nix` from the pyoperon repo does not actually set the PYTHONPATH. This is by design, as I just want a dev environment without installing the pyoperon module somewhere in the nix store (it would need to be built/installed to have something to put in PYTHONPATH). If you just want to consume the python module, then you are not meant to interact with this flake directly.

The other flake from one of my comments above defines a python environment in which pyoperon gets built and added to the PYTHONPATH. If you use that one, then you shouldn't have issues. I know nix flakes can be very confusing at first, but they really are a superior alternative to other packaging schemes. The python wheels should be ready as well, as soon as the entire gcc toolchain and dependencies finish building against glibc-2.28 :)
Thanks, I tried the other flake you sent. The `PYTHONPATH` seems to correctly point to some `/nix/store/7x3qkhg1nvcd79g3dzc0kmnqrlb3fwys-pyoperon/`.
However, I still seem to get the same issue:
```
>>> import pyoperon.sklearn
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pyoperon.sklearn'
```
Any idea?
Thanks, I am definitely interested in learning `nix` after I get this and some other projects out of the way - seems like a nice alternative to `conda`.
Sorry, it seems to be a bug on my side; the .so file simply gets copied to the wrong place. Quick fix:
```shell
mkdir -p /tmp/pyoperon/pyoperon
cp /nix/store/7x3qkhg1nvcd79g3dzc0kmnqrlb3fwys-pyoperon/*.py /tmp/pyoperon/pyoperon
cp /nix/store/7x3qkhg1nvcd79g3dzc0kmnqrlb3fwys-pyoperon/pyoperon/*.so /tmp/pyoperon/pyoperon
export PYTHONPATH=/tmp/pyoperon
```

(replace with your actual nix store path)
```
$ ls /tmp/pyoperon/pyoperon/
__init__.py  __pycache__  pyoperon.cpython-39-x86_64-linux-gnu.so  sklearn.py
```
After this, all the imports work correctly. This will all be fixed in the next release. Unfortunately, packaging cpython modules and mixing together two separate build systems (CMake, setuptools/poetry) is kind of a mess.
EDIT: fixed with 88057042
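The quick fix above works because Python can only resolve `pyoperon.pyoperon` when the extension sits inside the package directory whose parent is on `sys.path`. A minimal stand-in demonstrating that layout rule (the name `mypkg` is hypothetical, and a pure-python module plays the role of the compiled `.so`):

```python
# Build the layout  <root>/mypkg/{__init__.py, core.py}  in a temp dir,
# put <root> on sys.path (what PYTHONPATH does), then import the package.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .core import answer\n")   # like pyoperon/__init__.py
with open(os.path.join(pkg, "core.py"), "w") as f:
    f.write("answer = 42\n")                # stand-in for the .so

sys.path.insert(0, root)  # the parent dir, not the package dir itself
import mypkg
print(mypkg.answer)  # → 42
```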
pyoperon-0.3.2 released on PyPI, tested with a fresh ubuntu/conda image, works with no issues here
Thanks for working on this. Sorry to report this, but the new PyPI release, while it does now actually install, seems to also complain about GLIBC_2.29 - even though it specifies 2.28 in the filename:
```
> python evaluate_method.py --method operon --dataset rydberg --test --seed=1 --version=999
Traceback (most recent call last):
  File "/mnt/home/mcranmer/pysr_paper_syw/srbench-comp/evaluate_method.py", line 38, in <module>
    from regressor import est, eval_kwargs
  File "/mnt/home/mcranmer/pysr_paper_syw/srbench-comp/official_competitors/operon/regressor.py", line 1, in <module>
    from pyoperon.sklearn import SymbolicRegressor
  File "/mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/__init__.py", line 4, in <module>
    from .pyoperon import *
ImportError: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so)
```
Here's the output of running `ldd` on that shared library file:
```
/mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so)
/mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so)
/mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so)
/mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so: /mnt/sw/nix/store/kcrf6n4dmr5blhw2hzfy1j588bri8dzw-gcc-10.3.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so)
/mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so: /mnt/sw/nix/store/kcrf6n4dmr5blhw2hzfy1j588bri8dzw-gcc-10.3.0/lib64/libstdc++.so.6: version `CXXABI_1.3.13' not found (required by /mnt/home/mcranmer/venvs/operon/lib/python3.10/site-packages/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so)
	linux-vdso.so.1 (0x0000155555552000)
	libpython3.10.so.1.0 => /mnt/sw/nix/store/963ciri6rlr0ixw4ikqib5vdsnf24vb4-python-3.10.4-view/lib/libpython3.10.so.1.0 (0x0000155554d6e000)
	libstdc++.so.6 => /mnt/sw/nix/store/kcrf6n4dmr5blhw2hzfy1j588bri8dzw-gcc-10.3.0/lib64/libstdc++.so.6 (0x000015555499a000)
	libm.so.6 => /lib64/libm.so.6 (0x0000155554618000)
	libgcc_s.so.1 => /mnt/sw/nix/store/kcrf6n4dmr5blhw2hzfy1j588bri8dzw-gcc-10.3.0/lib64/libgcc_s.so.1 (0x0000155554400000)
	libc.so.6 => /lib64/libc.so.6 (0x000015555403b000)
	/lib64/ld-linux-x86-64.so.2 (0x0000155555326000)
	libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000155553e12000)
	libintl.so.8 => /mnt/sw/nix/store/4qdr42c3a1ah0cy0nzq5689clc3zxy0y-gettext-0.21/lib/libintl.so.8 (0x0000155553c07000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00001555539e7000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00001555537e3000)
	libutil.so.1 => /lib64/libutil.so.1 (0x00001555535df000)
	libiconv.so.2 => /mnt/sw/nix/store/35hr6qgrcbw7xr9lmhqwijyl4wb9fzdy-libiconv-1.16/lib/libiconv.so.2 (0x00001555532e3000)
```
Edit: fixed output with modules I usually use.
On the bright side, the `nix develop` strategy now seems to work! Thanks for fixing it.
It's a little curious that I see there a mix of system libraries from `/lib64` and nix libraries from the nix store. Are you somehow mixing pip/conda and nix?
Here's the linkage of the `.so` file that is actually included in the wheel:
```
$ ldd pyoperon-0.3.2/pyoperon/pyoperon.cpython-310-x86_64-linux-gnu.so
	linux-vdso.so.1 (0x00007ffe67bf9000)
	libpython3.10.so.1.0 => /nix/store/01z0011c5ad7rylgn4srvk5xirfc1n0h-python3-3.10.6/lib/libpython3.10.so.1.0 (0x00007fcdfce00000)
	libstdc++.so.6 => /nix/store/3ii85jsjaim6mbpl6r1d1q1447n2xvm7-gcc-11.3.0-lib/lib/libstdc++.so.6 (0x00007fcdfca00000)
	libm.so.6 => /nix/store/wmkxak8sjl81bg97g5nxxd2ad29gckfh-glibc-2.28/lib/libm.so.6 (0x00007fcdfd2c8000)
	libgcc_s.so.1 => /nix/store/wmkxak8sjl81bg97g5nxxd2ad29gckfh-glibc-2.28/lib/libgcc_s.so.1 (0x00007fcdfd2ae000)
	libc.so.6 => /nix/store/wmkxak8sjl81bg97g5nxxd2ad29gckfh-glibc-2.28/lib/libc.so.6 (0x00007fcdfc600000)
	/nix/store/wmkxak8sjl81bg97g5nxxd2ad29gckfh-glibc-2.28/lib64/ld-linux-x86-64.so.2 (0x00007fcdfd561000)
	libdl.so.2 => /nix/store/wmkxak8sjl81bg97g5nxxd2ad29gckfh-glibc-2.28/lib/libdl.so.2 (0x00007fcdfd2a7000)
	libcrypt.so.1 => /nix/store/wmkxak8sjl81bg97g5nxxd2ad29gckfh-glibc-2.28/lib/libcrypt.so.1 (0x00007fcdfd26d000)
```
Here's the linkage of the `.so` file on ubuntu-21.10:
```
$ ldd /home/ubuntu/miniconda3/lib/python3.9/site-packages/pyoperon/pyoperon.cpython-39-x86_64-linux-gnu.so
	linux-vdso.so.1 (0x00007ffc687bd000)
	libpython3.9.so.1.0 => /lib/x86_64-linux-gnu/libpython3.9.so.1.0 (0x00007f8f2b400000)
	libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f8f2b000000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8f2ba41000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8f2ba27000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8f2ac00000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f8f2bce6000)
	libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f8f2b9f5000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8f2b9d9000)
```
The nix libraries you see in the `ldd` output are from my institute's cluster's shared software - the admin uses `nix` to build all the modules that people can load. These should not interfere with the ones in my local nix builds, though. E.g., my LD_LIBRARY_PATH is:

```
/mnt/sw/nix/store/wkpj3y31xvn7vzlhfam52mdgf9gqv9mx-zsh-5.8/lib:/mnt/sw/nix/store/i4qqrrhx6cjsr6r2vl06h9fwbzc9qs8p-texlive-20210325/lib:/mnt/sw/nix/store/kj64nyasww6yns2a7ql87ck6ypamvngx-imagemagick-7.0.8-7/lib:/mnt/sw/nix/store/963ciri6rlr0ixw4ikqib5vdsnf24vb4-python-3.10.4-view/lib:/mnt/sw/nix/store/kcrf6n4dmr5blhw2hzfy1j588bri8dzw-gcc-10.3.0/lib64:/mnt/sw/nix/store/kcrf6n4dmr5blhw2hzfy1j588bri8dzw-gcc-10.3.0/lib:/mnt/sw/nix/store/h0ghc4ns7pjfw4hkdb9xwgvng5pib0kw-cudnn-8.2.4.15-11.4/lib64:/mnt/sw/nix/store/bdhdh478f6slibd9zpgmgw8grnqq78im-cuda-11.4.4/lib64:/mnt/sw/nix/store/f0ycdncw8dw4wlicnzm74lgv9c51rlg4-openblas-0.3.20/lib:/cm/shared/apps/slurm/current/lib64
```

which includes various modules I use - gcc, blas, python, texlive, zsh, cuda, cudnn, imagemagick.
I think you simply need a newer `libstdc++` in your path (from gcc-11 or later). This should fix the `GLIBCXX_3.4.29` and `CXXABI_1.3.13` version errors.
```
$ strings /nix/store/3ii85jsjaim6mbpl6r1d1q1447n2xvm7-gcc-11.3.0-lib/lib/libstdc++.so.6 | grep GLIBCXX_3.4.29
GLIBCXX_3.4.29
GLIBCXX_3.4.29
bash-5.1$ strings /nix/store/3ii85jsjaim6mbpl6r1d1q1447n2xvm7-gcc-11.3.0-lib/lib/libstdc++.so.6 | grep CXXABI_1.3.13
CXXABI_1.3.13
CXXABI_1.3.13
```
Hi @foolnotion,

I am trying to run the srbench suite for a paper I am writing on my code PySR, but I am having difficulty setting up operon. I tried the conda build script in srbench for operon at first, but couldn't get it to work due to various issues when building eve - it seems like it is not able to interpret the `float_` and other custom `category::{type}` types without them explicitly being labeled as `category::float_`, etc. Not sure why this error occurs.

I decided it would be too difficult to manually fix those issues, so yesterday I decided instead to try out nix using nix-portable, since this looks like the recommended approach for building operon. I was able to set things up with nix-portable, and things build correctly on my cluster.

However, when I try to actually import operon, I see the following issue:

I took a look at https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html and found that I need gcc 11.1+ loaded for this GLIBCXX version, rather than gcc-10.3.0. So, I did `module unload gcc && module load gcc/11.2.0` to get the correct paths set up. Then I tried again, but now it gives me the following error:

Any idea how to fix this? Thanks! Miles