rgommers closed this issue 4 years ago.
Here's another report from a week ago that also says removing `ld` solves the issue: https://github.com/conda/conda/issues/6030
There is a tricky issue.
We include `compiler_compat/ld` so that users can use their system compiler to build a Python extension and link against libraries provided by conda packages. The system linker may not be new enough to understand the relocations present in these libraries; for example, see @msarahan's comment in conda/conda#6030.
On the other hand, newer system compilers can produce objects that cannot be understood by `compiler_compat/ld`.
@msarahan, @mingwandroid and I have been talking about the best approach to take here. One option we have been considering is to examine the system linker and use it if it is newer than the `compiler_compat/ld` version.
I'm looking at this now. Another option is to put this linker binary into a separate package that gets recreated each time we update our compilers, then we get on the compiler-rebuild treadmill.
That treadmill isn't too bad; in fact we've already built GCC 8 and GCC 9 packages at Anaconda, but due to gfortran ABI incompatibilities we've been unable to use them.
@rgommers, this is the sharp end of the interface between conda software and the rest of the software world, for sure. Can I ask you though, how common is this, and why is it common? Do you feel it should be? I certainly do not!
Note, I'm not suggesting we shouldn't do our best here, but there will be costs.
To be clear, if people are linking C/C++ Python extension modules using the Anaconda Distribution (or conda-forge) Pythons, then they should be also using our C/C++ compilers.
I believe that is a reasonable statement.
Hi @rgommers and @conda-forge/core (why does this not work?!), @jjhelmus, what do you think of this (it is very much WIP)?
```bash
#!/usr/bin/env bash

function ld_fn()
{
  if [[ ${CC} =~ .*conda.* ]]; then
    # Using conda compilers. Good.
    $(dirname "${BASH_SOURCE[0]}")/ld.bin "$@"
  else
    local _GCC=gcc
    if [[ -n ${CC} ]]; then
      _GCC="${CC}"
    fi
    # Is it just GCC being too new? Not really. It is that GCC is
    # compressing its debugging sections and our linker knows nothing
    # about that. Can we detect whether GCC does that or not instead?
    local GCC_SYSV=$("${_GCC}" --version)
    GCC_SYSV=${GCC_SYSV##* }
    if [[ ${GCC_SYSV} =~ 1[.].* ]] ||
       [[ ${GCC_SYSV} =~ 2[.].* ]] ||
       [[ ${GCC_SYSV} =~ 3[.].* ]] ||
       [[ ${GCC_SYSV} =~ 4[.].* ]] ||
       [[ ${GCC_SYSV} =~ 5[.].* ]] ||
       [[ ${GCC_SYSV} =~ 6[.].* ]] ||
       [[ ${GCC_SYSV} =~ 7[.].* ]]; then
      $(dirname "${BASH_SOURCE[0]}")/ld.bin "$@"
    else
      "${_GCC}" "$@"
    fi
  fi
}

_COMPILER_COMPAT_LD_ARGS=( "$@" )
ld_fn "${_COMPILER_COMPAT_LD_ARGS[@]}"
```
My main concern with this approach is that if anyone attempts to load `ld` as a binary file, or to execute it in some weird way, things could fail. Clearly it needs testing! The other alternative is for us to churn out compilers at a constant rate and split `ld-compat/ld` into a separate package created at each compiler rebuild; still, our old Pythons would remain incompatible with newer systems (and maybe become incompatible at update time) for people pushing at this compatibility boundary .. instead of using our compilers.
To test, rename the existing binary to `ld.bin`, save this as a new `ld`, and set it to be executable.
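For anyone scripting that swap, here is a minimal sketch; the `install_ld_wrapper` name and the `ld-wrapper.sh` file are hypothetical, and `$PREFIX` stands for the environment root:

```shell
# Back up the real linker as ld.bin (the name the wrapper above expects)
# and put the wrapper script in its place, marked executable.
install_ld_wrapper() {
  local prefix=$1 wrapper=$2
  mv "${prefix}/compiler_compat/ld" "${prefix}/compiler_compat/ld.bin"
  install -m 755 "${wrapper}" "${prefix}/compiler_compat/ld"
}

# e.g. install_ld_wrapper "$PREFIX" ld-wrapper.sh
```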
If anyone feels this dynamic behaviour switch could/should be moved to `distutils` then please shout!
Oh @rgommers, since conda-forge is considered our upstream and filing the issue here seems to prevent me from pinging some people, would it be OK for me to move this issue to conda-forge/python-feedstock where it will get more community visibility?
@mingwandroid I like the bash script to dynamically detect the version of gcc and delegate to the appropriate linker.
What do you think about implementing this check in the `_sysconfigdata*.py` file? Some Python logic could examine the CC variable and the gcc version and either load or modify the `build_time_vars` dictionary.
I think the shell script option is more straightforward but wanted to offer an alternative.
Either method should include an override to select either the system or compiler_compat linker, for debugging purposes. Adding a check for a `_CONDA_PYTHON_USE_VENDORED_LD` environment variable or something similar seems like an option.
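For illustration, that override could be as small as a helper like this inside the wrapper; the variable name comes from the comment above, and nothing here is implemented anywhere yet:

```shell
# Return the vendored linker when the override variable is set to 1,
# otherwise whatever linker the normal detection logic selected.
choose_linker() {
  local vendored=$1 selected=$2
  if [ "${_CONDA_PYTHON_USE_VENDORED_LD:-0}" = "1" ]; then
    echo "${vendored}"
  else
    echo "${selected}"
  fi
}

# e.g. choose_linker "${PREFIX}/compiler_compat/ld.bin" /usr/bin/ld
```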
@conda-forge/python for visibility
These are all good ideas @jjhelmus, thank you. I will go ahead and implement it in `sysconfigdata`.
If you want to check out what the shell script version looks like, it will be available as a package fairly soon via:
`conda install -c rdonnelly python=3.7.4`
.. the recipe is here:
[https://github.com/AnacondaRecipes/python-feedstock/tree/master-3.7.14/recipe](https://github.com/AnacondaRecipes/python-feedstock/tree/master-3.7.14/recipe)
My version checking in this shell script version is not good btw.
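One way to firm it up: parse only the first line of `gcc --version` (the `${GCC_SYSV##* }` expansion in the script grabs the last token of the whole multi-line output) and compare the major version numerically. A sketch only, not what any package ships:

```shell
# Extract the major version from `gcc --version` style output: keep the
# first line, take its last whitespace-separated token (e.g. "9.2.0"),
# then the part before the first dot.
major_from_version_output() {
  local first_line last_token
  first_line=$(printf '%s\n' "$1" | head -n 1)
  last_token=${first_line##* }
  echo "${last_token%%.*}"
}

# The regex ladder for versions 1.x through 7.x then collapses to:
needs_vendored_ld() {
  [ "$(major_from_version_output "$1")" -le 7 ]
}
```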
I also wonder whether we should be more feature-oriented in our decision making here? It's some compression format on the `.debug_info` section that we are missing. Should that be the discriminator instead? .. and in future, when we step out of cutting-edge modernity due to some other feature(s), should we add detection for those too, or just go with the "use the system ld when the system GCC is newer (and LD is not set, I suppose!)" approach?
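To make the feature idea concrete: SHF_COMPRESSED shows up as a `C` in the flags column of `readelf -S`, so a discriminator could inspect a freshly compiled object instead of guessing from the compiler version. A sketch only (it takes the `readelf -S` listing on stdin, and the flag matching is deliberately naive):

```shell
# Exit 0 if the `readelf -S some.o` listing on stdin shows a compressed
# .debug_info section. readelf prints each 64-bit section header over two
# lines, with the flags (SHF_COMPRESSED = "C") on the second line.
debug_info_is_compressed() {
  grep -A1 '\.debug_info' | grep -qE '[[:space:]]C[[:space:]]'
}

# e.g. readelf -S probe.o | debug_info_is_compressed && echo "old ld will choke"
```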
I do not mind implementing this feature in both places too, so we can provide more workarounds, switchable via env. vars.
The other alternative is for us to churn out compilers at a constant rate and split ld-compat/ld into a separate package created at each compiler rebuild, still our old Pythons would remain incompatible with newer systems (and maybe become incompatible at update-time) for people pushing at this compatibility boundary .. instead of using our compilers.
I like this, as I don't like the fact that the python package is vendoring `ld`. This essentially makes the conda package for python GPL.
It seems my test python 3.7.4 package with the 'ld as shell script' approach works OK in @rgommers' test case.
A test package which makes these changes in `sysconfigdata` is now also available on the `rdonnelly` channel (so build #0 for the shell script, build #1 for `sysconfigdata`).
This approach works OK in @rgommers' test case too.
Sorry for the slow reply, I was away for a few days. And thanks for the detailed answers!
would it be OK for me to move this issue to conda-forge/python-feedstock where it will get more community visibility?
of course, fine with me
To be clear, if people are linking C/C++ Python extension modules using the Anaconda Distribution (or conda-forge) Pythons, then they should be also using our C/C++ compilers.
I believe that is a reasonable statement.
I don't think it is. I personally am using the conda compilers by default on one of my machines, because I want to make sure things work as expected for `numpy.distutils`. But the conda compilers definitely still are a niche thing with some rough edges (e.g. numpy 1.16.1-1.16.2 don't work at all, the pytorch build warns due to the odd custom compiler name, etc.). Having a standard Linux box with gcc 7/8/9, installing Anaconda (which doesn't contain the compilers by default), and then having a standard `python setup.py install` for any package with a C extension fail unless you do `conda install gcc_linux-64` first doesn't seem that reasonable.
It seems my test python 3.7.4 package with the 'ld as shell script' approach works OK in @rgommers test-case.
thanks, I'll try this!
Having a standard Linux box with gcc 7/8/9, installing Anaconda (which doesn't contain the compilers by default), and then having a standard `python setup.py install` for any package with a C extension fail unless you do `conda install gcc_linux-64` first doesn't seem that reasonable.
We want this to work as well, but we also need to support the conda-provided compilers and systems with older compilers. Having `python setup.py install` work in all three of these cases is challenging given how distutils loads configuration information, the ever-changing nature of compatibility in gcc and binutils, and the variety of Linux distributions where conda is installed. Our recommendation is always going to be to use the conda-provided compilers because these let us remove a large portion of this variability.
@mingwandroid's modifications in the test packages have moved this compatibility forward to a point where I think compiling C extensions should "just work" in the majority of cases, but we are always looking for contributions which will further improve this compatibility.
Hi @rgommers,
No problem at all on the reply. There's always plenty to keep us occupied.
I don't think it is. I personally am using the conda compilers by default on one of my machines, because I want to make sure things work as expected for numpy.distutils
But the conda compilers definitely still are a niche thing with some rough edges (e.g. numpy 1.16.1-1.16.2 don't work at all, pytorch build warns due to the odd custom compiler name, etc.).
Bug reports and/or links to them would be appreciated, sorry if these exist and I've missed them.
They are, in nearly all regards, common-or-garden cross-compilers. In fact, I believe the decision to do this was a good thing for software projects that care about having well-crafted, portable and capable build system metadata (and/or scripts, depending on the exact system). That other tools or libs need some adjustment is often synonymous with them needing to be fixed for cross-compilation in general; when that isn't the case (we have one point of difference on Linux from common-or-garden, but it is minor [1]) we will try to make the build system accommodate us. I think the CMake team may be considering adding support for our compilers, for example. I also believe our compilers (now largely being progressed by @isuruf, thankfully) to be fundamental to a 'better' way of computing than traditional Linux, flatpak or snap (not to mention the other OSes), though I'm not going to try to sell you those arguments here!
Having a standard Linux box with gcc 7/8/9, installing Anaconda (which doesn't contain the compilers by default), and then having a standard python setup.py install for any package with a C extension fail unless you do conda install gcc_linux-64 first doesn't seem that reasonable.
Do you agree with my claim that, since we use shared libraries (and cannot and should not budge on that! Security matters; static linking is insecure and inefficient) and out of necessity we need newer versions of Fortran, C++ and GCC than our minimum supported OSes provide by default (and they will dynamically link to/autoload their transitive system dependencies too, where they too will get loaded preferentially, except when `SONAME` has been mangled by some tool to prevent that particular `strcmp()` - or whatever - from returning `0`), we are forced to build new compilers and new language runtime libraries to go along with those?
If you agree with that, I'd love to hear any ideas for solutions I can implement that do not involve adding compilers as dependencies of our conda Python packages (which is an option still open to us IMHO, maybe via metapackages such as `python-cxx-devel`). In fact, I should note that our `r-base` package does depend upon our compilers on all 3 OSes (MSYS2/mingw-w64 ones on Windows) and I prefer that approach, at least for R. Some people disagree, and may even forcibly uninstall them, but IMHO they are ignoring how operating system loaders work and our recommendations as encoded in our dependencies, and doing that at their own risk (another caveat: we have work to do to improve `conda skeleton cran` around system libs, and I may investigate hooking the install.packages() command some day, as I believe RStudio do, but that's off-tangent here, sorry).
Things need to work and they need to work well. Edge cases are not really something we can devote a lot of time to. We do intend our Pythons to be usable with system compilers when they are newer, ABI-compatible, produce whatever-special-elf-or-dylib-sections-we-need-in-our-binaries, and using them does not pull in incompatible libraries, hence our willingness to explore some pretty egregious hacks like the two I have implemented here.
We cannot be considered completely responsible though for when our software links to "other people's software (and maybe even pulls in a load of other system libs that are incompatible with conda libs)" and things go wrong, therefore I cannot recommend it (but will try to fix it). Still, the message should be loud and clear. This is risky stuff.
BTW, did you try the sysconfig approach as suggested by @jjhelmus in build #1?
[1] This is to do with whether directories passed as -I/-L should have the prefix added to them or not, we say not, because the stuff we point to with -I and -L lives outside the sysroot => also note, we could explore ways to fudge that with bind mounts for example if that made things more common-or-garden, but really, that detail has always been ambiguous anyway and I don't think I added any patches to GCC to change it, we may have had to make a change to ld at some point though. I would also like to ping @nehaljwani on this subject as he looked at compilers more recently than I (in fact, @msarahan did builds of GCC 8 and @nehaljwani did builds of GCC 9 already, and we use those runtime compiler libs from GCC 9 quite happily with software compiled by GCC 7, but we're confident to do that because we are in full control of the stack).
Bug reports and/or links to them would be appreciated, sorry if these exist and I've missed them.
No worries - I have fixed them already. I've kept a log of common issues building NumPy and SciPy with Conda compilers at https://github.com/numpy/numpy/issues/13280. I think this `ld` issue is the only one that's clearly a Conda issue rather than an issue in `numpy.distutils` or another package.
That other tools or libs need some adjustment is often synonymous with them needing to be fixed for cross-compilation in general, when that isn't the case
Fully agree. This is part of why there are rough edges though. Cross-compiling is not commonly done, so it needs quite a bit of dogfooding before it's ready for general use. That's what I alluded to with "rough edges". All of `distutils` isn't really designed properly for cross-compiling.
If you agree with that I'd love to hear any ideas for solutions I can implement
Yes, I agree (and I'm sure you've thought about this much harder than I have).
Not sure it needs extremely complicated solutions - clearer error messages would help a lot. And if you have to choose between supporting older or newer gcc's, it would make more sense (imho) to require newer compilers by default. The `compiler_compat/ld` is now set up for old ones; instead, erroring out on old ones and letting users either upgrade or explicitly install the current `ld` would be an improvement.
we do intend our Pythons to be usable with system compilers if they are newer and ABI and whatever-special-elf-or-dylib-sections-we-need-in-our-binaries and using them does not pull in incompatible libraries, hence our willingness to explore some pretty egregious hacks like the two I have implemented here.
This sounds very reasonable.
BTW, did you try the sysconfig approach as suggested by @jjhelmus in build #1?
not yet, that will be a weekend project
All of distutils isn't really designed properly for cross-compiling
Well, our changes to `sysconfigdata` are specifically meant to address these matters (so long as you use our compilers of course .. but we could think to generalize it some and see if upstream cares for this or has better ideas (beyond a new platform tag, which is something we cannot contemplate)).
Great, thanks for the useful discussion @rgommers. I may catch up with you on the weekend.
Installed `conda install -c rdonnelly python` in a new env.
In the root of the numpy repo, doing `python setup.py build_ext -i` finished, but running the tests or importing numpy fails. The (or a) reason is that the ABI flag looks wrong, for example:
`numpy/random/philox.cpython-@PYVERNODOTS@m-x86_64-linux-gnu.so`
I just had this issue on a new install of miniconda with gcc 9 on Ubuntu.

```
...
/home/eva/miniconda3/compiler_compat/ld: build/temp.linux-x86_64-3.7/regex_3/_regex.o: unable to initialize decompress status for section .debug_info
build/temp.linux-x86_64-3.7/regex_3/_regex.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for regex
Running setup.py clean for regex
Failed to build regex
...
```
I just had this issue on a new install of miniconda with gcc 9 on ubuntu.
You may try to rename `/home/eva/miniconda3/compiler_compat/ld` to `ld_old`, or just delete it. I'm using Arch, and this fixed my problem.
Yeah, I did exactly that, too. I just wanted to chime in so that this issue gets more attention. Truth be told, I have found conda to be a powerful but buggy tool and no longer recommend it to beginners.
This has been fixed in python 3.7.4 already (for a long time). `conda update python=3.7.4` should see you right. Let me know if that's not the case.
@NightMachinary good luck with whatever distro you settle on; they're all buggy (and conda has features far beyond most) - that's the nature of software distributions. The environment is utterly chaotic.
This has not been fixed in Python 3.7.3 (I had just installed miniconda3 on the day I commented). I'm happy with conda, but it remains that it breaks way too often to be suitable for beginners. Simpler tools with humbler ambitions will probably fare better for that use case.
This has been fixed in python 3.7.4 already (for a long time). conda update python=3.7.4 should see you right. Let me know if that's not the case.
Nope, still broken in exactly the same way for python 3.7.4 `h265db76_1`, which is the most recent version as of right now.
Please let me know if you need more details or testing @mingwandroid.
I’m happy with conda, but it remains that it breaks way too often to be suitable for beginners. Simpler tools with humbler ambitions will probably fare better for that use case.
@NightMachinary the same can be said of pretty much any other tool. Just downloading the Anaconda distribution is still a very easy way to get started in Python land. That said, I would like to point out that comments like yours aren't exactly motivating for developers of an open source tool. Sometimes I'm frustrated too (with `conda`, as well as `pip`, `linux`, `macOS`, etc.), but I just take a deep breath and go for a walk, and then try to submit a constructive bug report or suggestion (or just let it go if I don't have time). I kindly suggest you do the same.
True, true. :)
I've been dealing with a gcc problem that I think is related to this thread, and it's not resolved after performing:
`conda install -c rdonnelly python=3.7.4`
I'm using conda 4.7.12 and python 3.7.4.
My problem is that I have two identical .pyx files, based on the result of this command:

```
$ diff test.pyx test1.pyx
```

yet I can import only test.pyx, but not test1.pyx, after running:

```
>>> import cython; import pyximport; pyximport.install()
(None, <pyximport.pyximport.PyxImporter object at 0x7fcacf6919d0>)
```
Both files are just:

```
import numpy
cimport numpy
cimport cython
```
Running this yields:

```
>>> import test1
...
/home/me/.pyxbld/temp.linux-x86_64-3.7/pyrex/test1.c:598:10: fatal error: numpy/arrayobject.h: No such file or directory
 #include "numpy/arrayobject.h"
          ^~~~~~~~~~~~~~~~~~~~~
...
ImportError: Building module test1 failed: ["distutils.errors.CompileError: command 'gcc' failed with exit status 1\n"]
```
If I remember right, I created both test.pyx and test1.pyx in vi, though it's possible I didn't create them within the same virtual environment. I don't understand the importance of this, but I've realized test.pyx doesn't have a hidden .c file in the location where the one that doesn't work has one.

```
me@my_machine:~$ ls .pyxbld/temp.linux-x86_64-3.7/pyrex/ -a
.  ..  test1.c  _test.c  tested.c
```
Is this related to the problem you've all been trying to fix?
@JAngiolillo that's not related, and likely not even a bug. Just delete that generated `test1.c` file. If it still fails, try slimming it down (e.g. does it still fail if you delete `test.pyx`?). And if it's a `pyximport` bug, you will need to ask on the Cython issue tracker.
Deleting the whole `.pyxbld` directory fails to fix it. Recreating the file doesn't either. Somehow that one file test.pyx works whereas every subsequent .pyx fails.
(Also, I think I've realized that `import test` references a test module in the python3.7 directory rather than my local test.pyx file.)
Will move to the Cython issue tracker. Thanks.
I am just passing by to say that I encountered a recurrent problem with miniconda3.
Installing software with pip in miniconda3 uses the `compiler_compat/ld` binary. This binutils binary is incompatible with new packages, such as binutils-2.32, elfutils-0.176 and gcc-8.3.0.
If your system is updated with the packages above and you try to pip install any package that uses GCC with debug information, you get something along the lines of:
```
gcc ... -DNDEBUG ...
unable to initialize decompress status for section .debug_info
```
I solved it by forcing a symbolic link to my system `ld` instead of the defunct and outdated `ld` within the `compiler_compat` folder.
I can confirm that this issue can be solved by renaming `compiler_compat/ld` for doing a `pip install` and then renaming it back. I ran into the issue while building jpype in a conda environment.
Thank you for providing a solution!
Any updates on this? At this point I'd be happy with a simple env var that prevents installing `compiler_compat/ld`, or deletes it at the end of any new env creation.
Another place I came across this: it's in the FAQ of pytorch-geometric.
I'm not sure if this is relevant or applicable here, but I'd just like to point out that I had a similar issue (see https://github.com/AtomDB/pyatomdb/issues/18), and I fixed it by installing Anaconda's most recent compiler before the package installation: `conda install -c anaconda gcc_linux-64`.
The explanation in https://github.com/pytorch/pytorch/issues/16683#issuecomment-459982988 explains why this fix worked, and I would naively think that it is probably also the safer fix in comparison to temporarily renaming conda's own `ld`.
@alexkolo thanks for the suggestion. I know that's an option; it's not really relevant here, since the whole point of `compiler_compat/ld` is to provide compatibility with system compilers.
Any updates on this?
@mingwandroid and I (but mostly Ray) were exploring two possible options.
The first was making the vendored `ld` a shell script that would examine the version of gcc to determine which linker, the system one or the one for compatibility, to call.
The second added similar logic to the _sysconfigdata file itself to strip out the `-B PREFIX/compiler_compat` options depending on the gcc version.
Neither of these was complete enough to include in the packages in `defaults`. If anyone wants to explore those more, they seem promising.
With the current packages it should be possible to specify a custom `sysconfigdata` file by pointing the `_PYTHON_SYSCONFIGDATA_NAME` variable at it.
Alternatively, if the `CC`, `CXX` and `LDSHARED` variables are set, they should replace the defaults which come from the included sysconfigdata file, which contain the `-B PREFIX...` option and cause the use of `compiler_compat/ld`.
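As a toy illustration of the first mechanism (the stub file name and values are invented here; a real replacement would start from a copy of the env's full `_sysconfigdata_*.py` and edit `build_time_vars`):

```shell
# Write a stub sysconfigdata module and point Python at it; sysconfig then
# loads build_time_vars from our module instead of the baked-in one.
cat > _sysconfigdata_custom.py <<'EOF'
build_time_vars = {'CC': 'gcc', 'LDSHARED': 'gcc -shared'}
EOF
PYTHONPATH=. _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_custom \
  python3 -c "import sysconfig; print(sysconfig.get_config_var('CC'))"
```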
3rd option is what conda-forge does. Ship the latest binutils version (2.33.1)
3rd option is what conda-forge does. Ship the latest binutils version (2.33.1)
I really like this idea. I'm curious if it solves the issue. I have not been able to replicate the issue locally recently, so I can't test if updating binutils is sufficient. @rgommers or others, do you have a sample case that I can use to test?
I've been able to reproduce the original issue and can confirm that the conda-forge Python package which has the binutils (2.33.1) does address the issue.
I've been using the following commands as a test. This creates a new conda environment, builds NumPy 1.17.4 and then runs the test suite:

```
conda create -y -n test python=3.7 pip
conda activate test
wget --quiet https://github.com/numpy/numpy/releases/download/v1.17.4/numpy-1.17.4.tar.gz
tar xf numpy-1.17.4.tar.gz
cd numpy-1.17.4
pip install -vvv -e .
pip install pytest
python -c "import numpy; numpy.test()"
```
This fails with packages from `defaults` in an Arch Linux docker container with gcc 9.2.0 and binutils 2.33.1 with:

```
...
gcc -pthread -B /opt/conda/envs/test/compiler_compat -Wl,--sysroot=/ _configtest.o -o _configtest
/opt/conda/envs/test/compiler_compat/ld: _configtest.o: unable to initialize decompress status for section .debug_info
/opt/conda/envs/test/compiler_compat/ld: _configtest.o: unable to initialize decompress status for section .debug_info
/opt/conda/envs/test/compiler_compat/ld: _configtest.o: unable to initialize decompress status for section .debug_info
/opt/conda/envs/test/compiler_compat/ld: _configtest.o: unable to initialize decompress status for section .debug_info
_configtest.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
...
```
If conda-forge's packages are used (`conda create -n test python pip -c conda-forge`), the build and tests pass.
Setting `LDSHARED` and `CC` to override the `-B` versions of these variables from Python's sysconfig file can be done using:

```
export LDSHARED="gcc -pthread -shared -L/opt/conda/envs/test/lib -Wl,-rpath=/opt/conda/envs/test/lib -Wl,--no-as-needed -Wl,--sysroot=/"
export CC="gcc"
```

With these variables set, the build completes and the tests pass using packages from `defaults`.
For another data point, the build and tests pass using packages from either channel if run in a Ubuntu 19.10 docker container with the gcc 9.2.1 and ld 2.33.
The error message itself seems similar to one described on the Gentoo wiki for upgrading binutils 2.32, which suggests this may be a result of a bug in binutils that was fixed in 2.32 (the compiler_compat ld is 2.31.1), along with interactions with elfutils >=0.175.
I'm still puzzled as to why everything works on Ubuntu 19.10.
Similar issue here: my environment's `ld` wasn't as new as my system's `ld`, causing issues like those reported above:

```
/.../miniconda3/envs/my_env/compiler_compat/ld: build/temp.linux-x86_64-3.7/freud/box.o: unable to initialize decompress status for section .debug_info
build/temp.linux-x86_64-3.7/freud/box.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command '/usr/bin/g++' failed with exit status 1
```

Environment `compiler_compat/ld`: GNU ld (crosstool-NG 1.23.0.444-4ea7) 2.31.1
System `ld`: GNU ld (GNU Binutils) 2.33.1

Renaming my environment's `compiler_compat/ld` to `compiler_compat/ld_old` fixed the problem and allowed me to build Cython packages again.
From a user perspective, I hope my process in attempting to resolve this problem can help to improve the corresponding docs. I saw the error came from `.../compiler_compat/ld`, so I realized that my system's `ld` wasn't being used. Then I read through quite a bit of documentation and did several internet searches to figure out if there was a way to update `compiler_compat`. If my understanding is correct, `compiler_compat` is set by the version of `python` in the environment? I didn't know that until finding this thread. If that's accurate, it would be great to mention it in the `compiler_compat/README` file. I thought there would be a package called `compiler_compat`, or that it would be resolved by `conda update --all`.
Also, the `compiler_compat/README` file points to https://github.com/conda/conda/issues/6030, but I think this issue page was more helpful in identifying the problem and solutions.
The latest releases of Python 3.x on the `defaults` channel (3.6.10, 3.7.6 and 3.8.1) use a symlink to the ld executable from the `ld_impl_` package for `compiler_compat/ld`, as conda-forge does. The `ld_impl_` packages are built from the latest release of binutils.
I've tested these Python packages with the Arch + NumPy example from above and there were no linker issues.
I think this is a reasonable fix for this issue; big thanks to @isuruf for working on it in conda-forge.
I'm going to close this issue. If there are still problems with `compiler_compat/ld` with these new python packages, please comment here and I will re-open it.
Thanks @jjhelmus! Apologies for the delayed reply; I confirm it's working as expected now.
@jjhelmus, I think I only fixed 3.7 and 3.8 on conda-forge. Can you send your changes for 3.6?
I've just had this issue when creating an environment. It seems that the environment also has its own `compiler_compat/ld`. I had to rename it to let pip use my system's `ld`, which is newer.
Update: putting `gcc_linux-64` in the list of dependencies in the environment.yml solves the problem.
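For reference, that workaround as an environment file (the env name and version pins here are just an example):

```shell
# An environment.yml that bakes conda's own compiler into the env, so
# builds use it instead of the system gcc + compiler_compat/ld pairing.
cat > environment.yml <<'EOF'
name: myenv
dependencies:
  - python=3.7
  - pip
  - gcc_linux-64
EOF
# then: conda env create -f environment.yml
```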
A brief summary of potential solutions for this issue:

1. Rename the vendored linker (see https://github.com/ContinuumIO/anaconda-issues/issues/11152#issuecomment-573120962):

```
cd /path/to/your/conda/env/compiler_compat/ && mv ld ld.bak
# or just remove it if you don't care.
```

2. Install the latest linker from conda-forge:

```
conda install -c conda-forge ld_impl_linux-64  # modify the suffix to correspond with your platform
```

I have tested both solutions and they do solve my problem.
Actual Behavior
On Linux, installing the Anaconda distribution for Python 3.7 via the regular installer installs `compiler_compat/ld`. That breaks building Python extensions with recent/normal versions of `gcc`, and has done so for a long time. To reproduce, see the steps below. Full build failure:
Expected Behavior
The compile should work.
Steps to Reproduce
See above. I have seen this on a number of Linux versions (Arch, Antergos, Manjaro at least) and GCC versions.
Anaconda or Miniconda version:
Anaconda3-2019.03-Linux-x86_64.sh
Operating System:
64-bit Linux (Arch, but happens on other distros as well)
conda info
conda list --show-channel-urls